
FN Clarivate Analytics Web of Science

VR 1.0
PT J
AU Taniguchi, A
Taniguchi, T
Cangelosi, A
AF Taniguchi, Akira
Taniguchi, Tadahiro
Cangelosi, Angelo
TI Cross-Situational Learning with Bayesian Generative Models for
Multimodal Category and Word Learning in Robots
SO FRONTIERS IN NEUROROBOTICS
LA English
DT Article
DE Bayesian model; cross-situational learning; lexical acquisition;
multimodal categorization; symbol grounding; word meaning
ID LANGUAGE; ACQUISITION; MEANINGS; MEMORY
AB In this paper, we propose a Bayesian generative model that can form multiple
categories based on each sensory-channel and can associate words with any of the
four sensory-channels (action, position, object, and color). This paper focuses on
cross-situational learning that exploits the co-occurrence between words and
sensory-channel information in situations more complex than those of conventional
cross-situational learning. We conducted a learning scenario using a simulator and
a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that
describes an object of visual attention and an accompanying action to the robot.
The scenario was set as follows: the number of words per sensory-channel was three
or four, and the number of trials for learning was 20 and 40 for the simulator and
25 and 40 for the real robot. The experimental results showed that the proposed
method was able to estimate the multiple categorizations and to learn the
relationships between multiple sensory-channels and words accurately. In addition,
we conducted an action generation task and an action description task based on word
meanings learned in the cross-situational learning scenario. The experimental
results showed that the robot could successfully use the word meanings learned by
using the proposed method.
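A minimal illustrative sketch of the co-occurrence mechanism that cross-situational learning relies on, written in Python; it is not the authors' Bayesian generative model, and the toy utterances, channel names, and scoring rule are assumptions made only for illustration:
from collections import defaultdict

# Each situation pairs an utterance (bag of words) with per-channel percepts.
situations = [
    ({"grasp", "red", "ball"},  {"action": "grasp", "color": "red",  "object": "ball"}),
    ({"push",  "red", "box"},   {"action": "push",  "color": "red",  "object": "box"}),
    ({"grasp", "blue", "box"},  {"action": "grasp", "color": "blue", "object": "box"}),
    ({"push",  "blue", "ball"}, {"action": "push",  "color": "blue", "object": "ball"}),
]

counts = defaultdict(lambda: defaultdict(float))   # counts[word][(channel, value)]
word_totals = defaultdict(float)

for words, percepts in situations:
    for w in words:
        word_totals[w] += 1.0
        for channel, value in percepts.items():
            counts[w][(channel, value)] += 1.0

# Link each word to the channel/value it co-occurs with most consistently.
for w in sorted(word_totals):
    (channel, value), c = max(counts[w].items(), key=lambda kv: kv[1])
    print(f"{w:>5} -> {channel}:{value}  (co-occurrence rate {c / word_totals[w]:.2f})")
In this toy data every word ends up linked to the correct channel even though each single utterance is ambiguous, which is the effect the paper obtains with a proper Bayesian generative model.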
C1 [Taniguchi, Akira; Taniguchi, Tadahiro] Ritsumeikan Univ, Emergent Syst Lab,
Kusatsu, Japan.
[Cangelosi, Angelo] Plymouth Univ, Ctr Robot & Neural Syst, Plymouth, Devon,
England.
RP Taniguchi, A (reprint author), Ritsumeikan Univ, Emergent Syst Lab, Kusatsu,
Japan.
EM a.taniguchi@em.ci.ritsumei.ac.jp
FU JST CREST; JSPS KAKENHI [JP17J07842]; Air Force Office of Scientific
Research [FA9550-15-1-0025]; Air Force Materiel Command, USAF
[FA9550-15-1-0025]; EU H2020 Marie Skłodowska-Curie European Industrial
Doctorate APRIL [674868]
FX This work was partially supported by JST CREST and JSPS KAKENHI Grant
Number JP17J07842. This material is partially based upon work supported
by the Air Force Office of Scientific Research grant number:
FA9550-15-1-0025, Air Force Materiel Command, USAF under Award No.
FA9550-15-1-0025, and by the EU H2020 Marie Skłodowska-Curie European
Industrial Doctorate APRIL (674868).
CR Aly A., 2017, P JOINT IEEE INT C D
Ando Y, 2013, IEEE INT C INT ROBOT, P2272, DOI 10.1109/IROS.2013.6696674
Attamimi M, 2016, ADV ROBOTICS, V30, P806, DOI 10.1080/01691864.2016.1172507
Griffiths T. L., 2003, P NEUR INF PROC SYST, P17
Brown P. F., 1993, Computational Linguistics, V19, P263
Cangelosi A, 2015, INTELL ROBOT AUTON, P1
Celikkanat H., 2014, P JOINT IEEE INT C D
Chen Y., 2016, P JOINT IEEE INT C D
Fontanari JF, 2009, NEURAL NETWORKS, V22, P579, DOI 10.1016/j.neunet.2009.06.010
Fox EB, 2011, ANN APPL STAT, V5, P1020, DOI 10.1214/10-AOAS395
Frank M. C., 2007, P ADV NEUR INF PROC, P457
Frank MC, 2009, PSYCHOL SCI, V20, P578, DOI 10.1111/j.1467-9280.2009.02335.x
Goldwater S, 2009, COGNITION, V112, P21, DOI 10.1016/j.cognition.2009.03.008
Hagiwara Y., 2016, P 13 IFAC IFIP IFORS
HARNAD S, 1990, PHYSICA D, V42, P335, DOI 10.1016/0167-2789(90)90087-6
Heath S, 2016, IEEE T COGN DEV SYST, V8, P3, DOI 10.1109/TAMD.2015.2442619
Heymann J., 2014, P INT C AC SPEECH SI
Hinaut X, 2014, FRONT NEUROROBOTICS, V8, P1, DOI 10.3389/fnbot.2014.00016
Hochreiter S, 1997, NEURAL COMPUT, V9, P1735, DOI 10.1162/neco.1997.9.8.1735
Hornstein J, 2010, STUD COMPUT INTELL, V264, P467
HUBERT L, 1985, J CLASSIF, V2, P193, DOI 10.1007/BF01908075
Imai M, 2007, COGNITIVE SCI, V31, P385, DOI 10.1080/15326900701326436
Jia Y, 2014, ARXIV14085093
Krizhevsky A, 2012, ADV NEURAL INFORM PR, V25, P1097
MARKMAN EM, 1984, COGNITIVE PSYCHOL, V16, P1, DOI 10.1016/0010-0285(84)90002-1
MARKMAN EM, 1988, COGNITIVE PSYCHOL, V20, P121, DOI 10.1016/0010-0285(88)90017-5
Matuszek C, 2012, P 29 INT C MACH LEAR, P1671
Morse AF, 2010, COGNITION IN FLUX, P1362
Murphy KP, 2012, MACHINE LEARNING PRO
Nakamura T., 2016, P IROS WORKSH MACH L
Nakamura T, 2011, IEEE INT C INT ROBOT, P1520, DOI 10.1109/IROS.2011.6048371
Nakamura T, 2011, ADV ROBOTICS, V25, P2189, DOI 10.1163/016918611X595035
Pointeau G, 2014, IEEE T AUTON MENT DE, V6, P200, DOI 10.1109/TAMD.2014.2307342
Qu S., 2008, P C EMP METH NAT LAN, P244
Qu SL, 2010, J ARTIF INTELL RES, V37, P247
Roy DK, 2002, COGNITIVE SCI, V26, P113, DOI 10.1016/S0364-0213(01)00061-1
SETHURAMAN J, 1994, STAT SINICA, V4, P639
Smith K, 2011, COGNITIVE SCI, V35, P480, DOI 10.1111/j.1551-6709.2010.01158.x
Smith L. B., 2010, SPATIAL FDN LANGUAGE, P188, DOI
[10.1093/acprof:oso/9780199553242.001.0001, DOI
10.1093/ACPROF:OSO/9780199553242.001.0001]
Spranger M., 2015, P 24 INT JOINT C ART, P1909
Spranger M, 2015, 5TH INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND
ON EPIGENETIC ROBOTICS (ICDL-EPIROB), P196, DOI 10.1109/DEVLRN.2015.7346140
Steels L., 2012, LANGUAGE GROUNDING R
Stramandinoli F, 2017, AUTON ROBOT, V41, P367, DOI 10.1007/s10514-016-9587-8
Sugiura K, 2011, ADV ROBOTICS, V25, P825, DOI 10.1163/016918611X563328
Taniguchi A., 2016, P 13 IFAC IFIP IFORS
Taniguchi A, 2016, IEEE T COGN DEV SYST, V8, P285, DOI 10.1109/TCDS.2016.2565542
Taniguchi T, 2016, ADV ROBOTICS, V30, P706, DOI 10.1080/01691864.2016.1164622
Taniguchi T, 2016, ADV ROBOTICS, V30, P770, DOI 10.1080/01691864.2016.1159981
Tikhanoff V., 2008, P 8 WORKSH PERF METR, P57, DOI DOI 10.1145/1774674.1774684
TOMASELLO M, 1986, CHILD DEV, V57, P1454, DOI 10.1111/j.1467-8624.1986.tb00470.x
Twomey KE, 2016, INTERACT STUD, V17, P101, DOI 10.1075/is.17.1.05two
Yamada T, 2016, FRONT NEUROROBOTICS, V10, DOI 10.3389/fnbot.2016.00005
Yamada T, 2015, IEEE INT C INT ROBOT, P4179, DOI 10.1109/IROS.2015.7353968
Zhong J, 2017, P INT JOINT C NEUR N
NR 54
TC 0
Z9 0
U1 0
U2 0
PU FRONTIERS MEDIA SA
PI LAUSANNE
PA PO BOX 110, EPFL INNOVATION PARK, BUILDING I, LAUSANNE, 1015,
SWITZERLAND
SN 1662-5218
J9 FRONT NEUROROBOTICS
JI Front. Neurorobotics
PD DEC 19
PY 2017
VL 11
AR 66
DI 10.3389/fnbot.2017.00066
PG 19
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA FQ3ZV
UT WOS:000418298300001
PM 29311888
OA gold
DA 2018-01-22
ER

PT J
AU Ikeda, T
Hirata, M
Kasaki, M
Alimardani, M
Matsushita, K
Yamamoto, T
Nishio, S
Ishiguro, H
AF Ikeda, Takashi
Hirata, Masayuki
Kasaki, Masashi
Alimardani, Maryam
Matsushita, Kojiro
Yamamoto, Tomoyuki
Nishio, Shuichi
Ishiguro, Hiroshi
TI Subthalamic nucleus detects unnatural android movement
SO SCIENTIFIC REPORTS
LA English
DT Article
ID PARKINSONS-DISEASE; UNCANNY VALLEY; BETA OSCILLATIONS; MODULATION;
ATTENTION; CIRCUITS; ATLAS; AREA
AB An android, i.e., a realistic humanoid robot with human-like capabilities, may
induce an uncanny feeling in human observers. The uncanny feeling about an android
has two main causes: its appearance and movement. The uncanny feeling about an
android increases when its appearance is almost human-like but its movement is not
fully natural or comparable to human movement. Even if an android has human-like
flexible joints, its slightly jerky movements cause a human observer to detect
subtle unnaturalness in them. However, the neural mechanism underlying the
detection of unnatural movements remains unclear. We conducted an fMRI experiment
to compare the observation of an android and the observation of a human on which
the android is modelled, and we found differences in the activation pattern of the
brain regions that are responsible for the production of smooth and natural
movement. More specifically, we found that the visual observation of the android,
compared with that of the human model, caused greater activation in the subthalamic
nucleus (STN). When the android's slightly jerky movements are visually observed,
the STN detects their subtle unnaturalness. This finding suggests that the
detection of unnatural movements is attributed to an error signal resulting from a
mismatch between a visual input and an internal model for smooth movement.
C1 [Ikeda, Takashi] Kanazawa Univ, Res Ctr Child Mental Dev, Kanazawa, Ishikawa,
Japan.
[Ikeda, Takashi; Hirata, Masayuki; Matsushita, Kojiro] Osaka Univ, Global Ctr
Med Engn & Informat, Endowed Res Dept Clin Neuroengn, Suita, Osaka, Japan.
[Hirata, Masayuki] Osaka Univ, Med Sch, Dept Neurosurg, Suita, Osaka, Japan.
[Hirata, Masayuki; Yamamoto, Tomoyuki; Ishiguro, Hiroshi] Natl Inst Informat &
Commun Technol, Ctr Informat & Neural Networks CiNet, Suita, Osaka, Japan.
[Hirata, Masayuki; Yamamoto, Tomoyuki; Ishiguro, Hiroshi] Osaka Univ, Suita,
Osaka, Japan.
[Kasaki, Masashi; Alimardani, Maryam; Ishiguro, Hiroshi] Osaka Univ, Grad Sch
Engn Sci, Toyonaka, Osaka, Japan.
[Kasaki, Masashi] Nagoya Univ, Inst Liberal Arts & Sci, Nagoya, Aichi, Japan.
[Alimardani, Maryam] Univ Tokyo, Grad Sch Arts & Sci, Tokyo, Japan.
[Matsushita, Kojiro] Gifu Univ, Grad Sch Engn, Gifu, Japan.
[Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi] Adv Telecommun Res Inst
Int, Kyoto, Japan.
RP Hirata, M (reprint author), Osaka Univ, Global Ctr Med Engn & Informat, Endowed
Res Dept Clin Neuroengn, Suita, Osaka, Japan.; Hirata, M (reprint author), Osaka
Univ, Med Sch, Dept Neurosurg, Suita, Osaka, Japan.; Hirata, M (reprint author),
Natl Inst Informat & Commun Technol, Ctr Informat & Neural Networks CiNet, Suita,
Osaka, Japan.; Hirata, M (reprint author), Osaka Univ, Suita, Osaka, Japan.
EM mhirata@cne-osaka.org
OI Ikeda, Takashi/0000-0001-9550-5033
FU JSPS KAKENHI [25220004]; ImPACT Program of the Council for Science,
Technology and Innovation (Cabinet Office, Government of Japan); Global
COE Program of the "Centre of Human-Friendly Robotics Based on Cognitive
Neuroscience" from the Ministry of Education, Culture, Sports, Science
and Technology, Japan
FX This research was supported in part by the Global COE Program of the
"Centre of Human-Friendly Robotics Based on Cognitive Neuroscience" from
the Ministry of Education, Culture, Sports, Science and Technology,
Japan, by the ImPACT Program of the Council for Science, Technology and
Innovation (Cabinet Office, Government of Japan), and by JSPS KAKENHI
Grant Number 25220004.
CR ALEXANDER GE, 1994, J CLIN NEUROPHYSIOL, V11, P420, DOI 10.1097/00004691-
199407000-00004
Becker-Asano C., 2011, J ARTIF INTELL SOFT, V1, P215
Brett M., 2002, 8 INT C FUNCT MAPP H
Brunenberg EJL, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0039061
Burleigh T. J., 2015, FRONT PSYCHOL, V5
Cheetham M, 2011, FRONT HUM NEUROSCI, V5, DOI 10.3389/fnhum.2011.00126
EKMAN P, 1982, J NONVERBAL BEHAV, V6, P238, DOI 10.1007/BF00987191
Foffani G, 2005, J PHYSIOL-LONDON, V568, P699, DOI 10.1113/jphysiol.2005.089722
Gervais-Bernard H, 2009, J NEUROL, V256, P225, DOI 10.1007/s00415-009-0076-2
Heimer L, 1983, HUMAN BRAIN SPINAL C, P199
Katsyri J, 2015, FRONT PSYCHOL, V6, DOI 10.3389/fpsyg.2015.00390
Krack P, 2003, NEW ENGL J MED, V349, P1925, DOI 10.1056/NEJMoa035275
Kuhn AA, 2006, BRAIN, V129, P695, DOI 10.1093/brain/awh715
Kuhn AA, 2004, BRAIN, V127, P735, DOI 10.1093/brain/awh106
Kumazaki H, 2017, FRONT PSYCHIATRY, V8, DOI 10.3389/fpsyt.2017.00169
Lane RD, 1999, NEUROPSYCHOLOGIA, V37, P989, DOI 10.1016/S0028-3932(99)00017-2
Lopez-Azcarate J, 2010, J NEUROSCI, V30, P6667, DOI 10.1523/JNEUROSCI.5459-
09.2010
Maldjian JA, 2003, NEUROIMAGE, V19, P1233, DOI 10.1016/S1053-8119(03)00169-1
Maldjian JA, 2004, NEUROIMAGE, V21, P450, DOI 10.1016/j.neuroimage.2003.09.032
Marceglia S, 2009, NEUROSCIENCE, V161, P1027, DOI
10.1016/j.neuroscience.2009.04.018
Mink J W, 1993, Curr Opin Neurobiol, V3, P950, DOI 10.1016/0959-4388(93)90167-W
Mori M, 2012, IEEE ROBOT AUTOM MAG, V19, P98, DOI 10.1109/MRA.2012.2192811
Muckli L, 2013, CURR OPIN NEUROBIOL, V23, P195, DOI 10.1016/j.conb.2013.01.020
Nambu A, 2002, NEUROSCI RES, V43, P111, DOI 10.1016/S0168-0102(02)00027-5
Osaka N, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0049053
Piwek L, 2014, COGNITION, V130, P271, DOI 10.1016/j.cognition.2013.11.001
Rosenthal-von der Putten AM, 2014, INT J SOC ROBOT, V6, P67, DOI 10.1007/s12369-
013-0198-7
Saygin AP, 2012, SOC COGN AFFECT NEUR, V7, P413, DOI 10.1093/scan/nsr025
Schindler Sebastian, 2017, Sci Rep, V7, P45003, DOI 10.1038/srep45003
Tan HL, 2014, J NEUROSCI, V34, P16744, DOI 10.1523/JNEUROSCI.3414-14.2014
Thompson JC, 2011, PERCEPTION, V40, P695, DOI 10.1068/p6900
Vuilleumier P, 2005, TRENDS COGN SCI, V9, P585, DOI 10.1016/j.tics.2005.10.011
NR 32
TC 0
Z9 0
U1 0
U2 0
PU NATURE PUBLISHING GROUP
PI LONDON
PA MACMILLAN BUILDING, 4 CRINAN ST, LONDON N1 9XW, ENGLAND
SN 2045-2322
J9 SCI REP-UK
JI Sci Rep
PD DEC 19
PY 2017
VL 7
AR 17851
DI 10.1038/s41598-017-17849-2
PG 7
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA FQ4WQ
UT WOS:000418359600101
PM 29259217
OA gold
DA 2018-01-22
ER

PT J
AU Laumond, J
Benallegue, M
Carpentier, J
Berthoz, A
AF Laumond, Jean-Paul
Benallegue, Mehdi
Carpentier, Justin
Berthoz, Alain
TI The Yoyo-Man
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE Human locomotion; bipedal locomotion; humanoid robotics; synergies;
biomechanics; passive walker
ID HUMAN LOCOMOTION; SPINAL-CORD; WALKING; HEAD; ROBOT; BODY; GAIT; NECK;
DYNAMICS; WALKERS
AB The paper reports on two results arising from a multidisciplinary research action
exploring the motor synergies of anthropomorphic walking. By combining biomechanical,
neurophysiological, and robotics perspectives, the work aims to better understand
human locomotion, with the ambition of better designing bipedal robot architectures.
The research is motivated by the simple observation that humans may stumble when
relying on purely reflex-based locomotion on uneven terrain. The rationale combines
two well-established results from robotics and neuroscience, respectively:
passive robot walkers, which are very efficient in terms of energy consumption,
can be modeled by a simple rotating rimless wheel;
humans and animals stabilize their head when moving.
The seminal hypothesis is then to consider a wheel equipped with a stabilized
mass on top of it as a plausible model of bipedal walking. The two results
presented in the paper support the hypothesis.
1. From a motion capture database of twelve human walkers, we show that the
motions of the feet are organized around a geometric center, which is the center of
mass and, surprisingly, not the hip.
2. After introducing a ground texture model that allows us to quantify the
stability performance of walker control schemes, we show how compass-like passive
walkers are better controlled when equipped with a stabilized 2-degree-of-freedom
moving mass on top of them.
The center of mass and head then play complementary roles that define what we
call the Yoyo-Man. Beyond the two results presented in the paper, the Yoyo-Man
model opens new perspectives to explore the computational foundations of
anthropomorphic walking.
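As background for the rimless-wheel abstraction invoked above, a minimal sketch of the classical rimless-wheel return map (a textbook passive-walker model, not the Yoyo-Man controller; spoke length, inter-spoke angle, slope, and the initial push are illustrative assumptions):
import math

g, l = 9.81, 1.0          # gravity [m/s^2], spoke length [m] (illustrative values)
alpha = math.radians(15)  # half of the inter-spoke angle
gamma = math.radians(4)   # slope angle

def step(omega_start):
    """Angular speed at the start of one stance phase -> start of the next."""
    # Energy gained while the stance spoke pivots from gamma - alpha to gamma + alpha.
    omega_end_sq = omega_start ** 2 + (2 * g / l) * (
        math.cos(gamma - alpha) - math.cos(gamma + alpha))
    # Inelastic spoke impact: conserving angular momentum about the new contact
    # point reduces the angular speed by a factor cos(2 * alpha).
    return math.cos(2 * alpha) * math.sqrt(omega_end_sq)

omega = 0.2               # small initial push [rad/s]
for _ in range(25):
    omega = step(omega)
print(f"steady rolling speed ~ {omega:.3f} rad/s")   # converges to a limit cycle
The fixed point of this map is the passive limit cycle; the paper's contribution is to study what a stabilized mass placed on top of such a walker adds to it.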
C1 [Laumond, Jean-Paul; Carpentier, Justin] Univ Toulouse, LAAS CNRS, CNRS,
Toulouse, France.
[Benallegue, Mehdi] Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki, Japan.
[Berthoz, Alain] Coll France, Paris, France.
RP Laumond, J (reprint author), Univ Toulouse, LAAS CNRS, 7 Ave Colonel Roche BP
54200, F-31031 Toulouse 4, France.
EM jpl@laas.fr
FU European Research Council (ERC) through the Actanthrope project [ERC-ADG
340050]; European project KOROIBOT [FP7/611909]
FX The author(s) disclosed receipt of the following financial support for
the research, authorship, and/or publication of this article: The work
is supported by the European Research Council (ERC) through the
Actanthrope project (ERC-ADG 340050) and by the European project
KOROIBOT (FP7/611909).
CR Ackerman J, 2013, IEEE T ROBOT, V29, P321, DOI 10.1109/TRO.2012.2235698
Akutsu Y, 2014, IEEE INT C INT ROBOT, P4855, DOI 10.1109/IROS.2014.6943252
Alexander RM, 2005, SCIENCE, V308, P58, DOI 10.1126/science.1111110
Arechavaleta G, 2008, AUTON ROBOT, V25, P25, DOI 10.1007/s10514-007-9075-2
Armstrong HG, 1988, TECHNICAL REPORT
ASANO F, 2008, IEEE RSJ IROS SEPT, P2934
Asano F, 2014, MULTIBODY SYST DYN, V31, P111, DOI 10.1007/s11044-013-9367-6
Benallegue M, 2015, HEAD NECK SYST WALK
Benallegue M, 2013, ROB SCI SYST C BERL
Bernstein N, 1967, COORDINATION REGULAT
Berthoz A, 2002, BRAINS SENSE MOVEMEN
Bove M, 2002, J NEUROPHYSIOL, V88, P2232, DOI 10.1152/jn.00198.2002
Byl K, 2009, INT J ROBOT RES, V28, P1040, DOI 10.1177/0278364909340446
Carpentier J, 2016, IEEE T ROBOT, V32, P810, DOI 10.1109/TRO.2016.2572680
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Collins SH, 2001, INT J ROBOT RES, V20, P607, DOI 10.1177/02783640122067561
Dietz V, 2003, CLIN NEUROPHYSIOL, V114, P1379, DOI 10.1016/S1388-2457(03)00120-2
Dumas R, 2007, J BIOMECH, V40, P543, DOI 10.1016/j.jbiomech.2006.02.013
Farkhatdinov Ildar, 2011, 2011 11th IEEE-RAS International Conference on
Humanoid Robots (Humanoids 2011), P147, DOI 10.1109/Humanoids.2011.6100859
Farkhatdinov I, 2013, ROMANSY 19 ROBOT DES, Vvolume 544, P359
Fusco N, 2008, GAIT POSTURE, V28, P663, DOI 10.1016/j.gaitpost.2008.04.016
Goswami A, 1997, AUTON ROBOT, V4, P273, DOI 10.1023/A:1008844026298
Hayot C, 2013, MOVEMENT SPORT SCI M, V4, P99
Hicheur H, 2005, NEUROSCI LETT, V383, P87, DOI 10.1016/j.neulet.2005.03.046
Hobbelen DGE, 2007, IEEE T ROBOT, V23, P1213, DOI 10.1109/TRO.2007.904908
Iida F, 2009, ROBOT AUTON SYST, V57, P139, DOI 10.1016/j.robot.2007.12.001
Imai T, 2001, EXP BRAIN RES, V136, P1, DOI 10.1007/s002210000533
Ivanenko YP, 2008, J NEUROPHYSIOL, V99, P1890, DOI 10.1152/jn.01308.2007
Jiang W, 1996, J NEUROPHYSIOL, V76, P849
Jin HL, 2003, IEEE T NEURAL NETWOR, V14, P317, DOI 10.1109/TNN.2002.806952
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Lajoie Y, 1996, NEUROLOGY, V47, P109, DOI 10.1212/WNL.47.1.109
Latash M. L., 2008, SYNERGY
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
MOMBAUR K, 2010, INTERNATIONAL CONFER, V5, P394
Mombaur K, 2009, ROBOTICA, V27, P321, DOI 10.1017/S0263574708004724
Olivier AH, 2011, COMPUT ANIMAT VIRT W, V22, P421, DOI 10.1002/cav.377
POZZO T, 1990, EXP BRAIN RES, V82, P97
Pratt J, 2006, IEEE-RAS INT C HUMAN, P200, DOI 10.1109/ICHR.2006.321385
Saurel G, 2016, 2016 IEEE INT C SIM
Schwab A, 2001, P ASME DES ENG TECHN, V6, P531
Seyfarth A, 2006, LECT NOTES CONTR INF, V340, P383
Sreenivasa MN, 2009, IEEE INT C INT ROB S, P4451
Stokell R, 2011, MANUAL THER, V16, P394, DOI 10.1016/j.math.2011.01.012
TALKNER P, 1987, J STAT PHYS, V48, P231, DOI 10.1007/BF01010408
Treleaven J, 2008, MANUAL THER, V13, P2, DOI 10.1016/j.math.2007.06.003
VIVIANI P, 1975, BIOL CYBERN, V19, P19, DOI 10.1007/BF00319778
Vuillerme N, 2005, NEUROSCI LETT, V378, P135, DOI 10.1016/j.neulet.2004.12.024
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Wieber PB, 2015, HDB ROBOTICS, P1203
Wieber PB, 2008, IEEE RSJ INT C INT R
Winter DA, 1991, BIOMECHANICS MOTOR C
Wisse M, 2007, IEEE T ROBOT, V23, P112, DOI 10.1109/TRO.2006.886843
NR 53
TC 0
Z9 0
U1 0
U2 0
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD DEC
PY 2017
VL 36
IS 13-14
SI SI
BP 1508
EP 1520
DI 10.1177/0278364917693292
PG 13
WC Robotics
SC Robotics
GA FQ9EM
UT WOS:000418665200008
DA 2018-01-22
ER

PT J
AU Ayusawa, K
Yoshida, E
AF Ayusawa, Ko
Yoshida, Eiichi
TI Motion Retargeting for Humanoid Robots Based on Simultaneous Morphing
Parameter Identification and Motion Optimization
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Human motion capturing; humanoid robot; identification; motion
retargeting; optimization
ID CAPTURE DATA; MODEL
AB This paper presents a novel method for retargeting human motions onto a humanoid
robot. The method solves the following three simultaneous problems: the geometric
parameter identification that morphs the human model to the robot model, motion
planning for a robot, and the inverse kinematics of the human motion-capture data.
Simultaneous solutions can imitate the original motion more accurately than
conventional approaches, which solve the problems sequentially. The proposed method
can reconstruct the human motion within the physical constraints imposed by robot
dynamics. A reconstruction step enables quantitative analysis of the retargeting
results through direct comparison with the original human motion. The method can
also provide the precise morphing function as well as subject-specific models,
which can handle the different body dimensions of human subjects. This new
framework is suitable for applications that require an accurate generation of
human-like motions with quantitative evaluation criteria, such as humanoid robots
that evaluate assistive devices. Experimental tests of the proposed method were
performed with humanoid robot HRP-4.
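The idea of solving morphing-parameter identification and motion reconstruction simultaneously can be illustrated with a toy joint least-squares fit; the planar 2-link arm, the marker data, the prior weight, and all names below are assumptions for illustration, not the paper's formulation:
import numpy as np
from scipy.optimize import least_squares

markers = np.array([[0.28, 0.15],    # toy "elbow" marker position [m]
                    [0.52, 0.38]])   # toy "wrist" marker position [m]

def landmarks(x):
    """Landmarks of a planar 2-link arm with scaled (morphed) link lengths."""
    s1, s2, q1, q2 = x                       # link scales and joint angles
    elbow = s1 * np.array([np.cos(q1), np.sin(q1)])
    wrist = elbow + s2 * np.array([np.cos(q1 + q2), np.sin(q1 + q2)])
    return np.vstack([elbow, wrist])

def residual(x):
    # Marker-fitting error plus a weak prior keeping link scales near 0.3 m,
    # so scales and angles are estimated together rather than sequentially.
    marker_err = (landmarks(x) - markers).ravel()
    scale_prior = 0.1 * (x[:2] - 0.3)
    return np.concatenate([marker_err, scale_prior])

x0 = np.array([0.3, 0.3, 0.0, 0.0])          # nominal scales, zero posture
sol = least_squares(residual, x0)
s1, s2, q1, q2 = sol.x
print(f"link scales: {s1:.3f} m, {s2:.3f} m;  joints: {np.degrees(q1):.1f}, {np.degrees(q2):.1f} deg")
Estimating the scales and the posture in one problem is what allows the fitted motion to stay consistent with the morphed model, which is the point the abstract contrasts with sequential approaches.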
C1 [Ayusawa, Ko; Yoshida, Eiichi] CNRS AIST Joint Robot Lab, RL UMI3218, Tsukuba
3058560, Japan.
RP Ayusawa, K (reprint author), CNRS AIST Joint Robot Lab, RL UMI3218, Tsukuba
3058560, Japan.
EM k.ayusawa@aist.go.jp; e.yoshida@aist.go.jp
FU METI/AMED Robotic Devices for Nursing Care Project; JSPS [25820082,
17H00768]
FX This work was supported in part by METI/AMED Robotic Devices for Nursing
Care Project and in part by JSPS Grant-in-Aid for Young Scientists (B)
Number 25820082 and JSPS Grant-in-Aid for Scientific Research (A) Number
17H00768. (Corresponding author: Ko Ayusawa.)
CR AYUSAWA K, 2012, P IEEE RSJ INT C INT, P3447
Ayusawa K, 2016, ADV ROBOTICS, V30, P519, DOI 10.1080/01691864.2016.1145596
Ayusawa K, 2015, IEEE INT C INT ROBOT, P2774, DOI 10.1109/IROS.2015.7353758
Ayusawa K, 2014, INT J ROBOT RES, V33, P446, DOI 10.1177/0278364913495932
Ayusawa K, 2014, MECH MACH THEORY, V74, P274, DOI
10.1016/j.mechmachtheory.2013.12.015
Basoeki F, 2016, ADV ROBOTICS, V30, P1395, DOI 10.1080/01691864.2016.1216724
Dariush B, 2009, INT J HUM ROBOT, V6, P265, DOI 10.1142/S021984360900170X
Delp SL, 2007, IEEE T BIO-MED ENG, V54, P1940, DOI 10.1109/TBME.2007.901024
Fletcher R., 1987, PRACTICAL METHODS OP
Gamage SSHU, 2002, J BIOMECH, V35, P87, DOI 10.1016/S0021-9290(01)00160-9
Gleicher M., 1998, Computer Graphics. Proceedings. SIGGRAPH 98 Conference
Proceedings, P33
Hammam G.-B., 2015, INT J HUM ROBOT, V12
Imamura Y., 2011, J ROBOTICS MECHATRON, V23, P58
Ito T, 2016, IEEE-RAS INT C HUMAN, P739, DOI 10.1109/HUMANOIDS.2016.7803356
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kaneko Kenji, 2009, 2009 9th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2009), P7, DOI 10.1109/ICHR.2009.5379537
Kaneko K, 2011, IEEE INT C INT ROBOT, P4400, DOI 10.1109/IROS.2011.6048074
Kirk AG, 2005, PROC CVPR IEEE, P782
Lim J, 2017, J FIELD ROBOT, V34, P802, DOI 10.1002/rob.21673
Miura Kanako, 2009, 2009 9th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2009), P596, DOI 10.1109/ICHR.2009.5379535
Miura K., 2013, P IEEE INT C ROB AUT, P671
Moulard T., 2013, P INT S ROB, P1
Nakamura Y, 2005, IEEE T ROBOT, V21, P58, DOI 10.1109/TRO.2004.833798
Nakaoka S, 2007, INT J ROBOT RES, V26, P829, DOI 10.1177/0278364907079430
Nakaoka S, 2012, IEEE-RAS INT C HUMAN, P625, DOI 10.1109/HUMANOIDS.2012.6651585
Omer AMM, 2009, 2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS,
VOLS 1-4, P137, DOI 10.1109/ROBIO.2009.4912993
Ott Christian, 2008, 2008 8th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2008), P399, DOI 10.1109/ICHR.2008.4755984
Pollard NS, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1390, DOI 10.1109/ROBOT.2002.1014737
Ramos Oscar E., 2011, 2011 11th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2011), P224, DOI 10.1109/Humanoids.2011.6100829
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
VUKOBRATOVIC M, 1970, IEEE T BIO-MED ENG, VBM17, P25, DOI
10.1109/TBME.1970.4502681
Yamane K, 2010, 2010 10th IEEE-RAS International Conference on Humanoid Robots
(Humanoids 2010), P504, DOI 10.1109/ICHR.2010.5686312
Yoshiyasu Y, 2015, IEEE ENG MED BIO, P2442, DOI 10.1109/EMBC.2015.7318887
NR 33
TC 0
Z9 0
U1 0
U2 0
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD DEC
PY 2017
VL 33
IS 6
BP 1343
EP 1357
DI 10.1109/TRO.2017.2752711
PG 15
WC Robotics
SC Robotics
GA FP7TM
UT WOS:000417841500005
DA 2018-01-22
ER

PT J
AU Yu, SQ
Nakata, Y
Nakamura, Y
Ishiguro, H
AF Yu, Shiqi
Nakata, Yoshihiro
Nakamura, Yutaka
Ishiguro, Hiroshi
TI A design of robotic spine composed of parallelogram actuation modules
SO ARTIFICIAL LIFE AND ROBOTICS
LA English
DT Article
DE Robotic spine; Human-like; Natural posture; Humanoid; Optimization
ID POSITION CONTROL; SYSTEM
AB Humanoids are gradually being introduced into society for human-robot interaction.
The human spine plays an important role in producing natural upper-body postures.
However, most humanoids can only display stiff body postures because of the
limitations of their simple mechanical structures. Through an analysis of human spine
motion, we found that natural human spine postures can be imitated by serially
connected rigid links with a few joints. In this report, we propose a robotic spine
composed of parallelogram actuation modules and build a prototype. We also
investigate the natural spinal postures that the prototype can realize.
C1 [Yu, Shiqi; Nakata, Yoshihiro; Nakamura, Yutaka; Ishiguro, Hiroshi] Osaka Univ,
Grad Sch Engn Sci, Dept Syst Innovat, 1-3 Machikaneyama, Toyonaka, Osaka, Japan.
RP Yu, SQ (reprint author), Osaka Univ, Grad Sch Engn Sci, Dept Syst Innovat, 1-3
Machikaneyama, Toyonaka, Osaka, Japan.
EM yu.shiqi@irl.sys.es.osaka-u.ac.jp
FU Japan Society for the Promotion of Science [JP26730136, JP26700026]
FX This work was partially supported by Japan Society for the Promotion of
Science Grants-in-Aid for Scientific Research Grant numbers JP26730136
and JP26700026.
CR Endo Nobutsuna, 2011, J ROBOTICS MECHATRON, V23, P969
GARDNERMORSE M, 1995, J ORTHOPAED RES, V13, P802, DOI 10.1002/jor.1100130521
Kanda T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1848, DOI 10.1109/ROBOT.2002.1014810
Kozuki T, 2016, 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS (IROS 2016), P2135, DOI 10.1109/IROS.2016.7759335
LINNETT JA, 1989, P I MECH ENG B-J ENG, V203, P159, DOI
10.1243/PIME_PROC_1989_203_063_02
Nishio S, 2007, GEMINOID TELEOPERATE, P343
Noritsugu T., 1986, J FLUID CONTROL, V17, P65
Paul A. K., 1994, IEEE Transactions on Control Systems Technology, V2, P271, DOI
10.1109/87.317984
Ramalho AS, 2016, IEEE-RAS INT C HUMAN, P312, DOI 10.1109/HUMANOIDS.2016.7803294
Ryu H, 2016, IEEE ROBOT AUTOM MAG, V23, P85, DOI 10.1109/MRA.2016.2582725
vanVarseveld RB, 1997, IEEE-ASME T MECH, V2, P195, DOI 10.1109/3516.622972
NR 11
TC 0
Z9 0
U1 1
U2 1
PU SPRINGER
PI NEW YORK
PA 233 SPRING ST, NEW YORK, NY 10013 USA
SN 1433-5298
EI 1614-7456
J9 ARTIF LIFE ROBOT
JI Artif. Life Robot.
PD DEC
PY 2017
VL 22
IS 4
BP 477
EP 482
DI 10.1007/s10015-017-0383-0
PG 6
WC Robotics
SC Robotics
GA FO3FV
UT WOS:000416708500011
DA 2018-01-22
ER

PT J
AU Maeda, K
Nakata, Y
Nakamura, Y
Ishiguro, H
AF Maeda, Kenta
Nakata, Yoshihiro
Nakamura, Yutaka
Ishiguro, Hiroshi
TI Effect of the cervical structure on the operability of teleoperated
humanoid head
SO ARTIFICIAL LIFE AND ROBOTICS
LA English
DT Article
DE Cervical spine; Teleoperated robot; Multiple joints cervical structure;
Operability
AB Android robots, i.e., humanoids with a realistic human appearance, can serve as a
telecommunication medium (teleoperated humanoids). We anticipated that if
teleoperated humanoids had not only a human appearance but also the ability to
perform human-like head motions, both telecommunication through androids and the
operability of the system would greatly improve. To realize such human-like head
motions, we referred to the mechanism of the human cervical spine and aimed to
verify how the cervical structure of a teleoperated humanoid affects operability
in our remote-control system. First, we developed a multiple-joint cervical
structure for our teleoperated humanoid; we then evaluated, through experiments
assessing eyesight-targeting performance and subjective impressions, whether this
structure realizes human-like head motions and improves system operability.
Finally, we concluded that the proposed multiple-joint cervical structure increases
system operability compared with the prototype system, which used either a
top-joint or a bottom-joint cervical structure. We anticipate that this work will
contribute to the future development and implementation of telecommunication by
humanoids.
C1 [Maeda, Kenta; Nakata, Yoshihiro; Nakamura, Yutaka; Ishiguro, Hiroshi] Osaka
Univ, Dept Engn Sci, 1-3 Machikaneyama, Toyonaka, Osaka, Japan.
RP Nakata, Y (reprint author), Osaka Univ, Dept Engn Sci, 1-3 Machikaneyama,
Toyonaka, Osaka, Japan.
EM nakata@irl.sys.es.osaka-u.ac.jp
FU Japan Society for the Promotion of Science [JP25220004, JP26700026]
FX This work was partially supported by Japan Society for the Promotion of
Science Grants-in-Aid for Scientific Research Grant numbers JP25220004
and JP26700026.
CR Beskow J, 2012, LECT NOTES COMPUTER, V7403
Fernando CL, 2012, IEEE INT C INT ROBOT, P5112, DOI 10.1109/IROS.2012.6385814
Heuring JJ, 1999, IEEE T ROBOTIC AUTOM, V15, P1095, DOI 10.1109/70.817672
Kozuki T, 2015, IEEE INT C INT ROBOT, P2768, DOI 10.1109/IROS.2015.7353757
Neumann D.A., 2002, KINESIOLOGY MUSCULOS
Reinecke J, 2016, IEEE INT CONF ROBOT, P4714, DOI 10.1109/ICRA.2016.7487672
Sakamoto D., 2007, 2007 2nd Annual Conference on Human-Robot Interaction (HRI),
P193
Tanaka K., 2015, FRONT ICT, V2, DOI [10.3389/fict.2015.00008, DOI
10.3389/FICT.2015.00008]
Yonemoto K, 1995, JPN J REHABIL MED, V32, P207
NR 9
TC 0
Z9 0
U1 0
U2 0
PU SPRINGER
PI NEW YORK
PA 233 SPRING ST, NEW YORK, NY 10013 USA
SN 1433-5298
EI 1614-7456
J9 ARTIF LIFE ROBOT
JI Artif. Life Robot.
PD DEC
PY 2017
VL 22
IS 4
BP 497
EP 502
DI 10.1007/s10015-017-0387-9
PG 6
WC Robotics
SC Robotics
GA FO3FV
UT WOS:000416708500014
DA 2018-01-22
ER

PT J
AU Li, MZ
Yuan, ZF
Wang, XF
Hasegawa, Y
AF Li, Mengze
Yuan, Zhaofan
Wang, Xufeng
Hasegawa, Yasuhisa
TI Electric stimulation and cooperative control for paraplegic patient
wearing an exoskeleton
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Electrical stimulation; Human machine interface; Rehabilitation
ID BRAIN-MACHINE INTERFACE; SPINAL-CORD-INJURY; ROBOT SUIT HAL;
REHABILITATION; EMG; FEEDBACK; SUPPORT; DESIGN; SYSTEM; BLEEX
AB This paper proposes a wearable walking control interface and an electric
stimulation pattern for a paraplegic patient wearing a powered exoskeleton. The
wearable interface allows the paraplegic patient to voluntarily control their own
stride while they walk with the aid of an exoskeleton. The electric stimulation
equipment provides the wearer with tactile feedback regarding their foot position.
For safety reasons, a humanoid walking robot was used in place of the patient
during the experiments. The robot simulates a paraplegic patient that walks with
the aid of an exoskeleton. A series of robot walking experiments were implemented
to validate the feasibility of the wearable walking control interface and to
evaluate the contribution of the electric stimulation pattern. The experiments to
confirm the effectiveness of electric stimulation were implemented under three
feedback conditions: visual feedback, electric stimulation feedback, and no
feedback. The experimental results indicate that the wearable walking controller
enables an operator to voluntarily control their stride while walking. The
experimental results also indicate that the electric stimulation provides helpful
information to assist in walking at nearly the same level as visual feedback. (C)
2017 Elsevier B.V. All rights reserved.
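A minimal sketch of how foot-position feedback could be encoded as an electric-stimulation command; the channel count, intensity range, stride length, and function names are hypothetical and are not taken from the paper:
CHANNELS = 4                     # hypothetical electrode channels
I_MIN, I_MAX = 2.0, 8.0          # assumed comfortable intensity range [mA]

def stimulation_command(foot_advance_m, stride_length_m=0.4):
    """Map the current foot advance [m] to (channel index, intensity [mA])."""
    phase = min(max(foot_advance_m / stride_length_m, 0.0), 1.0)  # clip to [0, 1]
    channel = min(int(phase * CHANNELS), CHANNELS - 1)            # which electrode fires
    intensity = I_MIN + phase * (I_MAX - I_MIN)                   # farther foot = stronger cue
    return channel, round(intensity, 1)

for advance in (0.0, 0.1, 0.25, 0.4):
    print(advance, "->", stimulation_command(advance))
The essential point, as in the paper's experiments, is that the wearer receives a graded tactile cue about foot position without having to look at the feet.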
C1 [Li, Mengze; Yuan, Zhaofan; Wang, Xufeng; Hasegawa, Yasuhisa] Nagoya Univ, Dept
Micronano Syst Engn, Grad Sch Engn, Nagoya, Aichi, Japan.
RP Li, MZ (reprint author), Nagoya Univ, Dept Micronano Syst Engn, Grad Sch Engn,
Nagoya, Aichi, Japan.
EM li@robo.mein.nagoya-u.ac.jp; hasegawa@mein.nagoya-u.ac.jp
CR Cheng A., 2012, 2012 IEEE Haptics Symposium (HAPTICS), P155, DOI
10.1109/HAPTIC.2012.6183784
Esquenazi A, 2012, AM J PHYS MED REHAB, V91, P911, DOI
10.1097/PHM.0b013e318269d9a3
Fan RE, 2008, IEEE T NEUR SYS REH, V16, P270, DOI 10.1109/TNSRE.2008.920075
Gurari Netta, 2009, 2009 IEEE International Conference on Rehabilitation
Robotics: Reaching Users & the Community (ICORR), P343, DOI
10.1109/ICORR.2009.5209508
Hasegawa Yasuhisa, 2014, 2014 World Automation Congress (WAC), P400, DOI
10.1109/WAC.2014.6935970
Hasegawa Y., 2014, 2014 INT S MICR NAN, P1
Hasegawa Y, 2015, IEEE INT C INT ROBOT, P14, DOI 10.1109/IROS.2015.7353108
Hasegawa Y, 2014, 2014 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION
(SII), P644, DOI 10.1109/SII.2014.7028114
Hasegawa Y, 2012, INT SYM MICRO-NANOM, P409, DOI 10.1109/MHS.2012.6492480
Hernandez A, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-12, P4336
Arieta AH, 2005, P ANN INT IEEE EMBS, P6919
Hortal E, 2015, NEUROCOMPUTING, V151, P116, DOI 10.1016/j.neucom.2014.09.078
KACZMAREK KA, 1991, IEEE T BIO-MED ENG, V38, P1, DOI 10.1109/10.68204
Kagawa Takahiro, 2009, RO-MAN 2009 - The 18th IEEE International Symposium on
Robot and Human Interactive Communication, P633, DOI 10.1109/ROMAN.2009.5326348
Kilicarslan A, 2013, IEEE ENG MED BIO, P5606, DOI 10.1109/EMBC.2013.6610821
Ma JX, 2015, IEEE T BIO-MED ENG, V62, P876, DOI 10.1109/TBME.2014.2369483
McMullen DP, 2014, IEEE T NEUR SYS REH, V22, P784, DOI
10.1109/TNSRE.2013.2294685
Nam Y, 2014, IEEE T BIO-MED ENG, V61, P453, DOI 10.1109/TBME.2013.2280900
Shing CY, 2003, ROBOTICA, V21, P211, DOI 10.1017/S0263574702004708
Steger R, 2006, IEEE INT CONF ROBOT, P3469, DOI 10.1109/ROBOT.2006.1642232
Strausser K. A., 2011, 2011 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2011), P4911, DOI 10.1109/IROS.2011.6048674
Strickland E., 2012, IEEE SPECTR, V49
Suzuki K, 2007, ADV ROBOTICS, V21, P1441, DOI 10.1163/156855307781746061
Tsukahara A., 2011, 2011 IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS 2011), P1737, DOI 10.1109/IROS.2011.6048498
Tsukahara A, 2015, IEEE T NEUR SYS REH, V23, P308, DOI
10.1109/TNSRE.2014.2364618
Tsukahara A, 2010, ADV ROBOTICS, V24, P1615, DOI 10.1163/016918610X512622
Veneman JF, 2007, IEEE T NEUR SYS REH, V15, P379, DOI 10.1109/TNSRE.2007.903919
Yin YH, 2012, IEEE T INF TECHNOL B, V16, P542, DOI 10.1109/TITB.2011.2178034
Zoss AB, 2006, IEEE-ASME T MECH, V11, P128, DOI 10.1109/TMECH.2006.871087
NR 29
TC 0
Z9 0
U1 1
U2 1
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD DEC
PY 2017
VL 98
BP 204
EP 212
DI 10.1016/j.robot.2017.09.009
PG 9
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA FM3CI
UT WOS:000414881500015
DA 2018-01-22
ER

PT J
AU Kumazaki, H
Warren, Z
Muramatsu, T
Yoshikawa, Y
Matsumoto, Y
Miyao, M
Nakano, M
Mizushima, S
Wakita, Y
Ishiguro, H
Mimura, M
Minabe, Y
Kikuchi, M
AF Kumazaki, Hirokazu
Warren, Zachary
Muramatsu, Taro
Yoshikawa, Yuichiro
Matsumoto, Yoshio
Miyao, Masutomo
Nakano, Mitsuko
Mizushima, Sakae
Wakita, Yujin
Ishiguro, Hiroshi
Mimura, Masaru
Minabe, Yoshio
Kikuchi, Mitsuru
TI A pilot study for robot appearance preferences among high-functioning
individuals with autism spectrum disorder: Implications for therapeutic
use
SO PLOS ONE
LA English
DT Article
ID PERVASIVE DEVELOPMENTAL DISORDERS; COLLABORATIVE PLAY; HUMANOID ROBOT;
CHILDRENS VERSION; SOCIAL ROBOTS; QUOTIENT; INTERVENTION; INTERVIEW;
KASPAR; SKILLS
AB Recent rapid technological advances have enabled robots to fulfill a variety of
human-like functions, leading researchers to propose the use of such technology for
the development and subsequent validation of interventions for individuals with
autism spectrum disorder (ASD). Although a variety of robots have been proposed as
possible therapeutic tools, the physical appearances of humanoid robots currently
used in therapy with these patients are highly varied. Very little is known about
how these varied designs are experienced by individuals with ASD. In this study, we
systematically evaluated preferences regarding robot appearance in a group of 16
individuals with ASD (ages 10-17). Our data suggest that there may be important
differences in preference for different types of robots that vary according to
interaction type for individuals with ASD. Specifically, within our pilot sample,
children with higher levels of reported ASD symptomatology reported a preference
for specific humanoid robots over those perceived as more mechanical or mascot-like.
The findings of this pilot study suggest that preferences and reactions to robotic
interactions may vary tremendously across individuals with ASD. Future work should
evaluate how such differences may be systematically measured and potentially
harnessed to facilitate meaningful interactive and intervention paradigms.
C1 [Kumazaki, Hirokazu; Minabe, Yoshio; Kikuchi, Mitsuru] Kanazawa Univ, Dept Clin
Res Social Recognit & Memory, Res Ctr Child Mental Dev, Kanazawa, Ishikawa, Japan.
[Kumazaki, Hirokazu; Warren, Zachary] Vanderbilt Kennedy Ctr, Dept Pediat,
Nashville, TN USA.
[Kumazaki, Hirokazu; Warren, Zachary] Vanderbilt Kennedy Ctr, Dept Psychiat,
Nashville, TN USA.
[Kumazaki, Hirokazu; Warren, Zachary] Vanderbilt Kennedy Ctr, Dept Special Educ,
Nashville, TN USA.
[Kumazaki, Hirokazu; Muramatsu, Taro; Mimura, Masaru] Keio Univ, Sch Med, Dept
Neuropsychiat, Tokyo, Japan.
[Yoshikawa, Yuichiro; Ishiguro, Hiroshi] Osaka Univ, Dept Syst Innovat, Grad Sch
Engn Sci, Osaka, Japan.
[Yoshikawa, Yuichiro; Ishiguro, Hiroshi] JST ERATO ISHIGURO Symbiot Human Robot
Interact, Osaka, Japan.
[Matsumoto, Yoshio; Wakita, Yujin] Natl Inst Adv Ind Sci & Technol, Serv Robot
Res Grp, Intelligent Syst Inst, Tsukuba, Japan.
[Miyao, Masutomo; Nakano, Mitsuko; Mizushima, Sakae] Natl Ctr Child Hlth & Dev,
Dept Psychosocial Med, Tokyo, Japan.
RP Kumazaki, H (reprint author), Kanazawa Univ, Dept Clin Res Social Recognit &
Memory, Res Ctr Child Mental Dev, Kanazawa, Ishikawa, Japan.; Kumazaki, H (reprint
author), Vanderbilt Kennedy Ctr, Dept Pediat, Nashville, TN USA.; Kumazaki, H
(reprint author), Vanderbilt Kennedy Ctr, Dept Psychiat, Nashville, TN USA.;
Kumazaki, H (reprint author), Vanderbilt Kennedy Ctr, Dept Special Educ, Nashville,
TN USA.; Kumazaki, H (reprint author), Keio Univ, Sch Med, Dept Neuropsychiat,
Tokyo, Japan.
EM kumazaki@tiara.ocn.ne.jp
OI Kumazaki, Hirokazu/0000-0002-2567-0276
FU Japan Society for the Promotion of Science [17H05857]; Center of
Innovation Program from the Japan Science and Technology Agency, JST,
Japan; JST ERATO ISHIGURO
FX This work was supported in part by Grants-in-Aid for Scientific Research
from the Japan Society for the Promotion of Science (17H05857), and was
partially supported by The Center of Innovation Program from the Japan
Science and Technology Agency, JST, Japan.; We sincerely thank the
participants and all the families who participated in this study. This
work was supported in part by Grants-in-Aid for Scientific Research from
the Japan Society for the Promotion of Science (17H05857), and the JST
ERATO ISHIGURO Symbiotic Human-Robot Interaction Project, and was
partially supported by The Center of Innovation Program from the Japan
Science and Technology Agency, JST, Japan.
CR American Psychiatric Publishing, 2013, DIAGN STAT MAN MENT
Auyeung B, 2008, J AUTISM DEV DISORD, V38, P1230, DOI 10.1007/s10803-007-0504-z
Barakova EI, 2013, ROBOT AUTON SYST, V61, P704, DOI 10.1016/j.robot.2012.08.001
Baron-Cohen S, 2006, J AUTISM DEV DISORD, V36, P343, DOI 10.1007/s10803-006-
0073-6
Baron-Cohen S, 2006, PROG NEURO-PSYCHOPH, V30, P865, DOI
10.1016/j.pnpbp.2006.01.010
Bartneck C, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P553, DOI 10.1109/ROMAN.2008.4600724
Bekele E, 2014, AUTISM, V18, P598, DOI 10.1177/1362361313479454
Costa S, 2015, INT J SOC ROBOT, V7, P265, DOI 10.1007/s12369-014-0250-2
Costescu CA, 2015, J AUTISM DEV DISORD, V45, P3715, DOI 10.1007/s10803-014-2319-
z
Diehl JJ, 2012, RES AUTISM SPECT DIS, V6, P249, DOI 10.1016/j.rasd.2011.05.006
Goodwin MS, 2008, FOCUS AUTISM DEV DIS, V23, P125, DOI 10.1177/1088357608316678
Huskens B, 2015, J AUTISM DEV DISORD, V45, P3746, DOI 10.1007/s10803-014-2326-0
Huskens B, 2013, DEV NEUROREHABIL, V16, P345, DOI 10.3109/17518423.2012.739212
Iacono I, 2011, ROBOTS SOCIAL MEDIAT, P1
Ishiguro H, 2011, INT J HUM ROBOT, V8, P391, DOI 10.1142/S0219843611002514
Ito H, 2012, RES AUTISM SPECT DIS, V6, P1265, DOI 10.1016/j.rasd.2012.04.002
Kaboski JR, 2015, J AUTISM DEV DISORD, V45, P3862, DOI 10.1007/s10803-014-2153-3
Lehmann V, 2013, BEHAV PROCESS, V94, P99, DOI 10.1016/j.beproc.2013.01.001
LORD C, 1994, J AUTISM DEV DISORD, V24, P659, DOI 10.1007/BF02172145
PARS Committee, 2008, PERV DEV DIS AUT SOC
Peca A, 2014, COMPUT HUM BEHAV, V41, P268, DOI 10.1016/j.chb.2014.09.035
Pioggia G., 2008, J CYBERTHERAPY REHAB, V1, P49
Ricks DJ, 2010, IEEE INT CONF ROBOT, P4354, DOI 10.1109/ROBOT.2010.5509327
Robins B., 2009, 2 INT C ADV COMP HUM, P205, DOI DOI 10.1109/ACHI.2009.32
Robins B, 2006, INTERACT STUD, V7, P479
Scassellati B, 2007, SPR TRA ADV ROBOT, V28, P552
Siegel M, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P2563, DOI 10.1109/IROS.2009.5354116
Tanaka M, 2012, MED SCI MONITOR, V18, pCR550, DOI 10.12659/MSM.883350
Trevarthen C, 2013, FRONT INTEGR NEUROSC, V7, DOI 10.3389/fnint.2013.00049
van der Meer LAJ, 2010, DEV NEUROREHABIL, V13, P294, DOI
10.3109/17518421003671494
Wainer J, 2014, IEEE T AUTON MENT DE, V6, P183, DOI 10.1109/TAMD.2014.2303116
Wainer J, 2014, INT J SOC ROBOT, V6, P45, DOI 10.1007/s12369-013-0195-x
Wakabayashi Akio, 2004, Shinrigaku Kenkyu, V75, P78
Wakabayashi A, 2007, J AUTISM DEV DISORD, V37, P491, DOI 10.1007/s10803-006-
0181-3
Warren ZE, 2015, J AUTISM DEV DISORD, V45, P3726, DOI 10.1007/s10803-013-1918-4
Wheelwright S, 2010, MOL AUTISM, V1, DOI 10.1186/2040-2392-1-10
Yoshikawa M, 2011, DEV ANDROID ROBOT PS, P2378
[Anonymous], 2009, AUTISM SPECTRUM DISO, V54, P201
NR 38
TC 0
Z9 0
U1 2
U2 2
PU PUBLIC LIBRARY SCIENCE
PI SAN FRANCISCO
PA 1160 BATTERY STREET, STE 100, SAN FRANCISCO, CA 94111 USA
SN 1932-6203
J9 PLOS ONE
JI PLoS One
PD OCT 13
PY 2017
VL 12
IS 10
AR e0186581
DI 10.1371/journal.pone.0186581
PG 13
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA FJ7ZQ
UT WOS:000412980300044
PM 29028837
OA gold
DA 2018-01-22
ER

PT J
AU Tokuda, IT
Hoang, H
Kawato, M
AF Tokuda, Isao T.
Hoang, Huu
Kawato, Mitsuo
TI New insights into olivo-cerebellar circuits for learning from a small
training sample
SO CURRENT OPINION IN NEUROBIOLOGY
LA English
DT Article
ID INFERIOR OLIVE NEURONS; COMPLEX SPIKE SYNCHRONY; CEREBELLAR
PURKINJE-CELLS; CLIMBING FIBER INPUT; MOTOR CONTROL; IN-VIVO;
COMPUTER-SIMULATION; NEURAL OSCILLATORS; CREDIT ASSIGNMENT; CANONIC
MODELS
AB Artificial intelligence such as deep neural networks has exhibited remarkable
performance in simulated video games and 'Go'. In contrast, most humanoid robots in
the DARPA Robotics Challenge fell to the ground. This dramatic contrast in
performance is mainly due to differences in the amount of training data, which is
huge in the former case and small in the latter. Animals cannot afford millions of
failed trials, which would lead to injury and death; humans fall only several
thousand times before they can balance and walk. We hypothesize that the unique
closed-loop neural circuit formed by the Purkinje cells, the cerebellar deep nucleus,
and the inferior olive in and around the cerebellum, together with the highest
density of gap junctions, which regulate synchronous activities of the inferior
olive nucleus, constitutes computational machinery for learning from a small sample.
We discuss recent experimental and computational advances associated with this
hypothesis.
C1 [Tokuda, Isao T.] Ritsumeikan Univ, Dept Mech Engn, Kusatsu, Shiga, Japan.
[Hoang, Huu; Kawato, Mitsuo] Adv Telecommun Res Inst Int, ATR Brain Informat
Commun Res Lab Grp, Hikaridai, Kyoto, Japan.
RP Kawato, M (reprint author), Adv Telecommun Res Inst Int, ATR Brain Informat
Commun Res Lab Grp, Hikaridai, Kyoto, Japan.
EM kawato@atr.jp
FU National Institute of Information and Communications Technology entitled
'Development of Network Dynamics Modeling Methods for Human Brain Data
Simulation Systems'; 'Development of BMI Technologies for Clinical
Application' of the Strategic Research Program for Brain Sciences; Japan
Agency for Medical Research and Development (AMED); Japan Society for
the Promotion of Science (JSPS) [16K00343, 16K06154, 26286086, 17H06313]
FX This work was supported by a contract with the National Institute of
Information and Communications Technology entitled 'Development of
Network Dynamics Modeling Methods for Human Brain Data Simulation
Systems', and by 'Development of BMI Technologies for Clinical
Application' of the Strategic Research Program for Brain Sciences
supported by Japan Agency for Medical Research and Development (AMED),
and also by Grant-in-Aid for Scientific Research (No. 16K00343, No.
16K06154, No. 26286086, No. 17H06313) from Japan Society for the
Promotion of Science (JSPS).
CR Achard P, 2006, PLOS COMPUT BIOL, V2, P794, DOI 10.1371/journal.pcbi.0020094
Adolph KE, 2012, PSYCHOL SCI, V23, P1387, DOI 10.1177/0956797612446346
ALBUS J S, 1971, Mathematical Biosciences, V10, P25, DOI 10.1016/0025-
5564(71)90051-4
Badura A, 2016, SCI REP-UK, V6, DOI 10.1038/srep36131
Badura A, 2013, NEURON, V78, P700, DOI 10.1016/j.neuron.2013.03.018
Bastian AJ, 2011, CURR OPIN NEUROBIOL, V21, P596, DOI 10.1016/j.conb.2011.06.007
Best AR, 2009, NEURON, V62, P555, DOI 10.1016/j.neuron.2009.04.018
Blenkinsop TA, 2006, J NEUROSCI, V26, P1739, DOI 10.1523/JNEUROSCI.3677-05.2006
Boyden ES, 2006, NEURON, V51, P823, DOI 10.1016/j.neuron.2006.08.026
Brooks JX, 2015, NAT NEUROSCI, V18, P1310, DOI 10.1038/nn.4077
Clopath C, 2014, J NEUROSCI, V34, P7203, DOI 10.1523/JNEUROSCI.2791-13.2014
Cortese A, 2016, NAT COMMUN, V7, DOI 10.1038/ncomms13669
De Gruijl JR, 2014, J NEUROSCI, V34, P8937, DOI 10.1523/JNEUROSCI.5064-13.2014
De Zeeuw CI, 2011, NAT REV NEUROSCI, V12, P327, DOI 10.1038/nrn3011
De Zeeuw CI, 1998, TRENDS NEUROSCI, V21, P391, DOI 10.1016/S0166-2236(98)01310-1
Dean P, 2010, NAT REV NEUROSCI, V11, P30, DOI 10.1038/nrn2756
De Zeeuw CI, 2015, COLD SPRING HARB PER, P7
EHRENFEUCHT A, 1989, INFORM COMPUT, V82, P247, DOI 10.1016/0890-5401(89)90002-3
Fairhurst D, 2010, MATH MODEL NAT PHENO, V5, P146, DOI 10.1051/mmnp/20105206
Franklin DW, 2008, J NEUROSCI, V28, P11165, DOI 10.1523/JNEUROSCI.3099-08.2008
Garwicz M, 2009, P NATL ACAD SCI USA, V106, P21889, DOI 10.1073/pnas.0905777106
GEMAN S, 1992, NEURAL COMPUT, V4, P1, DOI 10.1162/neco.1992.4.1.1
GILBERT PFC, 1977, BRAIN RES, V128, P309, DOI 10.1016/0006-8993(77)90997-0
Grewe BF, 2010, NAT METHODS, V7, P399, DOI 10.1038/nmeth.1453
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
HAUSSLER D, 1992, INFORM COMPUT, V100, P78, DOI 10.1016/0890-5401(92)90010-D
Hesslow G, 1996, EXP BRAIN RES, V110, P36
Hirata Y, 2008, EUR PHYS J-SPEC TOP, V164, P13, DOI 10.1140/epjst/e2008-00830-8
Hoang H, 2015, FRONT COMPUT NEUROSC, V9, DOI 10.3389/fncom.2015.00056
Hoogland TM, 2015, CURR BIOL, V25, P1157, DOI 10.1016/j.cub.2015.03.009
ITO M, 1982, J PHYSIOL-LONDON, V324, P113, DOI 10.1113/jphysiol.1982.sp014103
Ito M, 2013, FRONT NEURAL CIRCUIT, V7, DOI 10.3389/fncir.2013.00001
Jaeger D, 2013, NEURAL NETWORKS, V47, P1, DOI 10.1016/j.neunet.2013.08.003
Jorntell H, 2016, CEREBELLUM, V15, P104, DOI 10.1007/s12311-014-0623-y
Katori Y, 2010, INT J BIFURCAT CHAOS, V20, P583, DOI 10.1142/S0218127410025909
Kawamura Y, 2013, NAT COMMUN, V4, DOI 10.1038/ncomms3732
KAWATO M, 1987, BIOL CYBERN, V57, P169, DOI 10.1007/BF00364149
Kawato M, 1999, CURR OPIN NEUROBIOL, V9, P718, DOI 10.1016/S0959-4388(99)00028-8
KAWATO M, 1992, BIOL CYBERN, V68, P95, DOI 10.1007/BF00201431
Kawato M, 2007, CURR OPIN NEUROBIOL, V17, P205, DOI 10.1016/j.conb.2007.03.004
Kawato M, 2011, CURR OPIN NEUROBIOL, V21, P791, DOI 10.1016/j.conb.2011.05.014
Ke MC, 2009, NAT NEUROSCI, V12, P1171, DOI 10.1038/nn.2366
Kitamura K, 2011, J NEUROSCI, V31, P10847, DOI 10.1523/JNEUROSCI.2525-10.2011
Kitazawa S, 1998, NATURE, V392, P494, DOI 10.1038/33141
Lake BM, 2015, SCIENCE, V350, P1332, DOI 10.1126/science.aab3050
Lang EJ, 2002, J NEUROPHYSIOL, V87, P1993, DOI 10.1152/jn.00477.2001
Lang EJ, 2016, CEREBELLUM
Lang EJ, 2014, FRONT SYST NEUROSCI, V8
LeCun Y, 2015, NATURE, V521, P436, DOI 10.1038/nature14539
Lefler Y, 2014, NEURON, V81, P1389, DOI 10.1016/j.neuron.2014.02.032
Lefler Y, 2013, FRONT SYST NEUROSCI, V7, DOI 10.3389/fnsys.2013.00022
Leong YC, 2017, NEURON, V93, P451, DOI 10.1016/j.neuron.2016.12.040
Levine S, 2016, ARXIV160302199
Leznik E, 2005, J NEUROPHYSIOL, V94, P2447, DOI 10.1152/jn.00353.2005
Leznik E, 2002, J NEUROSCI, V22, P2804
LLINAS R, 1974, J NEUROPHYSIOL, V37, P560
LLINAS R, 1981, J PHYSIOL-LONDON, V315, P549, DOI 10.1113/jphysiol.1981.sp013763
Luque NR, 2016, FRONT COMPUT NEUROSC, V10, DOI 10.3389/fncom.2016.00017
Mack ML, 2016, P NATL ACAD SCI USA, V113, P13203, DOI 10.1073/pnas.1614048113
Makarenko V, 1998, P NATL ACAD SCI USA, V95, P15747, DOI
10.1073/pnas.95.26.15747
Manor Y, 1997, J NEUROPHYSIOL, V77, P2736
MARR D, 1969, J PHYSIOL-LONDON, V202, P437, DOI 10.1113/jphysiol.1969.sp008820
Medina JF, 2000, NAT NEUROSCI, V3, P1205, DOI 10.1038/81486
Medina JF, 2000, CURR OPIN NEUROBIOL, V10, P717, DOI 10.1016/S0959-
4388(00)00154-9
Medina JF, 2000, J NEUROSCI, V20, P5516
Meng L, 2011, J NEURAL ENG, V8, DOI 10.1088/1741-2560/8/6/065006
Mnih V, 2015, NATURE, V518, P529, DOI 10.1038/nature14236
Morimoto J, 1999, ADV ROBOTICS, V13, P267, DOI 10.1163/156855399X01314
Nguyen-Vu TDB, 2013, NAT NEUROSCI, V16, P1734, DOI 10.1038/nn.3576
Nobukawa S, 2016, NEURAL COMPUT, V28, P2505, DOI 10.1162/NECO_a_00894
Medina JF, 2015, NAT NEUROSCI, V29, P1
Onizuka M, 2013, NEURAL NETWORKS, V47, P51, DOI 10.1016/j.neunet.2013.01.006
Pnevmatikakis EA, 2016, NEURON, V89, P285, DOI 10.1016/j.neuron.2015.11.037
Popa LS, 2016, CEREBELLUM, V15, P93, DOI 10.1007/s12311-015-0685-5
RAUDYS SJ, 1991, IEEE T PATTERN ANAL, V13, P252, DOI 10.1109/34.75512
Rossert C, 2015, PLOS COMPUT BIOL, V11, DOI 10.1371/journal.pcbi.1004515
Samejima K, 2003, NEURAL NETWORKS, V16, P985, DOI 10.1016/S0893-6080(02)00235-6
Schweighofer N, 2004, P NATL ACAD SCI USA, V101, P4655, DOI
10.1073/pnas.0305966101
Schweighofer N, 1999, J NEUROPHYSIOL, V82, P804
Schweighofer N, 2013, FRONT NEURAL CIRCUIT, V7, DOI 10.3389/fncir.2013.00094
Silver D, 2016, NATURE, V529, P484, DOI 10.1038/nature16961
SOTELO C, 1974, J NEUROPHYSIOL, V37, P541
Stosiek C, 2003, P NATL ACAD SCI USA, V100, P7319, DOI 10.1073/pnas.1232232100
Sugimoto N, 2012, NEURAL NETWORKS, V29-30, P8, DOI 10.1016/j.neunet.2012.01.002
Tang TY, 2016, CEREBELLUM, V15, P10, DOI 10.1007/s12311-015-0743-z
Toda A, 2011, NEUROIMAGE, V54, P892, DOI 10.1016/j.neuroimage.2010.09.057
Tokuda IT, 2013, NEURAL NETWORKS, V47, P42, DOI 10.1016/j.neunet.2012.12.006
Tokuda IT, 2010, NEURAL NETWORKS, V23, P836, DOI 10.1016/j.neunet.2010.04.006
Torben-Nielsen B, 2012, PLOS COMPUT BIOL, V8, DOI 10.1371/journal.pcbi.1002580
Tsunoda T, 2010, J PHYS SOC JPN, V79, DOI 10.1143/JPSJ.79.124801
Tsutsumi S, 2015, J NEUROSCI, V35, P843, DOI 10.1523/JNEUROSCI.2170-14.2015
Turecek J, 2014, NEURON, V81, P1375, DOI 10.1016/j.neuron.2014.01.024
Tyukin I, 2010, INT J NEURAL SYST, V20, P193, DOI 10.1142/S0129065710002358
Van Der Giessen RS, 2008, NEURON, V58, P599, DOI 10.1016/j.neuron.2008.03.016
Vanier MC, 1999, J COMPUT NEUROSCI, V7, P149, DOI 10.1023/A:1008972005316
Velarde MG, 2002, NEURAL NETWORKS, V15, P5, DOI 10.1016/S0893-6080(01)00130-7
WELSH JP, 1995, NATURE, V374, P453, DOI 10.1038/374453a0
Wilson ED, 2015, FRONT NEUROROBOTICS, P9
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yamashita O, 2008, NEUROIMAGE, V42, P1414, DOI 10.1016/j.neuroimage.2008.05.050
Yang Y, 2014, NATURE, V510, P529, DOI 10.1038/nature13282
Zhou H, 2014, ELIFE, V2014
NR 102
TC 0
Z9 0
U1 3
U2 3
PU CURRENT BIOLOGY LTD
PI LONDON
PA 84 THEOBALDS RD, LONDON WC1X 8RR, ENGLAND
SN 0959-4388
EI 1873-6882
J9 CURR OPIN NEUROBIOL
JI Curr. Opin. Neurobiol.
PD OCT
PY 2017
VL 46
BP 58
EP 67
DI 10.1016/j.conb.2017.07.010
PG 10
WC Neurosciences
SC Neurosciences & Neurology
GA FN7KF
UT WOS:000416196400009
PM 28841437
DA 2018-01-22
ER

PT J
AU Kojima, K
Ishiguro, Y
Sugai, F
Nozawa, S
Kakiuchi, Y
Okada, K
Inaba, M
AF Kojima, Kunio
Ishiguro, Yasuhiro
Sugai, Fumihito
Nozawa, Shunichi
Kakiuchi, Yohei
Okada, Kei
Inaba, Masayuki
TI Rotational Sliding Motion Generation for Humanoid Robot by Force
Distribution in Each Contact Face
SO IEEE ROBOTICS AND AUTOMATION LETTERS
LA English
DT Article
DE Contact modelling; humanoid and bipedal locomotion; humanoid robots
AB Recent studies have explored humanoid robots in contact with the environment in
various ways. However, many of them assumed static rather than sliding contacts.
Studies on humanoid shuffle motion planning have realized sliding motions, such as
turning, but relied on quasi-static balance control. In this letter, we propose a
dynamic balance control method for sliding contact motions. The proposed method
consists of the distributed force contact constraint (D.F.C.C.), which describes
rotational sliding contact constraints, and the slide friction control (S.F.C.),
which controls humanoid dynamic balance via model predictive control using the
D.F.C.C. The D.F.C.C. segments a contact face into a grid of contact points and
optimizes the vertical component of the contact forces, which enables us to
calculate the sliding friction force at each contact point. The S.F.C. is a model
predictive controller that distributes contact forces to each contact face while
considering sliding frictional dynamics. The D.F.C.C. is simple and easy to
incorporate into the S.F.C. In our online stabilizer, we control not only the ZMP
but also the contact forces, so as to realize the contact-force distributions
planned by the S.F.C. Finally, we show the validity of our method through
experiments with the life-sized humanoid robot JAXON.
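A minimal sketch of the force-distribution step described above (a grid of contact points on one sole, non-negative vertical forces fitted to a desired total force and ZMP, then per-point sliding friction for a yaw slide); grid size, friction coefficient, and target values are illustrative assumptions, not the letter's full D.F.C.C./S.F.C. optimization:
import numpy as np
from scipy.optimize import nnls

# 3 x 3 grid of contact points on a 0.2 m x 0.1 m sole (foot frame, metres).
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 3), np.linspace(-0.05, 0.05, 3))
px, py = xs.ravel(), ys.ravel()

Fz_des = 400.0                       # desired total normal force [N]
zmp_des = np.array([0.03, 0.0])      # desired ZMP inside the sole [m]

# Rows: total vertical force and the two ZMP conditions
# sum(fz*x) = Fz*zmp_x, sum(fz*y) = Fz*zmp_y.
A = np.vstack([np.ones_like(px), px, py])
b = np.array([Fz_des, Fz_des * zmp_des[0], Fz_des * zmp_des[1]])
fz, _ = nnls(A, b)                   # non-negative vertical force at each point

# Sliding friction at each point opposes that point's velocity during a yaw slide.
mu, omega_z = 0.5, 0.8               # friction coefficient, yaw rate [rad/s]
v = omega_z * np.stack([-py, px], axis=1)                    # point velocities
v_dir = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)
f_t = -mu * fz[:, None] * v_dir                              # tangential friction

print("vertical forces [N]:", np.round(fz, 1))
print("friction torque about z [N*m]:",
      round(float(np.sum(px * f_t[:, 1] - py * f_t[:, 0])), 2))
Once the normal forces at the grid points are fixed, the friction force (and hence the torque resisting the rotational slide) at each point follows directly, which is why the distribution step makes the sliding dynamics tractable for model predictive control.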
C1 [Kojima, Kunio] Univ Tokyo, Grad Sch Interdisciplinary Informat Studies, Dept
Interdisciplinary Informat Studies, Tokyo 1138656, Japan.
[Ishiguro, Yasuhiro; Sugai, Fumihito; Nozawa, Shunichi; Kakiuchi, Yohei; Okada,
Kei; Inaba, Masayuki] Univ Tokyo, Grad Sch Informat Sci & Technol, Dept
Mechanoinformat, Tokyo 1138656, Japan.
RP Kojima, K (reprint author), Univ Tokyo, Grad Sch Interdisciplinary Informat
Studies, Dept Interdisciplinary Informat Studies, Tokyo 1138656, Japan.
EM k-kojima@jsk.imi.i.u-tokyo.ac.jp; ishiguro@jsk.imi.i.u-tokyo.ac.jp;
sugai@jsk.imi.i.u-tokyo.ac.jp; nozawa@jsk.t.u-tokyo.ac.jp;
youhei@jsk.imi.i.u-tokyo.ac.jp; k-okada@jsk.t.u-tokyo.ac.jp;
inaba@i.u-tokyo.ac.jp
CR Audren H, 2014, IEEE INT C INT ROBOT, P4030, DOI 10.1109/IROS.2014.6943129
Caron S, 2015, IEEE INT CONF ROBOT, P5107, DOI 10.1109/ICRA.2015.7139910
Englsberger J, 2011, IEEE INT C INT ROBOT, P4420, DOI 10.1109/IROS.2011.6048045
Hashimoto K, 2011, IEEE INT CONF ROBOT, P2041
Herzog A, 2015, IEEE-RAS INT C HUMAN, P874, DOI 10.1109/HUMANOIDS.2015.7363464
Jin Tak Kim, 2011, 2011 11th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2011), P339, DOI 10.1109/Humanoids.2011.6100839
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S, 2010, 2010 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2010), P4489, DOI 10.1109/IROS.2010.5651082
Koeda Masanao, 2011, 2011 IEEE International Conference on Robotics and
Automation, P593
Kojima K., 2015, P 2015 IEEE RAS INT, P373
Kojima K, 2015, IEEE INT C INT ROBOT, P2187, DOI 10.1109/IROS.2015.7353670
Miura K, 2013, IEEE T ROBOT, V29, P875, DOI 10.1109/TRO.2013.2257574
Nagasaka K., 2012, P ROB S, V17, P134
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
Nozawa Shunichi, 2017, 2017 IEEE International Conference on Robotics and
Automation (ICRA), P1291, DOI 10.1109/ICRA.2017.7989152
Posa M, 2014, INT J ROBOT RES, V33, P69, DOI 10.1177/0278364913506757
Righetti L, 2013, INT J ROBOT RES, V32, P280, DOI 10.1177/0278364912469821
Wakisaka N., 2016, P 21 ROB S, V3, P344
Zhou JJ, 2016, IEEE INT CONF ROBOT, P372, DOI 10.1109/ICRA.2016.7487155
NR 20
TC 0
Z9 0
U1 0
U2 0
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2377-3766
J9 IEEE ROBOT AUTOM LET
JI IEEE Robot. Autom. Lett.
PD OCT
PY 2017
VL 2
IS 4
BP 2072
EP 2079
DI 10.1109/LRA.2017.2719765
PG 8
WC Robotics
SC Robotics
GA FK8EY
UT WOS:000413741800032
DA 2018-01-22
ER

PT J
AU Kawaharazuka, K
Kawamura, M
Makino, S
Asano, Y
Okada, K
Inaba, M
AF Kawaharazuka, Kento
Kawamura, Masaya
Makino, Shogo
Asano, Yuki
Okada, Kei
Inaba, Masayuki
TI Antagonist Inhibition Control in Redundant Tendon-Driven Structures
Based on Human Reciprocal Innervation for Wide Range Limb Motion of
Musculoskeletal Humanoids
SO IEEE ROBOTICS AND AUTOMATION LETTERS
LA English
DT Article
DE Biomimetics; humanoid robots; tendon/wire mechanism
AB The body structure of an anatomically correct tendon-driven musculoskeletal
humanoid is complex, and the difference between its geometric model and the actual
robot is very large because expressing the complex routes of tendon wires in a
geometric model is very difficult. If we move a tendon-driven musculoskeletal
humanoid by the tendon wire lengths of the geometric model, unintended muscle
tension and slack will emerge. In some cases, this can lead to damage to the
actual robot. To solve this problem, we focused on reciprocal innervation in the
human nervous system and then implemented antagonist inhibition control (AIC)
based on this reflex. This control makes it possible to avoid unnecessary internal
muscle tension and slack of tendon wires caused by model error, and to perform wide
range motion safely for a long time. To verify its effectiveness, we applied AIC to
the upper limb of the tendon-driven musculoskeletal humanoid, Kengoro, and
succeeded in dangling for 14 min and doing pull-ups.
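A minimal sketch of a reciprocal-innervation-style rule in the spirit of this abstract is given below; the tension band, gains, and update law are illustrative assumptions rather than the authors' AIC implementation.

```python
# Minimal sketch of a reciprocal-innervation-style rule (not the authors' exact
# AIC law): each antagonist tendon-length command is adjusted so the measured
# muscle tension stays within a band [T_min, T_max], avoiding both excessive
# internal tension and tendon slack caused by geometric-model error. Gains and
# thresholds below are illustrative assumptions.

def antagonist_inhibition_step(l_cmd, tension, dt,
                               T_min=10.0, T_max=80.0,
                               k_release=2e-4, k_takeup=1e-4):
    """l_cmd   : current commanded tendon lengths [m]
       tension : measured muscle tensions [N] (same order as l_cmd)"""
    new_cmd = []
    for l, T in zip(l_cmd, tension):
        if T > T_max:            # over-tension: release (inhibit) the antagonist
            l += k_release * (T - T_max) * dt
        elif T < T_min:          # slack tendon: take up the wire
            l -= k_takeup * (T_min - T) * dt
        new_cmd.append(l)
    return new_cmd
```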
C1 [Kawaharazuka, Kento; Kawamura, Masaya; Makino, Shogo; Asano, Yuki; Okada, Kei;
Inaba, Masayuki] Univ Tokyo, Dept Mechanoinformat, Tokyo 1138656, Japan.
RP Kawaharazuka, K (reprint author), Univ Tokyo, Dept Mechanoinformat, Tokyo
1138656, Japan.
EM kawaharazuka@jsk.imi.i.u-tokyo.ac.jp; kawamura@jsk.imi.i.u-tokyo.ac.jp;
makino@jsk.imi.i.u-tokyo.ac.jp; asano@jsk.imi.i.u-tokyo.ac.jp;
k-okada@jsk.imi.i.u-tokyo.ac.jp; inaba@jsk.imi.i.u-tokyo.ac.jp
CR Asano Y, 2016, IEEE-RAS INT C HUMAN, P876, DOI 10.1109/HUMANOIDS.2016.7803376
Asano Y, 2015, IEEE INT C INT ROBOT, P5960, DOI 10.1109/IROS.2015.7354225
Asano Y, 2013, IEEE-RAS INT C HUMAN, P336, DOI 10.1109/HUMANOIDS.2013.7029996
Jacobsen SC, 1990, IEEE CONTR SYST MAG, V10, P23, DOI 10.1109/37.45790
Jantsch M, 2013, IEEE-RAS INT C HUMAN, P342, DOI 10.1109/HUMANOIDS.2013.7029997
Jantsch M., 2011, 2011 IEEE International Conference on Robotics and Biomimetics
(ROBIO), P2211, DOI 10.1109/ROBIO.2011.6181620
Kawamura M, 2016, IEEE-RAS INT C HUMAN, P814, DOI 10.1109/HUMANOIDS.2016.7803367
Ookubo S, 2015, IEEE-RAS INT C HUMAN, P765, DOI 10.1109/HUMANOIDS.2015.7363456
Potkonjak V, 2011, INT J ADV ROBOT SYST, V8, P143
Shirai T., 2011, 2011 IEEE International Conference on Robotics and Biomimetics
(ROBIO), P2229, DOI 10.1109/ROBIO.2011.6181623
NR 10
TC 0
Z9 0
U1 1
U2 1
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2377-3766
J9 IEEE ROBOT AUTOM LET
JI IEEE Robot. Autom. Lett.
PD OCT
PY 2017
VL 2
IS 4
BP 2119
EP 2126
DI 10.1109/LRA.2017.2720854
PG 8
WC Robotics
SC Robotics
GA FK8EY
UT WOS:000413741800038
DA 2018-01-22
ER

PT J
AU Sugimura, S
Hoshino, K
AF Sugimura, Sota
Hoshino, Kiyoshi
TI Wearable Hand Pose Estimation for Remote Control of a Robot on the Moon
SO JOURNAL OF ROBOTICS AND MECHATRONICS
LA English
DT Article
DE hand pose estimation; wearable; miniature RGB camera; remote control of
a robot
AB In recent years, a plan has been developed to conduct an investigation of an
unknown environment by remotely controlling an exploration robot system sent to the
Moon. For the robot to successfully perform sophisticated tasks, it must implement
multiple degrees of freedom in not only its arms and legs but also its hands and
fingers. Moreover, the robot has to be a humanoid type so that it can use tools
used by astronauts, with no modification. On the other hand, to manipulate the
multiple-degrees-of-freedom robot without learning skills necessary for
manipulation and to minimize the psychological burden on operators, employing a
method that utilizes everyday movements of operators as input to the robot, rather
than a special controller, is ideal. In this paper, the authors propose a compact
wearable device that allows for the estimation of the hand pose (hand motion
capture) of the subject. The device has a miniature wireless RGB camera that is
placed on the back of the user's hand, rather than on the palm. Attachment of the
small camera to the back of the hand may make it possible to minimize the restraint
on the subject's motions during motion capture. In conventional techniques, the
camera is attached to the palm because images of the fingertips always need to be
captured by the camera before images of any other part of the hand can be captured.
In contrast, the image processing algorithm proposed herein is capable of
estimating the hand pose with no need for first capturing the fingertips.
C1 [Sugimura, Sota; Hoshino, Kiyoshi] Univ Tsukuba, Grad Sch Syst & Informat Engn,
1-1-1 Tennodai, Tsukuba, Ibaraki 3058573, Japan.
RP Sugimura, S (reprint author), Univ Tsukuba, Grad Sch Syst & Informat Engn, 1-1-1
Tennodai, Tsukuba, Ibaraki 3058573, Japan.
EM s1620795@u.tsukuba.ac.jp; hoshino@esys.tsukuba.ac.jp
CR Bejczy A. K., 1990, Proceedings 1990 IEEE International Conference on Robotics
and Automation (Cat. No.90CH2876-1), P546, DOI 10.1109/ROBOT.1990.126037
Fernandes L. A. F., 2008, REV INFORM TEORICA A, V15, P75
Fukui Rui, 2011, UBICOMP, P311
Haruyama J., 2016, AEROSPACE TECHNOLOGY, V14, P147
Haruyama J, 2009, GEOPHYS RES LETT, V36, DOI 10.1029/2009GL040635
Hoshino K, 2006, IEICE T INF SYST, VE89D, P1820, DOI 10.1093/ietisy/e89-d.6.1820
Hoshino K., 2015, HDB DIGITAL GAMES EN, P1
Hoshino K., 2012, J ROBOTICS MECHATRON, V24, P180
Hoshino K., 2017, INT J AUTOMATION TEC, V11, P433
Kawano I., 2016, P 60 SPAC SCI TECHN
Kim D., 2012, UIST 12 P 25 ANN ACM, V176
Otsu N., 1998, P IAPR WORKSH COMP V, P431
Tarn T. J., 1997, IFAC P, V30, P599
Tomida M., 2015, J ROBOTICS MECHATRON, V27, P167
NR 14
TC 0
Z9 0
U1 1
U2 1
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 0915-3942
EI 1883-8049
J9 J ROBOT MECHATRON
JI J. Robot. Mechatron.
PD OCT
PY 2017
VL 29
IS 5
SI SI
BP 829
EP 837
DI 10.20965/jrm.2017.p0829
PG 9
WC Robotics
SC Robotics
GA FK3QA
UT WOS:000413399700007
DA 2018-01-22
ER

PT J
AU Chen, XC
Yu, ZG
Zhang, WM
Zheng, Y
Huang, Q
Ming, AG
AF Chen, Xuechao
Yu, Zhangguo
Zhang, Weimin
Zheng, Yu
Huang, Qiang
Ming, Aiguo
TI Bioinspired Control of Walking With Toe-Off, Heel-Strike, and
Disturbance Rejection for a Biped Robot
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Acceleration optimization; biped robot; bioinspired control; motion
analysis
ID PATTERN GENERATION; HUMANOID ROBOT; GAIT; DYNAMICS; MOTION; MODEL
AB Human-like features, like toe-off, heel-strike, and disturbance rejection, can
enhance the performance of bipedal robots. However, the required control strategies
for these motions influence each other, and few studies have considered them
simultaneously. Humans can walk stably with toe-off and heel-strike even after
experiencing disturbances. Thus, we can study human control strategies and then
apply them to a bipedal robot. This paper proposes a bioinspired control method to
realize stable walking with toe-off and heel-strike for a bipedal robot even after
disturbances. First, we analyze human walking and obtain some control strategies.
Then, we propose a pattern generator and a walking controller to mimic these
strategies. The pattern generator can predefine the zero-moment-point to plan the
center of mass trajectory and determine appropriate foot placement. The controller
adjusts torso acceleration to make the support leg compliant with the external
disturbances. The controller also achieves toe-off and heel-strike in cooperation
with the pattern generator. Finally, the validity of the proposed method is
confirmed through simulations and experiments.
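For readers unfamiliar with ZMP-based planning, the commonly used cart-table (linear inverted pendulum) relation between the ZMP and the center of mass is shown below; the abstract does not state the exact model, so this is an assumption about the general approach rather than the paper's formulation.

```latex
% Cart-table / linear inverted pendulum relation (assumed model; constant CoM
% height z_c, gravity g). Predefining the ZMP trajectory (p_x, p_y) and
% integrating these equations yields the CoM trajectory (x_c, y_c).
\[
  p_x = x_c - \frac{z_c}{g}\,\ddot{x}_c ,
  \qquad
  p_y = y_c - \frac{z_c}{g}\,\ddot{y}_c
\]
```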
C1 [Chen, Xuechao; Yu, Zhangguo; Zhang, Weimin; Huang, Qiang] Minist Educ, Sch
Mechatron Engn, Intelligent Robot Inst, Beijing 100081, Peoples R China.
[Chen, Xuechao; Yu, Zhangguo; Zhang, Weimin; Huang, Qiang] Minist Educ, Key Lab
Intelligent Control & Decis Complex Syst, Key Lab Biomimet Robots & Syst, Beijing
100081, Peoples R China.
[Chen, Xuechao; Yu, Zhangguo; Zhang, Weimin; Huang, Qiang] Beijing Inst Technol,
Beijing Adv Innovat Ctr Intelligent Robots & Syst, Beijing 100081, Peoples R China.
[Zheng, Yu] Univ Michigan Dearborn, Dearborn, MI 48128 USA.
[Ming, Aiguo] Univ Electrocommun, Tokyo 1828585, Japan.
[Ming, Aiguo] Beijing Inst Technol, Beijing 100081, Peoples R China.
RP Yu, ZG; Huang, Q (reprint author), Minist Educ, Sch Mechatron Engn, Intelligent
Robot Inst, Beijing 100081, Peoples R China.; Yu, ZG; Huang, Q (reprint author),
Minist Educ, Key Lab Intelligent Control & Decis Complex Syst, Key Lab Biomimet
Robots & Syst, Beijing 100081, Peoples R China.; Yu, ZG; Huang, Q (reprint author),
Beijing Inst Technol, Beijing Adv Innovat Ctr Intelligent Robots & Syst, Beijing
100081, Peoples R China.
EM chenxuechao@bit.edu.cn; yuzg@bit.edu.cn; zhwm@bit.edu.cn;
yuuzheng@umich.edu; qhuang@bit.edu.cn; ming@mce.uec.ac.jp
FU National Natural Science Foundation of China [61533004, 61320106012,
61375103, 61573063, 61321002, 61673069]; "111" Project [B08043]
FX This work was supported in part by the National Natural Science
Foundation of China under Grant 61533004, Grant 61320106012, Grant
61375103, Grant 61573063, Grant 61321002, and Grant 61673069, and in
part by the "111" Project under Grant B08043.
CR Chen XC, 2014, IEEE-RAS INT C HUMAN, P506, DOI 10.1109/HUMANOIDS.2014.7041409
Chen XC, 2014, ROBOTICA, V32, P467, DOI 10.1017/S0263574713000829
Cuccurullo S. J., 2004, GAIT ANAL PHYS MED R, P411
Dtrov D., 2011, P IEEE INT C ROB AUT, P3523
Erbatur K, 2009, IEEE T IND ELECTRON, V56, P835, DOI 10.1109/TIE.2008.2005150
Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
Funato T, 2016, PLOS COMPUT BIOL, V12, DOI 10.1371/journal.pcbi.1004950
Geng T, 2014, IEEE T IND ELECTRON, V61, P2326, DOI 10.1109/TIE.2013.2272274
Heo JW, 2015, IEEE T IND ELECTRON, V62, P1091, DOI 10.1109/TIE.2014.2359418
Hong S, 2014, IEEE T IND ELECTRON, V61, P355, DOI 10.1109/TIE.2013.2242412
Huang Q, 2005, IEEE T ROBOT, V21, P977, DOI 10.1109/TRO.2005.851381
Huang QA, 2010, ROBOTICA, V28, P737, DOI 10.1017/S0263574709990439
Hyon SH, 2015, IEEE-RAS INT C HUMAN, P576, DOI 10.1109/HUMANOIDS.2015.7363420
Koolen T, 2012, INT J ROBOT RES, V31, P1094, DOI 10.1177/0278364912452673
Ohashi E, 2009, IEEE T IND ELECTRON, V56, P3964, DOI 10.1109/TIE.2009.2024098
Otani T., 2013, P IEEE INT C ROB AUT, P659
Roa M. A., 2013, BALANCE POSTURE CONT, P129
Shimmyo S, 2013, IEEE T IND ELECTRON, V60, P5137, DOI 10.1109/TIE.2012.2221111
Tlalolini D, 2011, IEEE-ASME T MECH, V16, P310, DOI 10.1109/TMECH.2010.2042458
Urata J, 2012, IEEE INT C INT ROBOT, P3411, DOI 10.1109/IROS.2012.6385840
Usherwood JR, 2012, J R SOC INTERFACE, V9, P2396, DOI 10.1098/rsif.2012.0179
YANG JF, 1990, BIOL CYBERN, V62, P321, DOI 10.1007/BF00201446
Yi SJ, 2016, IEEE-RAS INT C HUMAN, P395, DOI 10.1109/HUMANOIDS.2016.7803306
Yi SJ, 2012, IEEE INT C INT ROBOT, P4034, DOI 10.1109/IROS.2012.6385854
Yoshida Y, 2014, IEEE T SYST MAN CY-S, V44, P692, DOI 10.1109/TSMC.2013.2272612
NR 25
TC 0
Z9 0
U1 21
U2 21
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD OCT
PY 2017
VL 64
IS 10
BP 7962
EP 7971
DI 10.1109/TIE.2017.2698361
PG 10
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA FG4AR
UT WOS:000410160200034
DA 2018-01-22
ER

PT J
AU Miyamura, M
Sakamoto, T
Bai, X
Tsuji, Y
Morioka, A
Nebashi, R
Tada, M
Banno, N
Okamoto, K
Iguchi, N
Hada, H
Sugibayashi, T
Nagamatsu, Y
Ookubo, S
Shirai, T
Sugai, F
Inaba, M
AF Miyamura, Makoto
Sakamoto, Toshitsugu
Bai, Xu
Tsuji, Yukihide
Morioka, Ayuka
Nebashi, Ryusuke
Tada, Munehiro
Banno, Naoki
Okamoto, Koichiro
Iguchi, Noriyuki
Hada, Hiromitsu
Sugibayashi, Tadahiko
Nagamatsu, Yuya
Ookubo, Soichi
Shirai, Takuma
Sugai, Fumihito
Inaba, Masayuki
TI NanoBridge-Based FPGA in High-Temperature Environments
SO IEEE MICRO
LA English
DT Article
ID RELIABILITY
AB The authors evaluate NB-FPGA, a field-programmable gate array based on
NanoBridge (a novel resistive-change switch), for harsh-environment applications.
They implemented NB-FPGA in a humanoid robot and compared its performance with that
of a conventional FPGA. Results show that NB-FPGA exhibits small variation in
performance over a wide range of temperatures and has high immunity to fluctuations
in the power supply voltage.
C1 [Miyamura, Makoto; Sakamoto, Toshitsugu; Bai, Xu; Tsuji, Yukihide; Morioka,
Ayuka; Nebashi, Ryusuke; Tada, Munehiro; Banno, Naoki; Okamoto, Koichiro; Iguchi,
Noriyuki; Hada, Hiromitsu; Sugibayashi, Tadahiko] NEC Corp Ltd, Syst Platform Res
Labs, Tokyo, Japan.
[Nagamatsu, Yuya] Univ Tokyo, Grad Sch Informat Sci & Technol, Dept
Mechanoinformat, Tokyo, Japan.
[Ookubo, Soichi; Shirai, Takuma; Sugai, Fumihito] Univ Tokyo, Tokyo, Japan.
[Inaba, Masayuki] Univ Tokyo, Grad Sch Informat Sci & Technol, Dept Creat
Informat, Tokyo, Japan.
RP Miyamura, M (reprint author), NEC Corp Ltd, Syst Platform Res Labs, Tokyo,
Japan.
EM m-miyamura@aj.jp.nec.com; t-sakamoto@dp.jp.nec.co.jp;
x-bai@bc.jp.nec.com; y-tsuji@az.jp.nec.com; a-morioka@ce.jp.nec.com;
r-nebashi@ak.jp.nec.com; m-tada@bl.jp.nec.com; banno@bu.jp.nec.com;
k-okamoto@fb.jp.nec.com; iguchi@aj.jp.nec.com; h-hada@bp.jp.nec.com;
t-sugibahashi@da.jp.nec.com; nagamatsu@jsk.imi.i.u-tokyo.ac.jp;
ookubo@jsk.imi.i.u-tokyo.ac.jp; t_shirai@jsk.t.u-tokyo.ac.jp;
sugai@jsk.imi.i.u-tokyo.ac.jp; inaba@jsk.imi.i.u-tokyo.ac.jp
FU Ministry of Economy, Trade and Industry (METI); New Energy and
Industrial Technology Development Organization (NEDO)
FX This work was performed as the "Ultra-Low Voltage Device Project" funded
and supported by the Ministry of Economy, Trade and Industry (METI) and
the New Energy and Industrial Technology Development Organization
(NEDO). Part of the device processing was conducted by the SCR Operation
Office of the National Institute of Advanced Industrial Science and
Technology (AIST), Japan.
CR Betz V., 1999, ARCHITECTURE CAD DEE
Han S., 2013, P 17 INT C SOL STAT, P16
Huang KJ, 2014, IEEE T CIRCUITS-I, V61, P2605, DOI 10.1109/TCSI.2014.2312499
Ibe E, 2010, IEEE T ELECTRON DEV, V57, P1527, DOI 10.1109/TED.2010.2047907
Ito Y., 2012, P IEEE RAS INT C HUM
Koga M., 2010, P IEEE REG 10 C, P21
Miyamura M., 2015, P FPGA 15, P236
Miyamura M., 2012, IEEE INT EL DEV M
Nakaya Shogo, 2012, Computer Architecture News, V40, P87, DOI
10.1145/2460216.2460232
Sakamoto T, 2003, APPL PHYS LETT, V82, P3032, DOI 10.1063/1.1572964
Suzuki D., 2015, P S VLSI CIRC, pC172
Tada M, 2012, IEEE T ELECTRON DEV, V59, P2357, DOI 10.1109/TED.2012.2204263
Tada M, 2011, IEEE T ELECTRON DEV, V58, P4398, DOI 10.1109/TED.2011.2169070
Takeuchi K., 2015, P 28 MICR WORKSH
Tsuji Y., 2015, IEEE S VLSI TECHN VL, P86
Vatankhahghadim A, 2015, IEEE T CIRCUITS-II, V62, P573, DOI
10.1109/TCSII.2015.2407711
Wen C.-Y, 2011, 2011 Symposium on VLSI Circuits. Digest of Technical Papers,
P302
Yasuda S., 2016, P IEEE S VLSI TECHN, P24
Young Yang Liauw, 2012, 2012 IEEE International Solid-State Circuits Conference
(ISSCC), P406, DOI 10.1109/ISSCC.2012.6177067
Zand R, 2016, IEEE T CIRCUITS-II, V63, P678, DOI 10.1109/TCSII.2016.2532099
Zhang H, 2016, IEEE T ELECTRON DEV, V63, P2999, DOI 10.1109/TED.2016.2572719
NR 21
TC 0
Z9 0
U1 2
U2 2
PU IEEE COMPUTER SOC
PI LOS ALAMITOS
PA 10662 LOS VAQUEROS CIRCLE, PO BOX 3014, LOS ALAMITOS, CA 90720-1314 USA
SN 0272-1732
EI 1937-4143
J9 IEEE MICRO
JI IEEE Micro
PD SEP-OCT
PY 2017
VL 37
IS 5
BP 32
EP 42
DI 10.1109/MM.2017.3711648
PG 11
WC Computer Science, Hardware & Architecture; Computer Science, Software
Engineering
SC Computer Science
GA FJ8NM
UT WOS:000413024800005
DA 2018-01-22
ER

PT J
AU De Magistris, G
Pajon, A
Miossec, S
Kheddar, A
AF De Magistris, Giovanni
Pajon, Adrien
Miossec, Sylvain
Kheddar, Abderrahmane
TI Optimized humanoid walking with soft soles
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Compliant soles; Deformation estimator; Walking pattern generator;
Humanoid robots; Optimized walking gait
ID ROBOT; LOCOMOTION; HRP-2
AB In order to control the feet-ground interaction of humanoid robots during
walking more efficiently, we investigate adding outer soft (i.e., compliant) soles to
the feet. The deformation subsequent to the contact of the soles with the ground is
taken into account using a new walking pattern generator and deformation estimator.
This novel humanoid walking approach ensures that the desired zero moment point for
stability requirement is fulfilled. We validate our new controller using the HRP-4
humanoid robot performing walking experiments with and without the estimator. Also,
to test the robustness of our approach and to obtain low-energy walking, we
performed different walking motions. (C) 2017 Elsevier B.V. All rights reserved.
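To make the role of the deformation estimator concrete, here is a minimal sketch that treats the soft sole as a uniform linear spring layer and estimates compression and foot tilt from the measured contact wrench; the stiffness model and numbers are assumptions, not the estimator used in the paper.

```python
# Minimal sketch (an assumption, not the paper's estimator): treat the soft
# sole as a uniform linear spring layer and estimate its compression and the
# induced foot tilt from the measured contact wrench, so the pattern generator
# can compensate the resulting ZMP shift. Stiffness values are illustrative.

def estimate_sole_deformation(F_z, M_x, M_y,
                              k_area=2.0e7,      # vertical stiffness per unit area [N/m^3]
                              width=0.1, length=0.2):
    """Return (vertical compression [m], roll tilt [rad], pitch tilt [rad])."""
    area = width * length
    k_lin = k_area * area                        # total vertical stiffness [N/m]
    k_roll = k_area * length * width**3 / 12.0   # foundation rotational stiffnesses
    k_pitch = k_area * width * length**3 / 12.0
    return F_z / k_lin, M_x / k_roll, M_y / k_pitch
```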
C1 [De Magistris, Giovanni; Kheddar, Abderrahmane] CNRS, UMI3218, AIST Joint Robot
Lab JRL, RL, Tokyo, Japan.
[De Magistris, Giovanni; Pajon, Adrien; Kheddar, Abderrahmane] Univ Montpellier,
CNRS, LIRMM, Interact Digital Human, Montpellier, France.
[Miossec, Sylvain] Univ Orleans, PRISME Lab, F-18020 Bourges, France.
RP De Magistris, G (reprint author), CNRS, UMI3218, AIST Joint Robot Lab JRL, RL,
Tokyo, Japan.
EM giova.dema85@gmail.com; adrien.pajon@gmail.com
FU EU [611909]
FX This research is supported in part from the EU 7th Framework Program
(FP7/2007-2013) under grant agreement No 611909 KoroiBot project
www.koroibot.eu.
CR ALART P, 1991, COMPUT METHOD APPL M, V92, P353, DOI 10.1016/0045-7825(91)90022-X
Bonnet M., 2014, FINITE ELEMENT METHO, P365
Bouyarmane K, 2012, ADV ROBOTICS, V26, P1099, DOI 10.1080/01691864.2012.686345
Bro-Nielsen M., 1996, COMPUT GRAPH FORUM, V15, P17
Bruneau O, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P512, DOI
10.1109/IROS.2001.973408
De Magistris G., 2016, INT C ROB AUT ICRA S
De Magistris G., 2016, ROBOT AUTON SYST RAS, P1
Duriez C, 2006, IEEE T VIS COMPUT GR, V12, P36, DOI 10.1109/TVCG.2006.13
Harada K., 2004, INT J HUMANOID ROB
Herdt A, 2010, ADV ROBOTICS, V24, P719, DOI 10.1163/016918610X493552
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Jean M., C CONT MECH SOUTH, P71
Jourdan F., 1998, COMPUTER METHODS APP, P33
Kajita S., 2008, SPRINGER HDB ROBOTIC, P361, DOI DOI 10.1007/978-3-540-30301-
5_17
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Moreau J.J., 1996, ENG SYSTEMS DESIGN A, P201
Morisawa M, 2006, IEEE-RAS INT C HUMAN, P581, DOI 10.1109/ICHR.2006.321332
Pajon Adrien, 2015, INT C ADV ROB ICAR I
Popovic MB, 2005, INT J ROBOT RES, V24, P1013, DOI 10.1177/0278364905058363
PRINCE F, 1994, GAIT POSTURE, V2, P19, DOI 10.1016/0966-6362(94)90013-2
Shin HK, 2014, IEEE T ROBOT, V30, P986, DOI 10.1109/TRO.2014.2305792
Vaillant J, 2016, AUTON ROBOT, V40, P561, DOI 10.1007/s10514-016-9546-4
Wieber Pierre-Brice, 2002, P INT WORKSH HUM FRI
Wieber PB, 2006, IEEE-RAS INT C HUMAN, P137, DOI 10.1109/ICHR.2006.321375
Yamaguchi J, 1996, MF '96 - 1996 IEEE/SICE/RSJ INTERNATIONAL CONFERENCE ON
MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS, P233, DOI
10.1109/MFI.1996.572183
NR 25
TC 1
Z9 1
U1 0
U2 0
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD SEP
PY 2017
VL 95
BP 52
EP 63
DI 10.1016/j.robot.2017.05.006
PG 12
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA FH4YE
UT WOS:000411167900005
DA 2018-01-22
ER

PT J
AU De Magistris, G
Miossec, S
Escande, A
Kheddar, A
AF De Magistris, Giovanni
Miossec, Sylvain
Escande, Adrien
Kheddar, Abderrahmane
TI Design of optimized soft soles for humanoid robots
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Shape optimizations; Compliant soles; Humanoid robots; Walking
AB We describe a methodology to design foot soles for a humanoid robot given
walking gait parameters (i.e. given center-of-mass and zero-moment-point
trajectories). In order to obtain an optimized compliant sole, we devised a shape
optimization framework which takes, among other inputs, an initial rough
(simplified) shape of the sole and refines it through successive optimization steps
under additional constraints and a cost function. The shape is optimized based on
the simulation of the sole deformation during an entire walking step, taking time
dependent input of the walking pattern generator into account. Our shape
optimization framework is able to minimize the impact of the foot with the ground
during the heel-strike phase and to limit foot rotation in case of perturbations.
Indeed, low foot rotation enforces a vertical posture and secures the balance of
the humanoid robot. Moreover, weight restriction (formulated as a constraint on the
sole volume) is added to our optimization problem. (C) 2017 Elsevier B.V. All
rights reserved.
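A toy constrained-optimization loop in the spirit of this framework is sketched below, with per-region sole thicknesses as design variables, a surrogate impact-plus-rotation cost, and a volume (weight) constraint; the surrogate models and all numbers are illustrative assumptions, not the paper's FEM-based simulation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained shape-optimization loop in the spirit of the abstract
# (surrogate cost, not the paper's FEM-based simulation): design variables are
# per-region sole thicknesses, the cost penalizes a surrogate heel-strike
# impact plus foot rotation, and a volume constraint limits sole weight.
# All models and numbers are illustrative assumptions.

REGION_AREA = np.array([4e-3, 6e-3, 6e-3, 4e-3])   # m^2 per sole region (assumed)
V_MAX = 2.0e-4                                      # allowed sole volume [m^3]

def surrogate_cost(t):
    impact = np.sum(1.0 / (t + 1e-4))   # thinner regions -> harsher impact (toy model)
    rotation = 50.0 * np.var(t)         # uneven thickness -> more foot rotation (toy model)
    return impact + rotation

volume_constraint = {"type": "ineq",
                     "fun": lambda t: V_MAX - np.dot(REGION_AREA, t)}

res = minimize(surrogate_cost,
               x0=np.full(4, 0.01),               # start from a 10 mm flat sole
               method="SLSQP",
               bounds=[(2e-3, 3e-2)] * 4,         # 2 mm .. 30 mm per region
               constraints=[volume_constraint])
print(res.x)                                      # optimized per-region thicknesses
```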
C1 [De Magistris, Giovanni; Escande, Adrien; Kheddar, Abderrahmane] CNRS, UMI3218,
AIST Joint Robot Lab JRL, RL, Tsukuba, Ibaraki, Japan.
[De Magistris, Giovanni; Escande, Adrien; Kheddar, Abderrahmane] CNRS, LIRMM,
Interact Digital Human, UM, Montpellier, France.
[Miossec, Sylvain] Univ Orleans, PRISME Lab, F-18020 Bourges, France.
RP De Magistris, G (reprint author), CNRS, UMI3218, AIST Joint Robot Lab JRL, RL,
Tsukuba, Ibaraki, Japan.
EM giovanni_demagistris@hotmail.it
FU EU [611909]
FX The research leading to these results has received funding from the EU
7th Framework Program (FP7/2007-2013) under grant agreement no 611909
(KoroiBot).
CR ALART P, 1991, COMPUT METHOD APPL M, V92, P353, DOI 10.1016/0045-7825(91)90022-X
Alart P., 1993, RAIRO-MATH MODEL NUM, V27, P203
[Anonymous], 2003, MAT DAT BOOK
De Magistris G, 2017, ROBOT AUTON SYST, V95, P52, DOI
10.1016/j.robot.2017.05.006
DeMagistris Giovanni, 2016, INT C ROB AUT
Duriez C, 2006, IEEE T VIS COMPUT GR, V12, P36, DOI 10.1109/TVCG.2006.13
Hecht F., 2004, FREEFEM MANUAL
Jean Michel, 1993, C CONT MECH SOUTH, P71
Jean M., 1996, DOCUMENTATION LMGC L
Jourdan F., 1998, COMPUTER METHODS APP, P33
Moreau J.J., 1996, ENG SYSTEMS DESIGN A, P201
Pajon Adrien, 2015, INT C ADV ROB ICAR I
Yamane Katsu, 2009, 2009 9th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2009), P230, DOI 10.1109/ICHR.2009.5379576
NR 13
TC 0
Z9 0
U1 3
U2 3
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD SEP
PY 2017
VL 95
BP 129
EP 142
DI 10.1016/j.robot.2017.05.005
PG 14
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA FH4YE
UT WOS:000411167900010
DA 2018-01-22
ER

PT J
AU Aymerich-Franch, L
Petit, D
Ganesh, G
Kheddar, A
AF Aymerich-Franch, Laura
Petit, Damien
Ganesh, Gowrishankar
Kheddar, Abderrahmane
TI Non-human Looking Robot Arms Induce Illusion of Embodiment
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Humanoid robot embodiment; Non-human looking robot; Body ownership
illusion; Rubber-hand illusion
ID RUBBER HAND ILLUSION; OWNERSHIP; BODY; TOUCH; LIMB
AB We examine whether non-human looking humanoid robot arms can be perceived as
part of one's own body. In two subsequent experiments, participants experienced
high levels of embodiment of a robotic arm that had a blue end effector with no
fingers (Experiment 1) and of a robotic arm that ended with a gripper (Experiment
2) when it was stroked synchronously with the real arm. Levels of embodiment were
significantly higher than the corresponding asynchronous condition and similar to
those reported for a human-looking arm. Additionally, we found that visuo-movement
synchronization also induced embodiment of the robot arm and that embodiment was
even partially maintained when the robot hand was covered with a blue plastic
cover. We conclude that humans are able to experience a strong sense of embodiment
towards non-human looking robot arms. The results have important implications for
the domains related to robotic embodiment.
C1 [Aymerich-Franch, Laura; Petit, Damien; Ganesh, Gowrishankar; Kheddar,
Abderrahmane] Natl Inst Adv Ind Sci & Technol, UMI3218 RL, CNRS AIST Joint Robot
Lab JRL, Cent 1,1-1-1 Umezono, Tsukuba, Ibaraki 3058560, Japan.
[Petit, Damien; Ganesh, Gowrishankar; Kheddar, Abderrahmane] CNRS UM LIRMM,
Interact Digital Human Grp, UMR5506, Montpellier, France.
[Aymerich-Franch, Laura] Natl Inst Adv Ind Sci & Technol, Cent 1,1-1-1 Umezono,
Tsukuba, Ibaraki 3058560, Japan.
RP Aymerich-Franch, L (reprint author), Natl Inst Adv Ind Sci & Technol, UMI3218
RL, CNRS AIST Joint Robot Lab JRL, Cent 1,1-1-1 Umezono, Tsukuba, Ibaraki 3058560,
Japan.; Aymerich-Franch, L (reprint author), Natl Inst Adv Ind Sci & Technol, Cent
1,1-1-1 Umezono, Tsukuba, Ibaraki 3058560, Japan.
EM laura.aymerich@gmail.com
FU European Union; Marie Curie IOF Fellowship project HumRobCooperation
[PIOF-CT-622764]; FP7 IP VERE [257695]; Kakenhi 'houga' Grant from the
Japan Society for the Promotion of Science (JSPS) [15616710]
FX This project has received funding from the European Union with the Marie
Curie IOF Fellowship project HumRobCooperation under Grant agreement No.
PIOF-CT-622764. It is also partially supported from the FP7 IP VERE No.
257695 and the Kakenhi 'houga' Grant 15616710 from the Japan Society for
the Promotion of Science (JSPS). We specially thank Prof. E. Yoshida for
his support in the ethical procedures and the interns at our laboratory
in Japan who collaborated for the pretest or to appear in the pictures.
CR Alimardani M, 2013, SCI REP-UK, V3, DOI 10.1038/srep02396
Armel KC, 2003, P ROY SOC B-BIOL SCI, V270, P1499, DOI 10.1098/rspb.2003.2364
Aymerich-Franch L, 2016, CONSCIOUS COGN, V46, P99, DOI
10.1016/j.concog.2016.09.017
Aymerich-Franch L, 2016, NEUROSCI RES, V104, P31, DOI
10.1016/j.neures.2015.11.001
Aymerich-Franch L, 2014, FRONT HUM NEUROSCI, V8, DOI 10.3389/fnhum.2014.00944
Aymerich-Franch L, 2010, CYBERPSYCH BEH SOC N, V13, P649, DOI
10.1089/cyber.2009.0412
Aymerich-Franch L, 2012, P INT SOC PRES RES A
Aymerich-Franch L, 2015, 2015 IEEE INT WORK A
Aymerich-Franch L, ENCY INFORM IN PRESS
Blanke O, 2009, TRENDS COGN SCI, V13, P7, DOI 10.1016/j.tics.2008.10.003
Botvinick M, 1998, NATURE, V391, P756, DOI 10.1038/35784
Connor KM, 2001, DEPRESS ANXIETY, V14, P137, DOI 10.1002/da.1055
Costantini M, 2007, CONSCIOUS COGN, V16, P229, DOI 10.1016/j.concog.2007.01.001
Ehrsson HH, 2004, SCIENCE, V305, P875, DOI 10.1126/science.1097011
Ehrsson H. H., 2012, NEW HDB MULTISENSORY, P775
Ferrari F, 2016, INT J SOC ROBOT, V8, P287, DOI 10.1007/s12369-016-0338-y
GREENWALD AG, 1976, PSYCHOL BULL, V83, P314, DOI 10.1037//0033-2909.83.2.314
Hellman RB, 2015, FRONT HUM NEUROSCI, V9, DOI 10.3389/fnhum.2015.00026
Hohwy J, 2010, PLOS ONE, V5, DOI 10.1371/journal.pone.0009416
Holmes NP, 2006, PERCEPT PSYCHOPHYS, V68, P685, DOI 10.3758/BF03208768
Kaneko K., 2004, IEEE INT C ROB AUT
Kantowitz BH, 2015, EXPT PSYCHOL
Kilteni K, 2012, PRESENCE-TELEOP VIRT, V21, P373, DOI 10.1162/PRES_a_00124
Lloyd DM, 2007, BRAIN COGNITION, V64, P104, DOI 10.1016/j.bandc.2006.09.013
Longo MR, 2008, COGNITION, V107, P978, DOI 10.1016/j.cognition.2007.12.004
Longo MR, 2009, ACTA PSYCHOL, V132, P166, DOI 10.1016/j.actpsy.2009.02.003
MacKenzie I. S., 2013, HUMAN COMPUTER INTER
Mansard N, 2009, 2009 INT C ADV ROB
Maselli A, 2013, FRONT HUM NEUROSCI, V7, DOI 10.3389/fnhum.2013.00083
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Pavani F, 2007, PERCEPTION, V36, P1547, DOI 10.1068/p5853
Petit D, 2015, IEEE INT C ROB AUT
Romano D, 2015, NEUROPSYCHOLOGIA, V70, P414, DOI
10.1016/j.neuropsychologia.2014.10.033
SACKETT DL, 1979, J CHRON DIS, V32, P51, DOI 10.1016/0021-9681(79)90012-2
Sanchez-Vives MV, 2010, PLOS ONE, V5, DOI 10.1371/journal.pone.0010381
Seeley-Wait Elizabeth, 2009, Prim Care Companion J Clin Psychiatry, V11, P231,
DOI 10.4088/PCC.07m00576
Tsakiris M, 2005, J EXP PSYCHOL HUMAN, V31, P80, DOI 10.1037/0096-1523.31.1.80
Zlotowski J, 2015, INT J SOC ROBOT, V7, P347, DOI 10.1007/s12369-014-0267-6
NR 38
TC 0
Z9 0
U1 2
U2 2
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD SEP
PY 2017
VL 9
IS 4
BP 479
EP 490
DI 10.1007/s12369-017-0397-8
PG 12
WC Robotics
SC Robotics
GA FE7RY
UT WOS:000408405800003
DA 2018-01-22
ER

PT J
AU Ahmadi, A
Tani, J
AF Ahmadi, Ahmadreza
Tani, Jun
TI How can a recurrent neurodynamic predictive coding model cope with
fluctuation in temporal patterns? Robotic experiments on imitative
interaction
SO NEURAL NETWORKS
LA English
DT Article
DE Neuro-robotics; Predictive coding; Recurrent neural networks;
Synchronized imitation; Time-warping; Error regression
ID SELF-ORGANIZATION; MIRROR SYSTEM; BEHAVIOR; IMAGERY; CORTEX; BRAIN
AB The current paper examines how a recurrent neural network (RNN) model using a
dynamic predictive coding scheme can cope with fluctuations in temporal patterns
through generalization in learning. The conjecture driving the present inquiry is
that an RNN model with multiple timescales (MTRNN) learns by extracting patterns of
change from observed temporal patterns, developing an internal dynamic structure
such that variance in initial internal states accounts for modulations in the
corresponding observed patterns. We trained an MTRNN with low-dimensional temporal
patterns and assessed performance on an imitation task employing these patterns.
Analysis reveals that imitating fluctuating patterns consists in inferring optimal
internal states by error regression. The model was then tested through humanoid
robotic experiments requiring imitative interaction with human subjects. Results
show that spontaneous and lively interaction can be achieved as the model
successfully copes with fluctuations naturally occurring in human movement
patterns. (C) 2017 Elsevier Ltd. All rights reserved.
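The error-regression idea, inferring only the initial internal state of a trained network so that its prediction matches an observed window, can be illustrated with the small sketch below; the tiny tanh RNN, its random weights, and the finite-difference gradient are assumptions for illustration and not the paper's MTRNN.

```python
import numpy as np

# Minimal sketch of error regression: after learning, the network weights are
# frozen and only the initial internal state is updated by gradient descent so
# that the closed-loop prediction matches an observed window. The tiny tanh
# RNN, its random weights, and the finite-difference gradient are illustrative
# assumptions, not the paper's MTRNN.

rng = np.random.default_rng(0)
H, D, T = 8, 2, 30                                 # hidden units, outputs, window length
W_hh = 0.9 * rng.standard_normal((H, H)) / np.sqrt(H)
W_oh = rng.standard_normal((D, H)) / np.sqrt(H)

def rollout(h0):
    h, ys = h0, []
    for _ in range(T):
        h = np.tanh(W_hh @ h)
        ys.append(W_oh @ h)
    return np.array(ys)

def window_error(h0, target):
    return np.mean((rollout(h0) - target) ** 2)

target = rollout(rng.standard_normal(H))           # stands in for an observed pattern

h0, lr, eps = np.zeros(H), 0.2, 1e-4               # error regression on h0 only
for _ in range(300):
    grad = np.zeros(H)
    for i in range(H):
        d = np.zeros(H); d[i] = eps
        grad[i] = (window_error(h0 + d, target) - window_error(h0 - d, target)) / (2 * eps)
    h0 -= lr * grad

print(window_error(h0, target))   # error typically drops well below its value at h0 = 0
```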
C1 [Ahmadi, Ahmadreza; Tani, Jun] Korea Adv Inst Sci & Technol, Cognit Neurorobot
Lab, Dept Elect Engn, N1 Bldg,291 Daehak Ro,373-1 Guseong Dong, Daejeon 305701,
South Korea.
[Tani, Jun] Grad Univ, Okinawa Inst Sci & Technol, Cognit Neurorobot Res Unit,
1919-1 Tancha, Onnason, Okinawa 9040495, Japan.
RP Tani, J (reprint author), Korea Adv Inst Sci & Technol, Cognit Neurorobot Lab,
Dept Elect Engn, N1 Bldg,291 Daehak Ro,373-1 Guseong Dong, Daejeon 305701, South
Korea.
EM ar.ahmadi62@gmail.com; tani1216jp@gmail.com
RI Tani, Jun/H-3681-2012
FU ICT R&D program of MSIP/IITP [2016(R7117-16-0163)]
FX This work was supported by the ICT R&D program of MSIP/IITP.
[2016(R7117-16-0163), Intelligent Processor Architectures and
Application Software for CNN-RNN].
CR Arie H, 2012, ROBOT AUTON SYST, V60, P729, DOI 10.1016/j.robot.2011.11.005
BILLARD A, 2002, COM ADAP SY, P281
Bishop C. M., 2006, INFORM SCI STAT PATT
Demiris J, 2002, FROM ANIM ANIMAT, P327
Deshmukh O., 2002, IEEE INT C AC SPEECH, V1, P1
Dillmann R., 1995, INT S INT ROB SYST, P185
DUFAY B, 1984, INT J ROBOT RES, V3, P3, DOI 10.1177/027836498400300401
Friston KJ, 2011, BIOL CYBERN, V104, P137, DOI 10.1007/s00422-011-0424-z
Friston KJ, 2010, BIOL CYBERN, V102, P227, DOI 10.1007/s00422-010-0364-z
Friston KJ, 2010, NAT REV NEUROSCI, V11, P127, DOI 10.1038/nrn2787
Gerstner W., 2002, SPIKING NEURON MODEL
Graves A., 2012, ARXIV12113711
Hopfield JJ, 2001, P NATL ACAD SCI USA, V98, P1282, DOI 10.1073/pnas.031567098
Ito M, 2004, ADAPT BEHAV, V12, P93, DOI 10.1177/105971230401200202
JEANNEROD M, 1994, BEHAV BRAIN SCI, V17, P187, DOI 10.1017/S0140525X00034026
LECUN Y, 1989, CONNECTIONISM IN PERSPECTIVE, P143
LeCun Yann A., 2012, Neural Networks: Tricks of the Trade. Second Edition: LNCS
7700, P9, DOI 10.1007/978-3-642-35289-8_3
Levas A., 1984, P IEEE INT C ROB AUT, V1, P413
Murata S., 2015, IEEE T NEURAL NETWOR, P1
Murata S, 2013, IEEE T AUTON MENT DE, V5, P298, DOI 10.1109/TAMD.2013.2258019
Nishimoto R, 2009, PSYCHOL RES-PSYCH FO, V73, P545, DOI 10.1007/s00426-009-0236-
0
Oates T., 1999, P IJCAI 99 WORKSH NE, P17
Rao RPN, 1999, NAT NEUROSCI, V2, P79, DOI 10.1038/4580
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Rumelhart D.E., 1986, PARALLEL DISTRIBUTED, V1, P318
Segre A.M., 1985, P IEEE INT C ROB AUT, V2, P555
Tani J, 1996, IEEE T SYST MAN CY B, V26, P421, DOI 10.1109/3477.499793
Tani J, 2004, NEURAL NETWORKS, V17, P1273, DOI 10.1016/j.neunet.2004.05.007
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Vakanski A, 2012, IEEE T SYST MAN CY B, V42, P1039, DOI
10.1109/TSMCB.2012.2185694
Werbos P.J., 1974, THESIS
Yamashita Y, 2008, PLOS COMPUT BIOL, V4, DOI 10.1371/journal.pcbi.1000220
NR 32
TC 1
Z9 1
U1 8
U2 8
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0893-6080
EI 1879-2782
J9 NEURAL NETWORKS
JI Neural Netw.
PD AUG
PY 2017
VL 92
SI SI
BP 3
EP 16
DI 10.1016/j.neunet.2017.02.015
PG 14
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA EZ1YB
UT WOS:000404505600002
PM 28385623
DA 2018-01-22
ER

PT J
AU TermehYousefi, A
Azhari, S
Khajeh, A
Hamidon, MN
Tanaka, H
AF TermehYousefi, Amin
Azhari, Saman
Khajeh, Amin
Hamidon, Mohd Nizar
Tanaka, Hirofumi
TI Development of haptic based piezoresistive artificial fingertip: Toward
efficient tactile sensing systems for humanoids
SO MATERIALS SCIENCE & ENGINEERING C-MATERIALS FOR BIOLOGICAL APPLICATIONS
LA English
DT Article
DE Artificial finger; Haptic sensors; Nanocomposite; Modeling; Harvesting
robot
ID HUMAN-ROBOT INTERACTION; CARBON NANOTUBES; TOMATO; CELLS
AB Haptic sensors are essential devices that facilitate human-like sensing systems
such as implantable medical devices and humanoid robots. The availability of
conducting thin films with haptic properties could lead to the development of
tactile sensing systems that stretch reversibly, sense pressure (not just touch),
and integrate with collapsible. In this study, a nanocomposite-based hemispherical
artificial fingertip was fabricated to enhance the tactile sensing systems of humanoid
robots. To validate the hypothesis, the proposed method was used in a robot-like
finger system to classify ripe and unripe tomatoes by recording the metabolic
growth of the tomato as a function of resistivity change during a controlled
indentation force. Prior to fabrication, a finite element model (FEM) of the tomato
was investigated to obtain its stress distribution and failure point under
different external loads. Then, the extracted computational analysis information
was utilized to design and fabricate the nanocomposite-based artificial fingertip
and to examine the maturity of the tomato. The obtained
results demonstrate that the fabricated conformable and scalable artificial
fingertip shows different electrical properties for ripe and unripe tomatoes. The
artificial fingertip is compatible with the development of brain-like systems for
artificial skin by producing a periodic response during an applied load. (C) 2017
Published by Elsevier B.V.
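As a simple illustration of the classification step described above, the sketch below thresholds the relative resistance change recorded during a controlled indentation; the threshold and readings are hypothetical and not taken from the paper.

```python
# Simple illustration of the classification step (hypothetical threshold and
# readings, not taken from the paper): a riper, softer tomato deforms more
# under the controlled indentation and produces a larger relative change in
# the fingertip's piezoresistive response.

def classify_ripeness(r_baseline_ohm, r_loaded_ohm, threshold=0.15):
    """Return 'ripe' if the relative resistance change exceeds the threshold."""
    delta = abs(r_loaded_ohm - r_baseline_ohm) / r_baseline_ohm
    return "ripe" if delta > threshold else "unripe"

print(classify_ripeness(1200.0, 1450.0))   # example readings -> 'ripe'
```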
C1 [TermehYousefi, Amin; Tanaka, Hirofumi] Kyushu Inst Technol, Dept Human
Intelligence Syst, Kitakyushu, Fukuoka 8080196, Japan.
[Azhari, Saman; Hamidon, Mohd Nizar] Univ Putra Malaysia, Inst Adv Technol,
Serdang, Malaysia.
[Khajeh, Amin] Univ Putra Malaysia, Dept Aerosp Engn, Serdang 43400, Selangor,
Malaysia.
RP TermehYousefi, A (reprint author), Kyushu Inst Technol, Dept Human Intelligence
Syst, Kitakyushu, Fukuoka 8080196, Japan.
EM at.tyousfi@gmail.com
OI Hamidon, Mohd Nizar/0000-0001-9189-4465; Tanaka,
Hirofumi/0000-0002-4378-5747
FU Kitakyushu City [24510150, 15H03531]; Ministry of Education, Culture,
Science, Sports, and Technology (MEXT) of Japan [25110002]
FX This work was supported by the grant from Kitakyushu City, Grants-in-Aid
for Scientific Research (No. 24510150 and 15H03531) and for Scientific
Research on Innovative Areas (No. 25110002) from the Ministry of
Education, Culture, Science, Sports, and Technology (MEXT) of Japan.
CR Althaus P, 2004, IEEE INT CONF ROBOT, P1894, DOI 10.1109/ROBOT.2004.1308100
Cantoro M, 2006, NANO LETT, V6, P1107, DOI 10.1021/nl060068y
Celik HK, 2011, J FOOD ENG, V104, P293, DOI 10.1016/j.jfoodeng.2010.12.020
Cowan AK, 2001, PHYSIOL PLANTARUM, V111, P127, DOI 10.1034/j.1399-
3054.2001.1110201.x
De Santis A, 2008, MECH MACH THEORY, V43, P253, DOI
10.1016/j.mechmachtheory.2007.03.003
Ihueze C. C., 2013, P WORLD C ENG
Jockusch J, 1997, IEEE INT CONF ROBOT, P3080, DOI 10.1109/ROBOT.1997.606756
Kabas O, 2008, J SCI FOOD AGR, V88, P1537, DOI 10.1002/jsfa.3246
Kemp CC, 2007, IEEE ROBOT AUTOM MAG, V14, P20, DOI 10.1109/MRA.2007.339604
Khatib O., 2004, INT J HUM ROBOT, V1, P29, DOI [10.1142/S0219843604000058, DOI
10.1142/S0219843604000058]
LABUZA TP, 1989, J FOOD PROCESS PRES, V13, P1, DOI 10.1111/j.1745-
4549.1989.tb00090.x
Li Z., 2010, AFR J BIOTECHNOL, V9
Lipomi DJ, 2011, NAT NANOTECHNOL, V6, P788, DOI [10.1038/NNANO.2011.184,
10.1038/nnano.2011.184]
Ma PC, 2010, COMPOS PART A-APPL S, V41, P1345, DOI
10.1016/j.compositesa.2010.07.003
MacLeod R.F., 1976, DAMAGE FRESH TOMATOE, P11
Ohka M., 2006, 15 IEEE INT S ROB HU, P214
Riccardi M, 2016, SCI HORTIC-AMSTERDAM, V200, P25, DOI
10.1016/j.scienta.2015.12.049
Seyedabadi E, 2015, INT J FOOD PROP, V18, P2015, DOI
10.1080/10942912.2014.951892
Shigley J. E., 2011, SHIGLEYS MECH ENG DE
Strand L.R., 1998, INTEGRATED PEST MANA
Tadesse Y, 2006, FERROELECTRICS, V345, P13, DOI 10.1080/00150190601018010
TermehYousefi A, 2016, MAT SCI ENG C-MATER, V59, P636, DOI
10.1016/j.msec.2015.10.041
TermehYousefi H.T. Amin, 2015, J NANOSCI NANOTECHNO
Wang CX, 2004, ANN BOT-LONDON, V93, P443, DOI 10.1093/aob/mch062
Yousefi A.T., 2014, ADV MAT RES, P778
Yousefi A.T., 2015, CRYST GROWTH DES
Yun XP, 2006, IEEE T ROBOT, V22, P1216, DOI 10.1109/TRO.2006.886270
Zdunek A, 2013, SENSORS-BASEL, V13, P12175, DOI 10.3390/s130912175
NR 28
TC 1
Z9 1
U1 16
U2 20
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0928-4931
EI 1873-0191
J9 MAT SCI ENG C-MATER
JI Mater. Sci. Eng. C-Mater. Biol. Appl.
PD AUG 1
PY 2017
VL 77
BP 1098
EP 1103
DI 10.1016/j.msec.2017.04.040
PG 6
WC Materials Science, Biomaterials
SC Materials Science
GA EX6VW
UT WOS:000403381200124
PM 28531983
DA 2018-01-22
ER

PT J
AU Shimomura, K
Araki, F
Kono, Y
Asai, Y
Murakami, T
Hyodo, T
Okumura, M
Matsumoto, K
Monzen, H
Nishimura, Y
AF Shimomura, Kohei
Araki, Fujio
Kono, Yuki
Asai, Yoshiyuki
Murakami, Takamichi
Hyodo, Tomoko
Okumura, Masahiko
Matsumoto, Kenji
Monzen, Hajime
Nishimura, Yasumasa
TI Identification of elemental weight fraction and mass density of humanoid
tissue-equivalent materials using dual energy computed tomography
SO PHYSICA MEDICA-EUROPEAN JOURNAL OF MEDICAL PHYSICS
LA English
DT Article
DE Dual energy CT; Elemental weight fraction; Mass density; Monte Carlo
simulation
ID CARLO DOSE CALCULATIONS; CT; BRACHYTHERAPY; RADIOTHERAPY; CONVERSION
AB The purpose of this study was to obtain the fraction by weight of the elemental
composition and mass density of a humanoid tissue phantom that includes lung
tissue, soft tissue, and bone tissue, by using dual energy computed tomography
(DECT). The fraction by weight and the mass density for tissue-equivalent materials
were calculated by means of a least-squares method with a linear attenuation
coefficient, using monochromatic photon energies of 10-140 keV, as obtained from
DECT. The accuracy of the calculated values for the fractions by weight of H
(hydrogen), C (carbon), N (nitrogen), and O (oxygen) was verified by comparing the
values with those that were analyzed using the combustion technique. The fraction
by weight for other elements was confirmed by comparing with the analyzed values by
means of energy dispersive photon spectroscopy. The calculated mass densities for
each tissue were compared with those that were obtained by dividing the weight by
volume. The calculated values of the fraction by weight that were obtained by means
of DECT had differences of 1.9%, 9.2%, 6.6%, 7.8%, 0.8%, and 0.2% at a maximum for
H, C, N, O, P (phosphorus), and Ca (calcium), respectively, from the reference
values analyzed by the combustion technique and energy dispersive photon
spectroscopy. The difference in the mass density for tissue was 0.011 g/cm(3) at a
maximum. This study demonstrated that the fractions by weight and the mass densities
of the humanoid tissue-equivalent materials calculated by means of DECT can be
determined with high accuracy. (C) 2017 Associazione Italiana di Fisica Medica. Published
by Elsevier Ltd. All rights reserved.
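The least-squares step described in the abstract can be sketched as a mixture-rule fit: the measured linear attenuation coefficient at each monochromatic energy is modelled as the sum of elemental partial densities times their mass attenuation coefficients. The code below is a minimal illustration with placeholder coefficients, not the authors' implementation or NIST data.

```python
import numpy as np

# Minimal sketch of the mixture-rule least-squares step: the measured linear
# attenuation coefficient mu(E) of a tissue is modelled as
# sum_i rho * w_i * (mu/rho)_i(E) over candidate elements, and the products
# rho*w_i are fitted over the sampled monochromatic energies. The numbers
# below are placeholders, not NIST data or the authors' implementation.

def weight_fractions_and_density(mu_measured, mass_atten_elements):
    """mu_measured         : (E,) linear attenuation of the material [1/cm]
       mass_atten_elements : (E, n_elem) elemental mass attenuation [cm^2/g]
       returns (weight fractions, mass density [g/cm^3])"""
    partial_rho, *_ = np.linalg.lstsq(mass_atten_elements, mu_measured, rcond=None)
    partial_rho = np.clip(partial_rho, 0.0, None)   # physical: rho * w_i >= 0
    rho = partial_rho.sum()
    return partial_rho / rho, rho

# Toy example: 3 elements sampled at 5 energies (placeholder coefficients)
A = np.array([[5.0, 3.0, 9.0],
              [2.5, 1.8, 4.5],
              [1.2, 1.0, 2.1],
              [0.8, 0.7, 1.3],
              [0.5, 0.5, 0.8]])
mu = A @ np.array([0.11, 0.70, 0.19])       # synthetic material with rho = 1.0 g/cm^3
w, rho = weight_fractions_and_density(mu, A)
```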
C1 [Shimomura, Kohei] Kumamoto Univ, Grad Sch Hlth Sci, Chuo Ku, 4-24-1 Kuhonji,
Kumamoto 8620976, Japan.
[Araki, Fujio] Kumamoto Univ, Fac Life Sci, Dept Hlth Sci, 4-24-1 Kuhonji,
Kumamoto 8620976, Japan.
[Shimomura, Kohei; Kono, Yuki; Asai, Yoshiyuki; Okumura, Masahiko; Matsumoto,
Kenji] Kindai Univ, Fac Med, Cent Radiol Serv, 377-2 Ohno Higashi, Osaka, Osaka
5898511, Japan.
[Murakami, Takamichi; Hyodo, Tomoko] Kindai Univ, Fac Med, Dept Radiol, 377-2
Ohno Higashi, Osaka, Osaka 5898511, Japan.
[Monzen, Hajime; Nishimura, Yasumasa] Kindai Univ, Fac Med, Dept Radiat Oncol,
377-2 Ohno Higashi, Osaka, Osaka 5898511, Japan.
[Kono, Yuki] Kanazawa Univ, Grad Sch Med Sci, Div Hlth Sci, 5-11-80 Kodatsuno,
Kanazawa, Ishikawa 9200942, Japan.
RP Araki, F (reprint author), Kumamoto Univ, Fac Life Sci, Dept Hlth Sci, 4-24-1
Kuhonji, Kumamoto 8620976, Japan.
EM f_araki@kumamoto-u.ac.jp
FU Japan Society of Medical Physics; ractical Research for Innovative
Cancer Control from Japan Agency for Medical Research and Development
(AMED) [15ck0106093h0002]; [24791346]; [26461808]
FX This study was supported by grant of the Japan Society of Medical
Physics for the years 2013-2014, by Grants-in-Aid for Scientific
Research (JSPS) KAKENHI [Grant Number 24791346, 26461808] in Japan, and
by the Practical Research for Innovative Cancer Control from Japan
Agency for Medical Research and Development (AMED) [Grant Number
15ck0106093h0002].
CR ALVAREZ RE, 1976, PHYS MED BIOL, V21, P733, DOI 10.1088/0031-9155/21/5/002
Bazalova M, 2008, PHYS MED BIOL, V53, P2439, DOI 10.1088/0031-9155/53/9/015
Beaulieu L, 2012, MED PHYS, V39, P6208, DOI 10.1118/1.4747264
Berger M. J., 2009, XCOM PHOTON CROSS SE
Chandra N, 2011, MED RADIOL DIAGN IMA, P35, DOI 10.1007/174_2010_35
Chang KP, 2012, MED PHYS, V39, P2013, DOI 10.1118/1.3694097
Fogliata A, 2011, RADIAT ONCOL, V6, DOI 10.1186/1748-717X-6-82
Goodsitt MM, 2011, MED PHYS, V38, P2222, DOI 10.1118/1.3567509
Hunemohr N, 2014, MED PHYS, V41, DOI 10.1118/1.4875976
ICRU, 1989, 44 ICRU
ICRP, 1975, 23 ICRP
ISO, 2006, 845 ISO
Jiang HY, 2007, MED PHYS, V34, P1439, DOI 10.1118/1.2715481
KALENDER WA, 1986, MED PHYS, V13, P334, DOI 10.1118/1.595958
Landry G, 2010, MED PHYS, V37, P5188, DOI 10.1118/1.3477161
Santamaria-Pang A, P SPIE
Singh S, 2010, RADIOLOGY, V257, P373, DOI 10.1148/radiol.10092212
Vassiliev ON, 2010, PHYS MED BIOL, V55, P581, DOI 10.1088/0031-9155/55/3/002
WATSON PE, 1980, AM J CLIN NUTR, V33, P27
Yagi M, 2013, J APPL CLIN MED PHYS, V14, P173, DOI 10.1120/jacmp.v14i5.4335
NR 20
TC 0
Z9 0
U1 0
U2 0
PU ELSEVIER SCI LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, OXON, ENGLAND
SN 1120-1797
EI 1724-191X
J9 PHYS MEDICA
JI Phys. Medica
PD JUL
PY 2017
VL 39
BP 59
EP 66
DI 10.1016/j.ejmp.2017.05.060
PG 8
WC Radiology, Nuclear Medicine & Medical Imaging
SC Radiology, Nuclear Medicine & Medical Imaging
GA FA5OO
UT WOS:000405493200008
PM 28623025
DA 2018-01-22
ER

PT J
AU Tidoni, E
Gergondet, P
Fusco, G
Kheddar, A
Aglioti, SM
AF Tidoni, Emmanuele
Gergondet, Pierre
Fusco, Gabriele
Kheddar, Abderrahmane
Aglioti, Salvatore M.
TI The Role of Audio-Visual Feedback in a Thought-Based Control of a
Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People
SO IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING
LA English
DT Article
DE Body ownership; brain-computer interface (BCI); humanoid; spinal cord
injury; steady state visually evoked potentials (SSVEP); teleoperation
ID BRAIN-COMPUTER-INTERFACE; VISUAL-EVOKED POTENTIALS; BODY OWNERSHIP
ILLUSION; ACTUATED WHEELCHAIR; CLASSIFICATION; TETRAPLEGICS; STANDARDS;
P300
AB The efficient control of our body and successful interaction with the
environment are possible through the integration of multisensory information.
Brain-computer interface (BCI) may allow people with sensorimotor disorders to
actively interact in the world. In this study, visual information was paired with
auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and
spinal cord injured (SCI) people were asked to embody a humanoid robot and complete
a pick-and-place task by means of a visual evoked potentials BCI system.
Participants observed the remote environment from the robot's perspective through a
head mounted display. Human-footsteps and computer-beep sounds were used as
synchronous/asynchronous auditory feedback. Healthy participants achieved better
placing accuracy when listening to human footstep sounds relative to a computer-
generated sound. SCI people demonstrated more difficulty in steering the robot
during asynchronous auditory feedback conditions. Importantly, subjective reports
highlighted that the BCI mask overlaying the display did not limit the observation
of the scenario and the feeling of being in control of the robot. Overall, the data
seem to suggest that sensorimotor-related information may improve the control of
external devices. Further studies are required to understand how the contribution
of residual sensory channels could improve the reliability of BCI systems.
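For context, a generic SSVEP command detector can be sketched as follows: each candidate flicker frequency is scored by how much variance its sine/cosine references explain in an EEG window, and the best-scoring frequency is taken as the command. This is a common reference-regression approach assumed here for illustration, not necessarily the classifier used in the study.

```python
import numpy as np

# Generic SSVEP command-detection sketch (a common reference-regression
# approach assumed for illustration, not necessarily the study's classifier):
# each candidate flicker frequency is scored by how much variance its
# sine/cosine references explain in an EEG window.

def ssvep_score(eeg, fs, f, harmonics=2):
    """Fraction of variance in a single-channel EEG window explained by
    sin/cos references at frequency f and its harmonics."""
    t = np.arange(len(eeg)) / fs
    refs = [np.sin(2 * np.pi * k * f * t) for k in range(1, harmonics + 1)]
    refs += [np.cos(2 * np.pi * k * f * t) for k in range(1, harmonics + 1)]
    X = np.column_stack(refs)
    coef, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    resid = eeg - X @ coef
    return 1.0 - resid.var() / eeg.var()

def detect_command(eeg, fs, candidate_freqs=(6.0, 8.0, 10.0, 12.0)):
    scores = [ssvep_score(eeg, fs, f) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic check: a 10 Hz flicker response buried in noise
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * np.random.randn(len(t))
print(detect_command(eeg, fs))             # expected: 10.0
```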
C1 [Tidoni, Emmanuele; Fusco, Gabriele; Aglioti, Salvatore M.] Univ Roma La
Sapienza, Dept Psychol, I-00185 Rome, Italy.
[Tidoni, Emmanuele; Fusco, Gabriele; Aglioti, Salvatore M.] Fdn Santa Lucia, I-
00142 Rome, Italy.
[Gergondet, Pierre] CNRS AIST Joint Robot Lab, Tsukuba RL UMI3218, Tsukuba,
Ibaraki, Japan.
[Kheddar, Abderrahmane] UM CNRS LIRMM, Interact Digital Humans, F-34095
Montpellier, France.
RP Tidoni, E (reprint author), Univ Roma La Sapienza, Dept Psychol, I-00185 Rome,
Italy.
EM emmanuele.tidoni@uniroma1.it; pierre.gergondet@gmail.com;
gabriele.fusco@uniroma1.it; kheddar@gmail.com;
salvatoremaria.aglioti@uniroma1.it
RI tidoni, emmanuele/I-3189-2014
OI tidoni, emmanuele/0000-0001-9079-2862
FU EU Information and Communication Technologies Grant (VERE project)
[FP7-ICT-2009-5, 257695]; Italian Ministry of Health [RF-2010-2312912];
BIAL Foundation [2014/150]
FX The research was funded by the EU Information and Communication
Technologies Grant (VERE project, FP7-ICT-2009-5, Prot. Num. 257695) and
the Italian Ministry of Health (RF-2010-2312912) to SMA. ET was
supported by BIAL Foundation (2014/150).
CR Aglioti SM, 2011, TRENDS COGN SCI, V15, P47, DOI 10.1016/j.tics.2010.12.003
Alimardani M, 2014, FRONT SYST NEUROSCI, V8, DOI 10.3389/fnsys.2014.00052
An XW, 2014, PLOS ONE, V9, DOI 10.1371/journal.pone.0111070
Aspell JE, 2009, PLOS ONE, V4, DOI 10.1371/journal.pone.0006488
Aziz-Zadeh L, 2004, EUR J NEUROSCI, V19, P2609, DOI 10.1111/j.1460-
9568.2004.03348.x
Belitski A, 2011, J NEURAL ENG, V8, DOI 10.1088/1741-2560/8/2/025022
Bell CJ, 2008, J NEURAL ENG, V5, P214, DOI 10.1088/1741-2560/5/2/012
Brouwer A-M, 2010, SRX NEUROSCIENCE, V2010, P1, DOI DOI 10.3814/2010/967027
Campanella S, 2010, CLIN NEUROPHYSIOL, V121, P1855, DOI
10.1016/j.clinph.2010.04.004
Cohen O, 2014, J NEURAL ENG, V11, DOI 10.1088/1741-2560/11/3/035006
Crawford JR, 2010, COGN NEUROPSYCHOL, V27, P245, DOI
10.1080/02643294.2010.513967
Curtin A, 2012, IEEE ENG MED BIO, P3841, DOI 10.1109/EMBC.2012.6346805
Daachi B, 2015, INT C REHAB ROBOT, P1020, DOI 10.1109/ICORR.2015.7281338
Diez PF, 2011, J NEUROENG REHABIL, V8, DOI 10.1186/1743-0003-8-39
Doud AJ, 2011, PLOS ONE, V6, DOI 10.1371/journal.pone.0026322
Escolano C., 2009, P IEEE INT C ROB AUT, P4430
Escolano C, 2012, IEEE T SYST MAN CY B, V42, P793, DOI
10.1109/TSMCB.2011.2177968
Escolano C, 2010, IEEE ENG MED BIO, P4476, DOI 10.1109/IEMBS.2010.5626045
Fisher RS, 2005, EPILEPSIA, V46, P1426, DOI 10.1111/j.1528-1167.2005.31405.x
Friedman D, 2007, PRESENCE-TELEOP VIRT, V16, P100, DOI 10.1162/pres.16.1.100
Friedman D, 2014, FRONT PSYCHOL, V5, DOI 10.3389/fpsyg.2014.00943
Friman O, 2007, IEEE T BIO-MED ENG, V54, P742, DOI 10.1109/TBME.2006.889160
Galan F, 2008, CLIN NEUROPHYSIOL, V119, P2159, DOI 10.1016/j.clinph.2008.06.001
Gergondet P., 2013, EXP ROBOT, V88, P215, DOI [10.1007/978-3-319-00065-7_16, DOI
10.1007/978-3-319-00065-7_16]
Gergondet P, 2015, BRAIN-COMPUT INTERFA, V2, P11, DOI
10.1080/2326263X.2015.1051432
Grisoni L, 2016, CEREB CORTEX, V26, P2353, DOI 10.1093/cercor/bhw026
Heydrich L, 2013, FRONT PSYCHOL, V4, DOI 10.3389/fpsyg.2013.00946
Iturrate I, 2009, IEEE T ROBOT, V25, P614, DOI 10.1109/TRO.2009.2020347
Kaufmann T, 2011, J NEURAL ENG, V8, DOI 10.1088/1741-2560/8/5/056016
King CE, 2013, J NEUROENG REHABIL, V10, DOI 10.1186/1743-0003-10-77
Kirshblum SC, 2011, J SPINAL CORD MED, V34, P547, DOI
10.1179/107902611X13186000420242
Kokkinara E, 2014, PERCEPTION, V43, P43, DOI 10.1068/p7545
LaFleur K, 2013, J NEURAL ENG, V10, DOI 10.1088/1741-2560/10/4/046003
Lebedev MA, 2006, TRENDS NEUROSCI, V29, P536, DOI 10.1016/j.tins.2006.07.004
Leeb R, 2007, COMPUT INTEL NEUROSC, V2007, P79642, DOI DOI 10.1155/2007/79642
Leeb R., 2004, P 26 ANN INT C IEEE, V2, P4503
Leeb R, 2013, ARTIF INTELL MED, V59, P121, DOI 10.1016/j.artmed.2013.08.004
Lenggenhager B, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0050757
Leonardis D., 2012, P IEEE HAPT S HAPTIC, V23, P421
Leonardis D, 2014, PRESENCE-TELEOP VIRT, V23, P253, DOI 10.1162/PRES_a_00190
Maselli A, 2013, FRONT HUM NEUROSCI, V7, DOI 10.3389/fnhum.2013.00083
McFarland DJ, 2008, J NEURAL ENG, V5, P101, DOI 10.1088/1741-2560/5/2/001
Moore MM, 2003, IEEE T NEUR SYS REH, V11, P162, DOI 10.1109/TNSRE.2003.814433
Moy AJ, 2013, PLOS ONE, V8, DOI 10.1371/journal.pone.0053753
Nishifuji S, 2015, IEEE ENG MED BIO, P1918, DOI 10.1109/EMBC.2015.7318758
Onose G, 2012, SPINAL CORD, V50, P599, DOI 10.1038/sc.2012.14
Pazzaglia M, 2008, CURR BIOL, V18, P1766, DOI 10.1016/j.cub.2008.09.061
Pernigo S, 2012, EUR J NEUROSCI, V36, P3509, DOI 10.1111/j.1460-
9568.2012.08266.x
Pfurtscheller G, 2006, NEUROIMAGE, V31, P153, DOI
10.1016/j.neuroimage.2005.12.003
Piccione F, 2006, CLIN NEUROPHYSIOL, V117, P531, DOI
10.1016/j.clinph.2005.07.024
Prueckl R, 2009, LECT NOTES COMPUT SC, V5517, P690, DOI 10.1007/978-3-642-02478-
8_86
Ramos-Murguialday A, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0047048
Royer AS, 2010, IEEE T NEUR SYS REH, V18, P581, DOI 10.1109/TNSRE.2010.2077654
Rupp R., 2014, FRONT NEUROENG, V7, P1
Sakurada T, 2015, CLIN NEUROPHYSIOL, V126, P1972, DOI
10.1016/j.clinph.2014.12.010
Sanchez-Vives MV, 2010, PLOS ONE, V5, DOI 10.1371/journal.pone.0010381
Scandola M, 2014, FRONT HUM NEUROSCI, V8, DOI 10.3389/fnhum.2014.00404
Slater M, 2000, PRESENCE-TELEOP VIRT, V9, P413, DOI 10.1162/105474600566925
Slater M, 2009, PHILOS T R SOC B, V364, P3549, DOI 10.1098/rstb.2009.0138
Suminski AJ, 2010, J NEUROSCI, V30, P16777, DOI 10.1523/JNEUROSCI.3967-10.2010
Thurlings ME, 2012, J NEURAL ENG, V9, DOI 10.1088/1741-2560/9/4/045005
Tidoni E, 2015, NEUROPSYCHOLOGIA, V79, P301, DOI
10.1016/j.neuropsychologia.2015.06.029
Tidoni E, 2014, RESTOR NEUROL NEUROS, V32, P611, DOI 10.3233/RNN-130385
Tidoni E, 2014, FRONT NEUROROBOTICS, V8, DOI 10.3389/fnbot.2014.00020
Van der Burg E., 1804, P ROY SOC LOND B BIO, V282
WHO, 2013, INTERNATIONAL PERSPECTIVES ON SPINAL CORD INJURY, P1
Wolpaw J R, 1998, IEEE Trans Rehabil Eng, V6, P326, DOI 10.1109/86.712231
Wolpaw JR, 2004, P NATL ACAD SCI USA, V101, P17849, DOI 10.1073/pnas.0403504101
Xu Ren, 2014, Front Neuroeng, V7, P35, DOI 10.3389/fneng.2014.00035
Yao L, 2015, J NEURAL ENG, V12, DOI 10.1088/1741-2560/12/1/016005
Zhao JJ, 2015, PLOS ONE, V10, DOI 10.1371/journal.pone.0123694
NR 71
TC 0
Z9 0
U1 17
U2 17
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1534-4320
EI 1558-0210
J9 IEEE T NEUR SYS REH
JI IEEE Trans. Neural Syst. Rehabil. Eng.
PD JUN
PY 2017
VL 25
IS 6
BP 772
EP 781
DI 10.1109/TNSRE.2016.2597863
PG 10
WC Engineering, Biomedical; Rehabilitation
SC Engineering; Rehabilitation
GA EY8VE
UT WOS:000404275500027
PM 28113631
DA 2018-01-22
ER

PT J
AU Li, X
Minami, M
Matsuno, T
Izawa, D
AF Li, Xiang
Minami, Mamoru
Matsuno, Takayuki
Izawa, Daiji
TI Visual Lifting Approach for Bipedal Walking with Slippage
SO JOURNAL OF ROBOTICS AND MECHATRONICS
LA English
DT Article
DE humanoid; bipedal walking; visual lifting approach
ID ROBOT
AB Biped locomotion generated by control methods based on Zero-Moment Point (ZMP)
has been achieved and its efficacy for stable walking, where ZMP-based walking does
not include the falling state, has been verified extensively. Walking control
that does not depend on ZMP (which we call dynamical walking) can be used in walking
that utilizes kicks by the toes, which looks natural but is vulnerable to turnover.
Therefore, keeping the walking of dynamical motion stable is indispensable to the
realization of human-like natural walking; the authors regard human walking,
which includes toe-off states, as natural walking. Our research group has developed
a walking model that includes slipping, impact, surface-contacting, and line-
contacting of the foot. This model was derived from the Newton-Euler (NE) method.
The "Visual Lifting Approach" (VLA) strategy, inspired by human walking motion
utilizing visual perception, was used in order to enhance robust walking and
prevent the robot from falling without utilizing ZMP. The VLA consists of walking
gait generation, visual lifting feedback, and feedforward. In this study, simulation
results confirmed that bipedal walking dynamics, which include a slipping state
between foot and floor, converge to a stable walking limit cycle.
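A minimal sketch of a visual-lifting-style feedback term is given below: a virtual spring-damper force pulls the visually measured head position toward its reference and is mapped to joint torques through the head Jacobian transpose; the gains and interfaces are illustrative assumptions rather than the authors' VLA formulation.

```python
import numpy as np

# Minimal sketch of a visual-lifting-style feedback term (illustrative, not
# the authors' VLA formulation): a virtual spring-damper force "lifts" the
# visually measured head position toward its reference and is mapped to joint
# torques through the head Jacobian transpose. Gains and interfaces are
# assumed to come from the robot model.

def visual_lifting_torque(J_head, p_ref, p_head, v_head, kp=800.0, kd=60.0):
    """J_head : (3, n) translational Jacobian of the head frame
       p_ref  : (3,) reference head position from the gait generator
       p_head : (3,) head position estimated by visual perception
       v_head : (3,) head velocity estimate
       returns (n,) joint torques for the lifting feedback term"""
    f_lift = kp * (np.asarray(p_ref) - np.asarray(p_head)) - kd * np.asarray(v_head)
    return np.asarray(J_head).T @ f_lift
```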
C1 [Li, Xiang; Minami, Mamoru; Matsuno, Takayuki; Izawa, Daiji] Okayama Univ, Grad
Sch Nat Sci & Technol, Kita Ku, 3-1-1 Tsushima Naka, Okayama, Okayama 7008530,
Japan.
RP Li, X (reprint author), Okayama Univ, Grad Sch Nat Sci & Technol, Kita Ku, 3-1-1
Tsushima Naka, Okayama, Okayama 7008530, Japan.
EM pzkm87r2@s.okayama-u.ac.jp
CR Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Harada Y, 2010, IEEE INT C INT ROBOT, P3623, DOI 10.1109/IROS.2010.5651546
HOGAN N, 1985, J DYN SYST-T ASME, V107, P1, DOI 10.1115/1.3140702
Huang Y, 2010, IEEE INT C INT ROBOT, P4077, DOI 10.1109/IROS.2010.5650421
Kouchi M., 2000, ANTHROPOMETRIC DATAB
Li X., 2016, J ADV COMPUTATIONAL, V20
Maeba T, 2012, 2012 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT
MECHATRONICS (AIM), P7, DOI 10.1109/AIM.2012.6265962
Nakamura Y., 2000, J RSJ, V18, P435
PENG D, 1987, IEEE T COMPUT, V36, P500
Pratt J, 1997, IEEE INT CONF ROBOT, P193, DOI 10.1109/ROBOT.1997.620037
Song W., 2011, P IEEE INT C ROB AUT, P5210
Song W., 2011, P 2011 IEEE RAS INT, P345
Song W, 2012, INT J ADV ROBOT SYST, V9, DOI 10.5772/52450
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Vukobratovic M., 1970, IEEE T BIOMEDICAL EN, V17
Westervelt ER, 2003, IEEE T AUTOMAT CONTR, V48, P42, DOI 10.1109/TAC.2002.806653
Wu TY, 2010, IEEE INT C INT ROBOT, P4922, DOI 10.1109/IROS.2010.5651490
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
Yanou A., 2013, P 39 ANN C IEEE IND, P6357
Yu F., 2010, P IEEE RSJ INT C INT, P6228
NR 20
TC 0
Z9 0
U1 2
U2 2
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 0915-3942
EI 1883-8049
J9 J ROBOT MECHATRON
JI J. Robot. Mechatron.
PD JUN
PY 2017
VL 29
IS 3
SI SI
BP 500
EP 508
DI 10.20965/jrm.2017.p0500
PG 9
WC Robotics
SC Robotics
GA EY2WK
UT WOS:000403833200006
DA 2018-01-22
ER

PT J
AU Ryoo, YJ
Yamanoi, T
AF Ryoo, Young-Jae
Yamanoi, Takahiro
TI Guest Editorial of the Special Issue on Computational Intelligence and
Its Applications for Robotics
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Editorial Material
DE Computational intelligence; soft computing; artificial intelligence;
intelligent robotics
AB The topics of this special issue focus on computational intelligence and its
applications for robotics. The covered areas span a comprehensive range: context-
awareness software, omnidirectional walking and fuzzy control of dynamic walking for
humanoid robots, pet robots for the treatment of children with ASD, fuzzy logic
control, enhanced simultaneous localization and mapping, fuzzy line tracking for
mobile robots, and so on. Computational intelligence (CI) is a way of making machines
perform like humans; generally, it means the ability of a computer to learn a specific
task from data or experimental results. Meanwhile, robotic systems face many limits in
behaving like human beings: a robotic system might be too complex for mathematical
reasoning, it might contain uncertainties during the process, or the process might
simply be stochastic in real life. Such real-life problems cannot easily be translated
into binary code for computers to process, and computational intelligence might solve
them.
C1 [Ryoo, Young-Jae] Mokpo Natl Univ, Dept Control Engn & Robot, 1666 Youngsan Ro,
Jeonnam 58554, South Korea.
[Yamanoi, Takahiro] Hokkai Gakuen Univ, Dept Biomed Engn, 4-1-40 Asahi Machi,
Sapporo, Hokkaido 0628605, Japan.
RP Ryoo, YJ (reprint author), Mokpo Natl Univ, Dept Control Engn & Robot, 1666
Youngsan Ro, Jeonnam 58554, South Korea.
EM yjryoo@mokpo.ac.kr; yamanoi@hgu.jp
CR Chen L, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500086
Cho Y, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500128
Cho YO, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500165
Hsu CC, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500074
Lee JG, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500013
Lee K, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500049
Truc NT, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843616500274
Luat TH, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500025
Yasuda G, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500177
Yoo HH, 2017, INT J HUM ROBOT, V14, DOI 10.1142/S0219843617500037
NR 10
TC 0
Z9 0
U1 13
U2 13
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2017
VL 14
IS 2
SI SI
AR 1702001
DI 10.1142/S0219843617020017
PG 4
WC Robotics
SC Robotics
GA EX7ID
UT WOS:000403420800002
DA 2018-01-22
ER

PT J
AU Yasuda, G
AF Yasuda, Gen'ichi
TI Distributed Controller Design for Cooperative Robot Systems Based on
Hierarchical Task Decomposition
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Robot systems; hierarchical task modeling; system coordination;
cooperative robot control; distributed control system design; Petri nets
ID DEADLOCK
AB A distributed simulation and control method for humanoid robot systems based on
the discrete event net models is proposed. Extended Petri nets are adopted as an
effective tool to describe, design and control the cooperative behavior of robots.
Based on hierarchical net decomposition, conceptual and detailed Petri net models
are assigned to the upper level and the lower level controllers, respectively. For
the lower level control, individual net models of robots are executed on separate
local controllers. A unified net representation for the hierarchical coordination
of cooperative execution control by multiple robots is proposed. Overall control
software is implemented and executed on a general hierarchical and distributed
control architecture corresponding to the hardware structure of the robot systems.
The upper level system controller and the lower level local controllers are executed
concurrently, communicating event information (enabled transitions) with each other,
so that the cooperative robotic tasks are successfully performed.
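The execution model in the abstract above rests on the standard Petri net firing rule:
a transition is enabled when every input place holds a token, and firing it moves
tokens to the output places. The sketch below implements that rule for a tiny,
hypothetical two-robot handover net; it does not reproduce the paper's extended nets,
hierarchical decomposition, or controller communication, and all place and transition
names are made up.

from dataclasses import dataclass, field

@dataclass
class PetriNet:
    marking: dict                                      # place name -> token count
    transitions: dict = field(default_factory=dict)    # name -> (inputs, outputs)

    def enabled(self):
        return [t for t, (ins, _) in self.transitions.items()
                if all(self.marking.get(p, 0) > 0 for p in ins)]

    def fire(self, t):
        ins, outs = self.transitions[t]
        assert t in self.enabled(), f"transition {t} is not enabled"
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# hypothetical two-robot handover: robot A passes a part to robot B
net = PetriNet(
    marking={"A_holding": 1, "B_free": 1},
    transitions={
        "handover": (["A_holding", "B_free"], ["B_holding", "A_free"]),
        "B_place":  (["B_holding"], ["done", "B_free"]),
    },
)
while net.enabled():
    t = net.enabled()[0]
    net.fire(t)               # a local controller would report this event upward
print(net.marking)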
C1 [Yasuda, Gen'ichi] Nagasaki Inst Appl Sci, Dept Informat, Nagasaki 8510193,
Japan.
RP Yasuda, G (reprint author), Nagasaki Inst Appl Sci, Dept Informat, Nagasaki
8510193, Japan.
EM yasuda_genichi@pilot.nias.ac.jp
CR Calderon CAA, 2007, ADVANCES IN CLIMBING AND WALKING ROBOTS, PROCEEDINGS, P487
Costelha H., 2007, IEEE RSJ INT C INT R, P1449
David R., 2005, DISCRETE CONTINUOUS
Hu HS, 2011, IEEE T SYST MAN CY A, V41, P201, DOI 10.1109/TSMCA.2010.2058101
Lim H, 2009, INT J HUM ROBOT, V6, P173, DOI 10.1142/S0219843609001747
Luo J., 2014, IEEE T SYST MAN CYB, V45, P44
Mohan S, 2004, ROBOT CIM-INT MANUF, V20, P541, DOI 10.1016/j.reim.2004.07.004
Moreira MV, 2014, IEEE T AUTOM SCI ENG, V11, P48, DOI 10.1109/TASE.2013.2281733
Murata T., 1989, IEEE, V77, P541, DOI DOI 10.1109/5.24143
Noda A., 2010, ISCIE ASME 2010 INT
Sentis L., 2005, INT J HUM ROBOT, V2, P505, DOI DOI 10.1142/S0219843605000594
Silva EM, 2014, PROC IEEE INT SYMP, P1111, DOI 10.1109/ISIE.2014.6864769
Uzam M., 2005, P 35 INT C COMP IND, P2025
Wu NQ, 2013, IEEE T SYST MAN CY-S, V43, P1182, DOI 10.1109/TSMCA.2012.2230440
Xie B., 2005, INT J HUM ROBOT, V12, P505
Xie M, 2009, INT J HUM ROBOT, V6, P307, DOI 10.1142/S0219843609001759
Yasuda G., 2012, 19 INT C MECH MACH V, P350
Yasuda G., 2010, 5 INT C ADV MECH ICA, P410
Yasuda G., 2014, ISCIE ASME 2014 INT
Yu WY, 2014, IEEE T SYST MAN CY-S, V44, P327, DOI 10.1109/TSMC.2013.2248358
NR 20
TC 1
Z9 1
U1 0
U2 0
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2017
VL 14
IS 2
SI SI
AR 1750017
DI 10.1142/S0219843617500177
PG 21
WC Robotics
SC Robotics
GA EX7ID
UT WOS:000403420800011
DA 2018-01-22
ER

PT J
AU Takano, W
Nakamura, Y
AF Takano, Wataru
Nakamura, Yoshihiko
TI Planning of goal-oriented motion from stochastic motion primitives and
optimal controlling of joint torques in whole-body
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Motion primitive; Motion synthesis; Stochastic model
ID IMITATION; MODEL; ROBOTS
AB Humanoid robots are expected to be integrated into daily life. This requires the
robots to perform human-like actions that are easily understandable by humans.
Learning by imitation is an effective framework that enables the robots to generate
the same motions that humans do. However, it is generally not useful for the robots
to generate motions that are precisely the same as learned motions because the
environment is likely to be different from the environment where the motions were
learned. The humanoid robot should synthesize motions that are adaptive to the
current environment by modifying learned motions. Previous research encoded
captured human whole-body motions into hidden Markov models, which are hereafter
referred to as motion primitives, and generated human-like motions based on the
acquired motion primitives. The contact between the body and the environment also
needs to be controlled, so that the humanoid robot's whole-body motion can be
realized in its current environment. This paper proposes a novel approach to
synthesizing kinematic data using the motion primitive and controlling the torques
of all the joints in the humanoid robot to achieve the desired whole-body motions
and contact forces. The experiments demonstrate the validity of the proposed
approach to synthesizing and controlling whole body motions by humanoid robots. (C)
2017 The Authors. Published by Elsevier B.V.
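Motion synthesis from a stochastic motion primitive, as described above, can be
pictured with a generic hidden Markov model whose states emit joint-angle vectors:
sample a state path from the transition matrix and output each visited state's mean.
The sketch below shows only this decoding idea under toy dimensions and parameters;
the authors' primitives, their adaptation to the environment, and the whole-body
torque-control stage are not reproduced.

import numpy as np

def synthesize_from_primitive(A, means, n_steps, rng=None):
    """Sample a state path from transition matrix A (S x S) and return the
    sequence of state-mean joint-angle vectors (n_steps x D)."""
    rng = np.random.default_rng(rng)
    s = 0                                    # start in the first state
    traj = []
    for _ in range(n_steps):
        traj.append(means[s])
        s = rng.choice(len(A), p=A[s])       # draw the next state
    return np.array(traj)

# toy 3-state left-to-right primitive over 2 joint angles
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
means = np.array([[0.0, 0.1], [0.4, 0.3], [0.8, 0.2]])
print(synthesize_from_primitive(A, means, n_steps=10, rng=0))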
C1 [Takano, Wataru; Nakamura, Yoshihiko] Univ Tokyo, Mechano Informat, Bunkyo Ku,
7-3-1 Hongo, Tokyo 1138656, Japan.
RP Takano, W (reprint author), Univ Tokyo, Mechano Informat, Bunkyo Ku, 7-3-1
Hongo, Tokyo 1138656, Japan.
EM takano@ynl.t.u-tokyo.ac.jp; nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU Japan Society for the Promotion of Science [26700021]; Strategic
Information and Communications R&D Promotion Program of the Ministry of
Internal Affairs and Communications [142103011]
FX This research was partially supported by a Grant-in-Aid for Young
Scientists (A) (No. 26700021) from the Japan Society for the Promotion
of Science and by the Strategic Information and Communications R&D
Promotion Program (No. 142103011) of the Ministry of Internal Affairs
and Communications.
CR Asfour T., 2006, P IEEE RAS INT C HUM, P40
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Calinon S, 2013, IEEE INT C INT ROBOT, P610, DOI 10.1109/IROS.2013.6696414
Calinon S, 2010, IEEE ROBOT AUTOM MAG, V17, P44, DOI 10.1109/MRA.2010.936947
FLASH T, 1985, J NEUROSCI, V5, P1688
Gams A, 2016, ROBOT AUTON SYST, V75, P340, DOI 10.1016/j.robot.2015.09.011
Herzog D., 2008, P BRIT MACH VIS C, P163
Ijspeert A. J., 2003, NEURAL INFORM PROCES, V15, P1547
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Ito M, 2006, NEURAL NETWORKS, V19, P323, DOI 10.1016/j.neunet.2006.02.007
Kadone H, 2005, 2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-4, P2900, DOI 10.1109/IROS.2005.1545416
Kulic D, 2008, INT J ROBOT RES, V27, P761, DOI 10.1177/0278364908091153
Kunori H, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P5240, DOI 10.1109/IROS.2009.5354022
Mataric MJ, 2000, IEEE INTELL SYST APP, V15, P18, DOI 10.1109/5254.867908
MAYEDA H, 1990, IEEE T ROBOTIC AUTOM, V6, P312, DOI 10.1109/70.56663
Okada M, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1410, DOI 10.1109/ROBOT.2002.1014741
Petric T., P IEEE INT C ROB AUT
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Sugiura K, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P852, DOI
10.1109/IROS.2008.4651169
Takano W., 2008, P IEEE RAS INT C HUM, P708
Takano W, 2015, INT J ROBOT RES, V34, P1314, DOI 10.1177/0278364915587923
Takano W, 2015, ROBOT AUTON SYST, V66, P75, DOI 10.1016/j.robot.2014.12.008
Wang R., P IEEE RSJ INT C INT
Yoshida K., 1995, P 7 INT S ROB RES, P101
NR 26
TC 0
Z9 0
U1 1
U2 3
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD MAY
PY 2017
VL 91
BP 226
EP 233
DI 10.1016/j.robot.2017.01.013
PG 8
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA EO8OT
UT WOS:000396949800019
OA gold
DA 2018-01-22
ER

PT J
AU Yang, PC
Sasaki, K
Suzuki, K
Kase, K
Sugano, S
Ogata, T
AF Yang, Pin-Chu
Sasaki, Kazuma
Suzuki, Kanata
Kase, Kei
Sugano, Shigeki
Ogata, Tetsuya
TI Repeatable Folding Task by Humanoid Robot Worker Using Deep Learning
SO IEEE ROBOTICS AND AUTOMATION LETTERS
LA English
DT Article
DE Humanoid robots; learning and adaptive systems; motion control of
manipulators; neurorobotics
AB We propose a practical state-of-the-art method to develop a machine-learning-
based humanoid robot that can work as a production line worker. The proposed
approach provides an intuitive way to collect data and exhibits the following
characteristics: task performing capability, task reiteration ability,
generalizability, and easy applicability. The proposed approach utilizes a real-
time user interface with a monitor and provides a first-person perspective using a
head-mounted display. Through this interface, teleoperation is used for collecting
task operating data, especially for tasks to which conventional methods are difficult
to apply. A two-phase deep learning model is also utilized in the proposed approach: a
deep convolutional autoencoder extracts image features and reconstructs images, and a
fully connected deep time-delay neural network learns the dynamics of a robot task
process from the extracted image features and motion angle signals. The "Nextage Open"
humanoid robot is used as an experimental platform to evaluate the proposed model. The
object folding task is evaluated with 35 trained and 5 untrained sensory-motor
sequences. Testing the trained model with online generation demonstrates a 77.8%
success rate for the object folding task.
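A rough picture of the second phase of the model above (the time-delay network over
extracted features) is sketched below: a sliding window of past image features and
joint angles is flattened and passed through a small fully connected network that
predicts the next joint command. The layer sizes, window length, and random weights
are placeholder assumptions, not the trained network or the convolutional autoencoder
from the paper.

import numpy as np

def predict_next_command(feat_window, angle_window, W1, b1, W2, b2):
    """feat_window: (T, F) image features, angle_window: (T, J) joint angles.
    A time-delay input is simply the flattened concatenation of the window."""
    x = np.concatenate([feat_window.ravel(), angle_window.ravel()])
    h = np.tanh(W1 @ x + b1)          # hidden layer
    return W2 @ h + b2                # next joint-angle command (J,)

# toy sizes: window of 5 frames, 16-dim image features, 7 joints
T, F, J, H = 5, 16, 7, 32
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(H, T * (F + J))); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(J, H));           b2 = np.zeros(J)
cmd = predict_next_command(rng.normal(size=(T, F)), rng.normal(size=(T, J)),
                           W1, b1, W2, b2)
print(cmd.shape)   # (7,)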
C1 [Yang, Pin-Chu; Sugano, Shigeki] Waseda Univ, Grad Sch Creat Sci & Engn, Dept
Modern Mech Engn, Tokyo 1698050, Japan.
[Yang, Pin-Chu; Suzuki, Kanata; Kase, Kei; Ogata, Tetsuya] Artificial
Intelligence Res Ctr, Tsukuba, Ibaraki 3058560, Japan.
[Sasaki, Kazuma; Suzuki, Kanata; Kase, Kei; Ogata, Tetsuya] Waseda Univ, Sch
Fundamental Sci & Engn, Dept Intermedia Art & Sci, Tokyo 1698050, Japan.
RP Yang, PC (reprint author), Waseda Univ, Grad Sch Creat Sci & Engn, Dept Modern
Mech Engn, Tokyo 1698050, Japan.
EM kcy.komayang@gmail.com; ssk.sasaki@suou.waseda.jp;
suzuki@idr.ias.sci.waseda.ac.jp; kase@idr.ias.sci.waseda.ac.jp;
sugano@waseda.jp; ogata@waseda.jp
FU AIST, "Fundamental Study for Intelligent Machine to Coexist with Nature"
from the Research Institute for Science and Engineering, Waseda
University; MEXT [15H01710]
FX This work was supported in part by the AIST, "Fundamental Study for
Intelligent Machine to Coexist with Nature" from the Research Institute
for Science and Engineering, Waseda University and in part by an MEXT
Grant-in-Aid for Scientific Research (A) 15H01710.
CR Awano H, 2010, 2010 IEEE International Conference on Systems, Man and
Cybernetics (SMC 2010), P2533, DOI 10.1109/ICSMC.2010.5641924
Bojarski Mariusz, 2016, arXiv
Dong C, 2016, IEEE T PATTERN ANAL, V38, P295, DOI 10.1109/TPAMI.2015.2439281
Kingma D., 2014, INT C LEARN REPR, P1
Koishihara Y., 2016, P 2016 JSME C ROB ME
Krizhevsky A, 2012, ADV NEURAL INFORM PR, V25, P1097
LANG KJ, 1990, NEURAL NETWORKS, V3, P23, DOI 10.1016/0893-6080(90)90044-L
Lenz I, 2015, INT J ROBOT RES, V34, P705, DOI 10.1177/0278364914549607
Levine S., 2016, INT S EXP ROB
Levine S, 2016, J MACH LEARN RES, V17
Miller S, 2012, INT J ROBOT RES, V31, P249, DOI 10.1177/0278364911430417
Mukai K, 2015, 2015 24TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION (RO-MAN), P363, DOI 10.1109/ROMAN.2015.7333624
Noda K, 2014, ROBOT AUTON SYST, V62, P721, DOI 10.1016/j.robot.2014.03.003
Nof S. Y., 1999, HDB IND ROBOTICS
Pinto L, 2016, IEEE INT CONF ROBOT, P3406, DOI 10.1109/ICRA.2016.7487517
Sindler J., 2013, THESIS
Suzuki K., 2016, 78 NAT CONV INF PROC
Yamamoto Y, 2012, INT SYM MICRO-NANOM, P290, DOI 10.1109/MHS.2012.6492424
NR 18
TC 1
Z9 1
U1 2
U2 8
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2377-3766
J9 IEEE ROBOT AUTOM LET
JI IEEE Robot. Autom. Lett.
PD APR
PY 2017
VL 2
IS 2
BP 397
EP 403
DI 10.1109/LRA.2016.2633383
PG 7
WC Robotics
SC Robotics
GA FK8DB
UT WOS:000413736600004
DA 2018-01-22
ER

PT J
AU Hyon, SH
Suewaka, D
Torii, Y
Oku, N
AF Hyon, Sang-Ho
Suewaka, Daisuke
Torii, Yuki
Oku, Narifumi
TI Design and Experimental Evaluation of a Fast Torque-Controlled Hydraulic
Humanoid Robot
SO IEEE-ASME TRANSACTIONS ON MECHATRONICS
LA English
DT Article
DE Biped balance; compliance; force control; humanoid platform; passivity;
servovalve; squat
ID LOCOMOTION; WALKING
AB This paper reports the design and control of a fast torque-controlled hydraulic
humanoid robot, TaeMu. The robot has 15 active joints that are all driven by
hydraulic servocylinders with an external hydraulic power supply. The 1377-mm tall
robot has a three-axis active torso used for static and dynamic balancing. It
weighs 72.3 kg including a dummy weight stand for the arms. Its lightweight design
with carbon-fiber-reinforced plastic allows the legs to have a similar mass
distribution to that of human legs. We present the details of the hardware design
including the hydraulic actuator selection, mechanism, and control systems, as well
as the passivity-based controller design for joint torque control and whole-body
motion control. We present experimentally obtained results, such as speed testing,
basic torque control testing, full-body compliant balancing with attitude
regulation/tracking, and balanced full-squat motions. The experimentally obtained
results and estimated joint specification imply the high potential of the robot to
perform human-like motions if the researchers can invent proper control algorithms.
C1 [Hyon, Sang-Ho; Suewaka, Daisuke; Torii, Yuki; Oku, Narifumi] Ritsumeikan Univ,
Humanoid Syst Lab, Kusatsu 5258577, Japan.
RP Hyon, SH (reprint author), Ritsumeikan Univ, Humanoid Syst Lab, Kusatsu 5258577,
Japan.
EM sangho@ieee.org; suewaka.daisuke@subaru-fhi.co.jp;
toriiyu@yamaha-motor.co.jp; oku.narifumi@canon-soft.co.jp
FU Ministry of Education, Culture, Sports, Science and Technology
[26280098]
FX This work was supported by the Ministry of Education, Culture, Sports,
Science and Technology Grant-in-Aid for Scientific Research (B)
26280098.
CR Alleyne A, 2000, CONTROL ENG PRACT, V8, P1347, DOI 10.1016/S0967-0661(00)00081-2
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Englsberger J., 2014, P IEEE RAS INT C HUM, P916
Feng SY, 2015, IEEE-RAS INT C HUMAN, P1028, DOI 10.1109/HUMANOIDS.2015.7363480
Gomi H, 1997, BIOL CYBERN, V76, P163, DOI 10.1007/s004220050329
Herzog A, 2014, IEEE INT C INT ROBOT, P981, DOI 10.1109/IROS.2014.6942678
Human Characteristic Database, HUM CHAR DAT
Hyon S., 2015, EXPT FULL SIZED TORQ
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Hyon SH, 2015, IEEE-RAS INT C HUMAN, P576, DOI 10.1109/HUMANOIDS.2015.7363420
Hyon SH, 2013, IEEE INT C INT ROBOT, P4655, DOI 10.1109/IROS.2013.6697026
Hyon SH, 2010, IEEE INT CONF ROBOT, P1084, DOI 10.1109/ROBOT.2010.5509654
Hyon SH, 2009, IEEE-ASME T MECH, V14, P677, DOI 10.1109/TMECH.2009.2033117
Hyon SH, 2003, P I MECH ENG I-J SYS, V217, P83, DOI 10.1243/095965103321512800
Izawa K, 2016, J ROBOT MECHATRON, V28, P95, DOI 10.20965/jrm.2016.p0095
Johnson M, 2015, J FIELD ROBOT, V32, P192, DOI 10.1002/rob.21571
Merrit H., 1967, NONLINEAR SYSTEMS
Merrit H. E., 1967, HYDRAULIC CONTROL SY
Nelson G., 2012, J ROBOTICS SOC JAPAN, V30, P372
Neuman D, 2002, KINESIOLOGY MUSCULOS
Ott C., 2010, P IEEE RAS INT C HUM, P167
Ott C., 2011, P IEEE RAS INT C HUM, P26
Pratt J, 2001, INT J ROBOT RES, V20, P129, DOI 10.1177/02783640122067309
Semini C, 2011, P I MECH ENG I-J SYS, V225, P831, DOI 10.1177/0959651811402275
Winter DA, 2009, BIOMECHANICS MOTOR C
Yamaguchi J, 1997, IEEE INT CONF ROBOT, P185, DOI 10.1109/ROBOT.1997.620036
Yao B., 1999, P 1999 AM CONTR C IE, V2, P759
NR 28
TC 0
Z9 0
U1 12
U2 12
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1083-4435
EI 1941-014X
J9 IEEE-ASME T MECH
JI IEEE-ASME Trans. Mechatron.
PD APR
PY 2017
VL 22
IS 2
BP 623
EP 634
DI 10.1109/TMECH.2016.2628870
PG 12
WC Automation & Control Systems; Engineering, Manufacturing; Engineering,
Electrical & Electronic; Engineering, Mechanical
SC Automation & Control Systems; Engineering
GA ES7IM
UT WOS:000399723400005
DA 2018-01-22
ER

PT J
AU Claret, JA
Venture, G
Basanez, L
AF Claret, Josep-Arnau
Venture, Gentiane
Basanez, Luis
TI Exploiting the Robot Kinematic Redundancy for Emotion Conveyance to
Humans as a Lower Priority Task
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Human-robot interaction; Social robotics; Emotion conveyance; Robot
kinematics; Task priority; Pepper robot
ID BODY MOVEMENTS; SPEECH; MOTION; RECOGNITION; RESOLUTION; EXPRESSION;
GESTURES; ROBUST; GAZE
AB Current approaches do not allow robots to execute a task and simultaneously
convey emotions to users using their body motions. This paper explores the
capabilities of the Jacobian null space of a humanoid robot to convey emotions. A
task priority formulation has been implemented in a Pepper robot which allows the
specification of a primary task (waving gesture, transportation of an object, etc.)
and exploits the kinematic redundancy of the robot to convey emotions to humans as
a lower priority task. The emotions, defined by Mehrabian as points in the
pleasure-arousal-dominance space, generate intermediate motion features (jerkiness,
activity and gaze) that carry the emotional information. A map from this features
to the joints of the robot is presented. A user study has been conducted in which
emotional motions have been shown to 30 participants. The results show that
happiness and sadness are very well conveyed to the user, calm is moderately well
conveyed, and fear is not well conveyed. An analysis on the dependencies between
the motion features and the emotions perceived by the participants shows that
activity correlates positively with arousal, jerkiness is not perceived by the
user, and gaze conveys dominance when activity is low. The results indicate a
strong influence of the most energetic motions of the emotional task and point out
new directions for further research. Overall, the results show that the null space
approach can be regarded as a promising means to convey emotions as a lower priority
task.
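The task-priority formulation referred to above is the classical redundancy-resolution
scheme: the secondary (emotion-conveying) joint velocity is filtered through the
null-space projector of the primary task Jacobian, so it cannot disturb the primary
task. The sketch below shows that projection with a pseudoinverse under toy
dimensions; the paper's mapping from pleasure-arousal-dominance values to motion
features is not reproduced.

import numpy as np

def task_priority_velocities(J1, dx1, dq_emotion):
    """Primary task: dq = J1# dx1; the secondary motion is filtered through the
    null-space projector (I - J1# J1) so it cannot disturb the primary task."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector
    return J1_pinv @ dx1 + N1 @ dq_emotion

# toy example: 3-DOF primary task (e.g., hand position) on a 10-joint robot
rng = np.random.default_rng(0)
J1 = rng.normal(size=(3, 10))
dx1 = np.array([0.05, 0.0, 0.0])                   # desired hand velocity
dq_emotion = 0.1 * rng.normal(size=10)             # lower-priority joint motion
dq = task_priority_velocities(J1, dx1, dq_emotion)
print(np.allclose(J1 @ dq, dx1))                   # primary task still satisfied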
C1 [Claret, Josep-Arnau; Basanez, Luis] Univ Politecn Catalunya BarcelonaTech UPC,
Inst Ind & Control Engn, Barcelona, Catalonia, Spain.
[Venture, Gentiane] Tokyo Univ Agr & Technol, Tokyo, Japan.
RP Claret, JA (reprint author), Univ Politecn Catalunya BarcelonaTech UPC, Inst Ind
& Control Engn, Barcelona, Catalonia, Spain.
EM josep.arnau.claret@upc.edu; venture@cc.tuat.ac.jp; luis.basanez@upc.edu
RI Venture, Gentiane/E-7060-2013
OI Venture, Gentiane/0000-0001-7767-4765
FU Spanish MINECO Projects [DPI2011-22471, DPI2013-40882-P,
DPI2014-57757-R]; Spanish predoctoral Grant [BES-2012-054899]; Japanese
challenging exploratory research Grant [15K12124]
FX This work has been partially supported by the Spanish MINECO Projects
DPI2011-22471, DPI2013-40882-P and DPI2014-57757-R, the Spanish
predoctoral Grant BES-2012-054899, and the Japanese challenging
exploratory research Grant 15K12124. The authors would like to thank the
members of the GVLab for their invaluable support with the translations
and attending the local participants during the user study.
CR Adams RB, 2005, EMOTION, V5, P3, DOI 10.1037/1528-3542.5.1.3
Amaya K., 1996, GRAPHICS INTERFACE, V96, P222
Asada M, 2015, INT J SOC ROBOT, V7, P19, DOI 10.1007/s12369-014-0253-z
Baddoura R, 2015, INT J SOC ROBOT, V7, P489, DOI 10.1007/s12369-015-0279-x
Baerlocher P., 1998, INT ROB SYST 1998 P, V1, P323, DOI DOI
10.1109/IR0S.1998.724639
Beck A., 2010, P 3 INT WORKSH AFF I, P37
Bernhardt D, 2007, P 2 INT C AFF COMP I, P59
Berns K, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-12, P3119, DOI 10.1109/IROS.2006.282331
BRADLEY MM, 1994, J BEHAV THER EXP PSY, V25, P49, DOI 10.1016/0005-
7916(94)90063-9
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Breazeal C, 2005, WHO NEEDS EMOTIONS B
Busso C, 2007, IEEE T AUDIO SPEECH, V15, P1075, DOI 10.1109/TASL.2006.885910
Carney DR, 2005, J NONVERBAL BEHAV, V29, P105, DOI 10.1007/s10919-005-2743-z
Chiaverini S, 1997, IEEE T ROBOTIC AUTOM, V13, P398, DOI 10.1109/70.585902
Chiaverini S, 2007, SPRINGER HDB ROBOTIC
Crumpton J, 2016, INT J SOC ROBOT, V8, P271, DOI 10.1007/s12369-015-0329-4
De Schutter J, 2007, INT J ROBOT RES, V26, P433, DOI
[10.1177/027836490707809107, 10.1177/0278364907078091]
Derakshan N, 2009, EUR PSYCHOL, V14, P168, DOI 10.1027/1016-9040.14.2.168
Di Salvo C. F., 2002, P 4 C DES INT SYST P, P321
Gebhard P., 2005, P 4 INT JOINT C AUT, P29
Glowinski D, 2011, IEEE T AFFECT COMPUT, V2, P106, DOI 10.1109/T-AFFC.2011.7
Hudson J, 2017, INT J SOC ROBOT, V9, P199, DOI 10.1007/s12369-016-0384-5
Johnston O., 1981, ILLUSION LIFE DISNEY
Karg M, 2013, IEEE T AFFECT COMPUT, V4, P341, DOI 10.1109/T-AFFC.2013.29
KLEINKE CL, 1986, PSYCHOL BULL, V100, P78, DOI 10.1037//0033-2909.100.1.78
Kulic D, 2007, ROBOTICA, V25, P13, DOI 10.1017/S0263574706002955
Lance BJ, 2008, P 7 INT JOINT C AUT, V1, P199
Lang P. J., 2008, A8 U FLOR CTR RES PS
LIEGEOIS A, 1977, IEEE T SYST MAN CYB, V7, P868
Lim Angelica, 2011, 2011 11th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2011), P472, DOI 10.1109/Humanoids.2011.6100891
Mansard N., 2009, INT C ADV ROB, P1
Mehrabian A, 1996, CURR PSYCHOL, V14, P261, DOI 10.1007/BF02686918
Montepare J, 1999, J NONVERBAL BEHAV, V23, P133, DOI 10.1023/A:1021435526134
Nakagawa K, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P5003, DOI 10.1109/IROS.2009.5354205
Nomura T, 2010, INT J SOC ROBOT, V2, P147, DOI 10.1007/s12369-010-0050-2
Oswald A, 2009, 4645 IZA
Palanica A, 2012, J NONVERBAL BEHAV, V36, P123, DOI 10.1007/s10919-011-0128-z
Pierre-Yves O, 2003, INT J HUM-COMPUT ST, V59, P157, DOI 10.1016/S1071-
5819(03)00141-6
Saerbeck M, 2010, 2010 5 ACM IEEE INT, P53, DOI [10.1109/HRI.2010.5453269, DOI
10.1109/HRI.2010.5453269]
Sentis L., 2005, INT J HUM ROBOT, V2, P505, DOI DOI 10.1142/S0219843605000594
Siciliano B, 1991, P IEEE INT C ADV ROB, V2, P1211, DOI DOI
10.1109/ICAR.1991.240390
Tang D, 2015, J NONVERBAL BEHAV, V39, P181, DOI 10.1007/s10919-015-0206-8
Tapus A, 2007, ROBOTICS AUTOMATION, V14, P35, DOI DOI 10.1109/MRA.2007.339605
Tapus A, 2007, P AAAI SPRING S MULT
Unuma Munetoshi, 1995, P SIGGRAPH 95, P91, DOI DOI 10.1145/218380.218419
White GD, 2009, IEEE-ASME T MECH, V14, P349, DOI 10.1109/TMECH.2008.2008802
Zheng M, 2015, INT J SOC ROBOT, V7, P783, DOI 10.1007/s12369-015-0305-z
NR 47
TC 0
Z9 0
U1 5
U2 5
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD APR
PY 2017
VL 9
IS 2
BP 277
EP 292
DI 10.1007/s12369-016-0387-2
PG 16
WC Robotics
SC Robotics
GA ET2UO
UT WOS:000400129800009
DA 2018-01-22
ER

PT J
AU Nishita, S
Onoe, H
AF Nishita, Satoshi
Onoe, Hiroaki
TI Liquid-filled Flexible Micro Suction-Controller Array for Enhanced
Robotic Object Manipulation
SO JOURNAL OF MICROELECTROMECHANICAL SYSTEMS
LA English
DT Article
DE Flexible; microfluidics; polydimethylsiloxane (PDMS); robotic
manipulation; suction devices
AB With the intent to enhance robotic manipulation, this paper describes a novel
liquid-filled flexible micro suction-controller array (MISCA) for humanoid robotic
hands that can hold curved or grooved surface objects. The proposed MISCA comprises
49 suction units arrayed in a 10 mm x 10 mm area of flexible polydimethylsiloxane
sheet. Each 1 mm diameter suction unit generates suction force independently. An
incompressible working fluid (ethylene glycol) is injected or drained through
microchannels in the MISCA using a syringe pump to control the suction force. In
experiments, the proposed MISCA effectively generated suction forces of 0.96-1.54 N
on flat surfaces, 0.43-0.63 N on cylindrical surfaces, and 0.60-0.83 N on spherical
surfaces. In addition, the proposed MISCA was demonstrated to successfully
manipulate a 75 g flat object (a watch), a 1-g grooved object (a yen coin), and a
curved object (a tablet) using suction control.
C1 [Nishita, Satoshi; Onoe, Hiroaki] Keio Univ, Fac Sci & Technol, Dept Mech Engn,
Yokohama, Kanagawa 2238522, Japan.
RP Onoe, H (reprint author), Keio Univ, Fac Sci & Technol, Dept Mech Engn,
Yokohama, Kanagawa 2238522, Japan.
EM onoe@mech.keio.ac.jp
CR Choi KM, 2003, J AM CHEM SOC, V125, P4060, DOI 10.1021/ja029973k
Cutkosky M. R., 1987, P 1987 IEEE INT C RO, DOI [10.1109/ROBOT.1987.1087913, DOI
10.1109/ROBOT.1987.1087913]
Eddings MA, 2008, J MICROMECH MICROENG, V18, DOI 10.1088/0960-1317/18/6/067001
HARMO P, 2005, P IEEE RSJ INT C INT, P2721
Horie T., 2007, P 20 IEEE INT C MICR, P691
Iwata H., 2009, P IEEE INT C ROB AUT, P580
Johnston ID, 2014, J MICROMECH MICROENG, V24, DOI 10.1088/0960-1317/24/3/035017
Kessens CC, 2011, J MECH ROBOT, V3, DOI 10.1115/1.4004893
Lee JN, 2003, ANAL CHEM, V75, P6544, DOI 10.1021/ac0346712
Manabe Ryoichi, 2013, J ROBOTICS SOC JAPAN, V31, P98
Nakao M., 1995, T JPN SOC MECH ENG-C, V61, P1021
Pritchard RH, 2013, SOFT MATTER, V9, P6037, DOI 10.1039/c3sm50901j
Robinson H, 2014, INT J SOC ROBOT, V6, P575, DOI 10.1007/s12369-014-0242-2
Satyanarayana S, 2005, J MICROELECTROMECH S, V14, P392, DOI
10.1109/JMEMS.2004.839334
Sawano S, 2008, PROC IEEE MICR ELECT, P419
SHIMOGA KB, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P1300, DOI 10.1109/ROBOT.1992.220069
Stuckler J., 2009, P 9 IEEE RAS INT C H, P506
Takahashi T, 2013, IEEE INT CONF ROBOT, P364, DOI 10.1109/ICRA.2013.6630601
Thanh-Vinh N, 2011, PROC IEEE MICR ELECT, P284, DOI 10.1109/MEMSYS.2011.5734417
Timoshenko S, 1964, THEORY PLATES SHELLS
Wygant IO, 2008, ULTRASON, P2111, DOI 10.1109/ULTSYM.2008.0522
Zesch W, 1997, IEEE INT CONF ROBOT, P1761, DOI 10.1109/ROBOT.1997.614405
NR 22
TC 0
Z9 0
U1 6
U2 7
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1057-7157
EI 1941-0158
J9 J MICROELECTROMECH S
JI J. Microelectromech. Syst.
PD APR
PY 2017
VL 26
IS 2
BP 366
EP 375
DI 10.1109/JMEMS.2017.2651882
PG 10
WC Engineering, Electrical & Electronic; Nanoscience & Nanotechnology;
Instruments & Instrumentation; Physics, Applied
SC Engineering; Science & Technology - Other Topics; Instruments &
Instrumentation; Physics
GA ES2CQ
UT WOS:000399333800009
DA 2018-01-22
ER

PT J
AU Zhang, TW
Caron, S
Nakamura, Y
AF Zhang, Tianwei
Caron, Stephane
Nakamura, Yoshihiko
TI Supervoxel Plane Segmentation and Multi-Contact Motion Generation for
Humanoid Stair Climbing
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Plane segmentation; humanoid stair climbing; static stability
ID PERCEPTION
AB Stair climbing is still a challenging task for humanoid robots, especially in
unknown environments. In this paper, we address this problem from perception to
execution. Our first contribution is a real-time plane-segment estimation method
using Lidar data without prior models of the staircase. We then integrate this
solution with humanoid motion planning. Our second contribution is a stair-climbing
motion generator where estimated plane segments are used to compute footholds and
stability polygons. We evaluate our method on various staircases. We also
demonstrate the feasibility of the generated trajectories in a real-life experiment
with the humanoid robot HRP-4.
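As a generic stand-in for the plane-segment estimation described above (the paper uses
a supervoxel-based method), the sketch below fits a single plane to a point cloud with
a basic RANSAC loop: sample three points, fit a plane, count inliers, keep the best
hypothesis. Thresholds and sizes are arbitrary assumptions, and the foothold and
stability-polygon computation is not covered.

import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.02, rng=None):
    """Return (normal, d, inlier_mask) for the plane n.x + d = 0 with the
    largest inlier set among randomly sampled 3-point hypotheses."""
    rng = np.random.default_rng(rng)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                 # degenerate sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < inlier_tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# toy cloud: a horizontal step at z = 0.15 m plus noise and outliers
rng = np.random.default_rng(0)
step = np.column_stack([rng.uniform(0, 0.3, 500), rng.uniform(0, 1.0, 500),
                        0.15 + rng.normal(0, 0.005, 500)])
outliers = rng.uniform(-0.5, 0.5, (100, 3))
n, d, mask = ransac_plane(np.vstack([step, outliers]), rng=1)
print(np.round(n, 2), round(float(d), 3), int(mask.sum()))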
C1 [Zhang, Tianwei; Nakamura, Yoshihiko] Univ Tokyo, Sch Informat Sci & Technol,
Dept Mechnoinformat, Bunkyo Ku, 7-3-1 Hongo, Tokyo, Japan.
[Caron, Stephane] CNRS, LIRMM, UM2, 161 Rue Ada, Montpellier, France.
RP Zhang, TW (reprint author), Univ Tokyo, Sch Informat Sci & Technol, Dept
Mechnoinformat, Bunkyo Ku, 7-3-1 Hongo, Tokyo, Japan.
EM zhang@ynl.t.u-tokyo.ac.jp; stephane.caron@normalesup.org;
nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU NEDO International R&D and Demonstrative Project in Environmental and
Medical Fields/International R&D and Demonstrative Project in
Robotics Fields/"Research and Development of Robot Open Platform for
Disaster Response"
FX This research was supported by 2014-2015 NEDO International R&D and
Demonstrative Project in Environmental and Medical Fields/International
R&D and Demonstrative Project in Robotics Fields/"Research and
Development of Robot Open Platform for Disaster Response" (PI: Yoshihiko
Nakamura).
CR Bretl T, 2008, IEEE T ROBOT, V24, P794, DOI 10.1109/TRO.2008.2001360
Caron S, 2015, IEEE INT CONF ROBOT, P5107, DOI 10.1109/ICRA.2015.7139910
Fukuda K., 1996, Combinatorics and Computer Science. 8th Franco-Japanese and 4th
Franco-Chinese Conference. Selected Papers, P91
Gutmann JS, 2008, INT J ROBOT RES, V27, P1117, DOI 10.1177/0278364908096316
Gutmann JS, 2004, PROCEEDINGS OF 2004, V2, P1407
Holzer S, 2012, IEEE INT C INT ROBOT, P2684, DOI 10.1109/IROS.2012.6385999
Ioannou Y, 2012, SECOND JOINT 3DIM/3DPVT CONFERENCE: 3D IMAGING, MODELING,
PROCESSING, VISUALIZATION & TRANSMISSION (3DIMPVT 2012), P501, DOI
10.1109/3DIMPVT.2012.12
Marton ZC, 2009, IEEE INT C ROB AUT, P3218
Osswald S., 2011, IEEE RSJ INT C INT R, P4844
Osswald S., 2011, IEEE RAS INT C HUM R, P93
Osswald S, 2012, IEEE INT C INT ROBOT, P1809, DOI 10.1109/IROS.2012.6385657
Papon J, 2013, PROC CVPR IEEE, P2027, DOI 10.1109/CVPR.2013.264
Rusu RB, 2010, KUNSTL INTELL, V24, P345, DOI 10.1007/s13218-010-0059-6
Rusu RB, 2009, J FIELD ROBOT, V26, P841, DOI 10.1002/rob.20313
Santacruz C, 2013, IEEE INT CONF ROBOT, P3110, DOI 10.1109/ICRA.2013.6631009
Sardain P., 2004, SYSTEMS MAN CYBERN A, V34, P630
Steinbrucker F, 2013, IEEE I CONF COMP VIS, P3264, DOI 10.1109/ICCV.2013.405
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Vukobratovic M, 2006, INT J HUM ROBOT, V3, P153, DOI 10.1142/S0219843606000710
Wieber PB, 2006, LECT NOTES CONTR INF, V340, P411
NR 22
TC 0
Z9 0
U1 3
U2 3
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2017
VL 14
IS 1
AR UNSP 1650022
DI 10.1142/S0219843616500225
PG 18
WC Robotics
SC Robotics
GA EM2SU
UT WOS:000395166700002
DA 2018-01-22
ER

PT J
AU Nishiguchi, S
Ogawa, K
Yoshikawa, Y
Chikaraishi, T
Hirata, O
Ishiguro, H
AF Nishiguchi, Shogo
Ogawa, Kohei
Yoshikawa, Yuichiro
Chikaraishi, Takenobu
Hirata, Oriza
Ishiguro, Hiroshi
TI Theatrical approach: Designing human-like behaviour in humanoid robots
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Robot theatre; Robot art and performance; Human-like behaviour; Humanoid
robots; Robot interface
ID ANTHROPOMORPHISM
AB A key challenge in designing humanoid robots is producing natural and human-like
behaviour. This study addressed this challenge by developing an instructive
interface to design natural and human-like behaviour in a humanoid robot that will
be helpful not only for professional robot motion designers but also for
nonprofessionals. A staging method called contemporary colloquial theatre theory
has been developed in the field of theatre to reproduce the natural behaviour of
humans. Based on the theory, to design robot behaviour, we extracted implicit
knowledge from a director's instructions given to a robot actor in a robot theatre
project, where the robot functions as an actor and interacts with human actors.
This paper shows the validity of applying this knowledge extraction method by means
of a public stage play entitled Night On The Milky Way Train and two short plays
created for the purpose of carrying out an analysis. Our analysis produced
important rules for designing the behaviour of humanoid robots. The rules were
extracted in forms suited to the instructive function of the interface. In this
paper, we report the results of a subjective experiment conducted to verify the
effectiveness of this function. Our experimental results suggest that the rules
derived are effective in improving robot behaviour. (C) 2016 Elsevier B.V. All
rights reserved.
C1 [Nishiguchi, Shogo; Ogawa, Kohei; Yoshikawa, Yuichiro; Ishiguro, Hiroshi] Osaka
Univ, 1-16 Machikaneyama, Toyonaka, Osaka, Japan.
[Chikaraishi, Takenobu; Hirata, Oriza] Tokyo Univ Arts, 12-8 Ueno Pk,Taito Ward,
Tokyo, Japan.
EM nishiguchi.shogo@irl.sys.es.osaka-u.ac.jp;
ogawa@irl.sys.es.osaka-u.ac.jp; yoshikawa@irl.sys.es.osaka-u.ac.jp;
chikaraishi.takenobu@pc.geidai.ac.jp; hhd00607@nifty.com;
ishiguro@irl.sys.es.osaka-u.ac.jp
CR Bartneck C, 2009, INT J SOC ROBOT, V1, P71, DOI 10.1007/s12369-008-0001-3
Demers L., 2010, ACM IEEE HRI WORKSH
Ding M., 2013, SENSORS 2013 IEEE, P1, DOI DOI 10.1109/ICSENS.2013.6688431
Duncan BA, 2010, ACMIEEE INT CONF HUM, P91, DOI 10.1109/HRI.2010.5453254
Goan M., 2007, COGN SCI RHETORIC, V14, P509
Harada K, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-12, P833, DOI 10.1109/IROS.2006.281733
Ishiguro H., 2011, ROBOTICS SOC JAPAN, V29, P35
Kanda T, 2010, IEEE T ROBOT, V26, P897, DOI 10.1109/TRO.2010.2062550
KENDON A, 1967, ACTA PSYCHOL, V26, P22, DOI 10.1016/0001-6918(67)90005-4
Lemaignan S., 2012, P 7 ANN ACM IEEE INT, P427
Namera K, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P119, DOI 10.1109/ROMAN.2008.4600653
Ogawa K., 2014, 23 IEEE INT S ROB HU
Yamamoto M., 2006, ROB HUM INT COMM ROM, P629
Yousuf Mohammad Abu, 2012, Intelligent Computing Technology. Proceedings 8th
International Conference, ICIC 2012, P423, DOI 10.1007/978-3-642-31588-6_55
Zhang T, 2008, IEEE INT CON AUTO SC, P674, DOI 10.1109/COASE.2008.4626532
NR 15
TC 0
Z9 0
U1 1
U2 8
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD MAR
PY 2017
VL 89
BP 158
EP 166
DI 10.1016/j.robot.2016.11.017
PG 9
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA EI7KJ
UT WOS:000392676300014
DA 2018-01-22
ER

PT J
AU Takeda, R
Komatani, K
AF Takeda, Ryu
Komatani, Kazunori
TI Noise-Robust MUSIC-Based Sound Source Localization Using Steering Vector
Transformation for Small Humanoids
SO JOURNAL OF ROBOTICS AND MECHATRONICS
LA English
DT Article
DE sound source localization; MUSIC; matrix decomposition; microphone
array; robot
AB We focus on the problem of localizing soft/weak voices recorded by small
humanoid robots, such as NAO. Sound source localization (SSL) for such robots
requires fast processing and noise robustness owing to the restricted resources and
the internal noise close to the microphones. Multiple signal classification using
generalized eigenvalue decomposition (GEVD-MUSIC) is a promising method for SSL. It
achieves noise robustness by whitening robot internal noise using prior noise
information. However, whitening increases the computational cost and creates a
direction-dependent bias in the localization score, which degrades the localization
accuracy. We have thus developed a new implementation of GEVD-MUSIC based on
steering vector transformation (TSV-MUSIC). The application of a transformation
equivalent to whitening to steering vectors in advance reduces the real-time
computational cost of TSV-MUSIC. Moreover, normalization of the transformed vectors
cancels the direction-dependent bias and improves the localization accuracy.
Experiments using simulated data showed that TSV-MUSIC had the highest accuracy of
the methods tested. An experiment using real recorded data showed that TSV-MUSIC
outperformed GEVD-MUSIC and the other MUSIC methods in terms of localization accuracy
by about 4 points under low signal-to-noise-ratio conditions.
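The core manipulation described above can be illustrated with the standard MUSIC
pseudospectrum: the whitening transformation is applied to the steering vectors once,
offline, and the transformed vectors are re-normalized to cancel the
direction-dependent bias, so run-time localization reduces to projecting them onto the
noise subspace of the observed covariance. The array geometry, matrix sizes, and
identity whitening in the sketch below are toy assumptions, not the paper's setup.

import numpy as np

def tsv_music_spectrum(R_obs, steering, W):
    """R_obs: (M, M) observed spatial covariance; steering: (D, M) candidate
    steering vectors; W: (M, M) whitening matrix from prior noise covariance.
    Steering vectors are transformed and normalized in advance (offline)."""
    a_t = steering @ W.conj().T
    a_t = a_t / np.linalg.norm(a_t, axis=1, keepdims=True)    # cancel the bias
    Rw = W @ R_obs @ W.conj().T                               # whitened covariance
    vals, vecs = np.linalg.eigh(Rw)
    En = vecs[:, :-1]                       # noise subspace (assume one source)
    denom = np.linalg.norm(a_t @ En.conj(), axis=1) ** 2
    return 1.0 / np.maximum(denom, 1e-12)

# toy 4-mic array, 36 candidate directions, one simulated source at index 20
rng = np.random.default_rng(0)
M, D = 4, 36
steering = np.exp(1j * rng.uniform(0, 2 * np.pi, (D, M)))
src = steering[20]
R_obs = np.outer(src, src.conj()) + 0.1 * np.eye(M)
W = np.eye(M)                               # identity whitening for this toy case
print(int(np.argmax(tsv_music_spectrum(R_obs, steering, W))))   # expected: 20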
C1 [Takeda, Ryu; Komatani, Kazunori] Osaka Univ, Inst Sci & Ind Res, 8-1 Mihogaoka,
Ibaraki, Osaka 5670047, Japan.
RP Takeda, R (reprint author), Osaka Univ, Inst Sci & Ind Res, 8-1 Mihogaoka,
Ibaraki, Osaka 5670047, Japan.
EM rtakeda@sanken.osaka-u.ac.jp; komatani@sanken.osaka-u.ac.jp
FU JSPS KAKENHI Grant [JP15K16051]
FX This work was supported by JSPS KAKENHI Grant Number JP15K16051.
CR Argentieri S, 2015, COMPUT SPEECH LANG, V34, P87, DOI 10.1016/j.csl.2015.03.003
Argentieri S., 2007, P IEEE RSJ INT C INT, P2009
Asano F, 2013, IEEE T AUDIO SPEECH, V21, P1953, DOI 10.1109/TASL.2013.2263140
Badali A, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P2033, DOI 10.1109/IROS.2009.5354308
CAPON J, 1969, P IEEE, V57, P1408, DOI 10.1109/PROC.1969.7278
KNAPP CH, 1976, IEEE T ACOUST SPEECH, V24, P320, DOI 10.1109/TASSP.1976.1162830
KRIM H, 1996, SIGNAL PROCESSING MA, V13, P67, DOI DOI 10.1109/79.526899
Miyazaki T., 2010, J ADV COMPUTATIONAL, V14, P135
Nakadai K, 2000, SEVENTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE
(AAAI-2001) / TWELFTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE
(IAAI-2000), P832
Nakadai K., 2002, J ROBOTICS MECHATRON, V14, P479
Nakadai K., 2008, P IEEE RAS INT C HUM, P561
Nakamura K, 2012, IEEE INT C INT ROBOT, P694, DOI 10.1109/IROS.2012.6385494
Nakamura K, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P664, DOI 10.1109/IROS.2009.5354419
Nakatani T, 2008, INT CONF ACOUST SPEE, P85, DOI 10.1109/ICASSP.2008.4517552
Ohata T, 2014, IEEE INT C INT ROBOT, P1902, DOI 10.1109/IROS.2014.6942813
ROY R, 1989, IEEE T ACOUST SPEECH, V37, P984, DOI 10.1109/29.32276
SCHMIDT RO, 1986, IEEE T ANTENN PROPAG, V34, P276, DOI 10.1109/TAP.1986.1143830
Taghizadeh MJ, 2015, INT CONF ACOUST SPEE, P2579, DOI
10.1109/ICASSP.2015.7178437
Takeda R., 2007, P IEEE RSJ INT C INT
Takeda R, 2015, IEEE-RAS INT C HUMAN, P859, DOI 10.1109/HUMANOIDS.2015.7363462
NR 20
TC 1
Z9 1
U1 1
U2 1
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 0915-3942
EI 1883-8049
J9 J ROBOT MECHATRON
JI J. Robot. Mechatron.
PD FEB
PY 2017
VL 29
IS 1
BP 26
EP 36
DI 10.20965/jrm.2017.p0026
PG 11
WC Robotics
SC Robotics
GA EM0FG
UT WOS:000394993600004
DA 2018-01-22
ER

PT J
AU Nakadai, K
Tezuka, T
Yoshida, T
AF Nakadai, Kazuhiro
Tezuka, Taiki
Yoshida, Takami
TI Ego-Noise Suppression for Robots Based on Semi-Blind Infinite
Non-Negative Matrix Factorization
SO JOURNAL OF ROBOTICS AND MECHATRONICS
LA English
DT Article
DE robot audition; ego-noise suppression; non-parametric Bayesian
ID SPEECH RECOGNITION
AB This paper addresses ego-motion noise suppression for a robot. Many ego-motion
noise suppression methods use motion information such as position, velocity, and
the acceleration of each joint to infer ego-motion noise. However, such inferences
are not reliable, since motion information and ego-motion noise are not always
correlated. We propose a new framework for ego-motion noise suppression based on
single channel processing using only acoustic signals captured with a microphone.
In the proposed framework, ego-motion noise features and their numbers are
automatically estimated in advance from an ego-motion noise input using Infinite
Non-negative Matrix Factorization (INMF), which is a non-parametric Bayesian model
that does not use explicit motion information. After that, the proposed Semi-Blind
INMF (SB-INMF) is applied to an input signal that consists of both the target and
ego-motion noise signals. Ego-motion noise features, which are obtained with INMF,
are used as inputs to the SB-INMF, and are treated as the fixed features for
extracting the target signal. Finally, the target signal is extracted with SB-INMF
using these newly-estimated features. The proposed framework was applied to ego-
motion noise suppression on two types of humanoid robots. Experimental results
showed that ego-motion noise was effectively and efficiently suppressed in terms of
both signal-to-noise ratio and performance of automatic speech recognition compared
to a conventional template-based ego-motion noise suppression method using motion
information. Thus, the proposed method worked properly on a robot without a motion
information interface.
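The semi-blind factorization described above can be pictured with ordinary NMF in
which the noise bases, learned beforehand from an ego-noise-only recording, are held
fixed while only the target bases and the activations are updated. The sketch below
uses the standard multiplicative updates for the Euclidean cost as a simple stand-in
for the paper's non-parametric Bayesian (infinite) formulation; every size and
iteration count is an arbitrary assumption.

import numpy as np

def semi_blind_nmf(V, W_noise, k_target=5, n_iter=200, eps=1e-9, rng=None):
    """V: (F, T) magnitude spectrogram of target + ego-noise.
    W_noise: (F, Kn) fixed noise bases. Returns the target-only estimate."""
    rng = np.random.default_rng(rng)
    F, T = V.shape
    W_t = rng.random((F, k_target)) + eps          # target bases (learned)
    W = np.hstack([W_t, W_noise])
    H = rng.random((W.shape[1], T)) + eps          # all activations (learned)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W_t *= (V @ H[:k_target].T) / (W @ H @ H[:k_target].T + eps)
        W = np.hstack([W_t, W_noise])              # noise bases stay fixed
    return W_t @ H[:k_target]                      # reconstructed target part

# toy spectrogram: low-rank target plus a known broadband ego-noise pattern
rng = np.random.default_rng(0)
F, T = 64, 100
W_noise = rng.random((F, 3))
V = rng.random((F, 5)) @ rng.random((5, T)) + W_noise @ rng.random((3, T))
target_est = semi_blind_nmf(V, W_noise, rng=1)
print(target_est.shape)    # (64, 100)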
C1 [Nakadai, Kazuhiro; Tezuka, Taiki; Yoshida, Takami] Tokyo Inst Technol, Grad Sch
Informat Sci & Engn, Meguro Ku, 2-12-1 Ookayama, Tokyo 1528552, Japan.
[Nakadai, Kazuhiro] Honda Res Inst Japan Co Ltd, 8-1 Honcho, Wako, Saitama
3510188, Japan.
RP Nakadai, K (reprint author), Tokyo Inst Technol, Grad Sch Informat Sci & Engn,
Meguro Ku, 2-12-1 Ookayama, Tokyo 1528552, Japan.; Nakadai, K (reprint author),
Honda Res Inst Japan Co Ltd, 8-1 Honcho, Wako, Saitama 3510188, Japan.
EM nakadai@jp.honda-ri.com
FU ImPACT Tough Robotics Challenge; [24118702]; [24220006]; [16H02884];
[16K00294]
FX This research was partially supported by Grant-in-Aid for Scientific
Research No. 24118702, 24220006, 16H02884, 16K00294, and ImPACT Tough
Robotics Challenge.
CR Abdallah S. A., 2004, P 5 INT C MUS INF RE, P10
BOLL SF, 1979, IEEE T ACOUST SPEECH, V27, P113, DOI 10.1109/TASSP.1979.1163209
DELEFORGE A, 2015, INT CONF ACOUST SPEE, P355
Even J, 2012, IEEE INT C INT ROBOT, P986, DOI 10.1109/IROS.2012.6386046
Even J, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P658, DOI 10.1109/IROS.2009.5354451
Griffiths TL, 2011, J MACH LEARN RES, V12, P1185
Blei David M, 2010, P 27 INT C MACH LEAR, P439
Hornstein J., 2006, P IEEE RSJ INT C INT, P1171
Ince G., 2011, P IEEE INT C ROB AUT, P3517
Ince G, 2011, ADV ROBOTICS, V25, P1405, DOI 10.1163/016918611X579448
Ito A., 2005, P EUR C SPEECH COMM, P2685
Nakadai K, 2004, SPEECH COMMUN, V44, P97, DOI 10.1016/j.specom.2004.10.010
Nakadai K, 2000, SEVENTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE
(AAAI-2001) / TWELFTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE
(IAAI-2000), P832
Nakanishi K, 2005, 11th International Conference on Parallel and Distributed
Systems Workshops, Vol II, Proceedings,, P196
Nishimura Y, 2006, IEEE-RAS INT C HUMAN, P26, DOI 10.1109/ICHR.2006.321359
Oliveira J. L., 2015, I J HUMANOID ROBOTIC, V12
Parra LC, 2002, IEEE T SPEECH AUDI P, V10, P352, DOI 10.1109/TSA.2002.803443
Perrodin F, 2012, IEEE INT C INT ROBOT, P4596, DOI 10.1109/IROS.2012.6385985
Portello A., 2012, P IEEE RSJ INT C INT, P3294
Raj B, 2004, SPEECH COMMUN, V43, P275, DOI 10.1016/j.specom.2004.03.007
Rodemann T, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-12, P860, DOI 10.1109/IROS.2006.281738
Saruwatari H., 2005, P IEEE RSJ INT C INT, P209
Sasaki Y, 2012, IEEE INT C INT ROBOT, P713, DOI 10.1109/IROS.2012.6385877
Schmidt M. N., 2010, EUR SIGN PROC C EUSI
Shimoda T, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-12, P386, DOI 10.1109/IROS.2006.281827
Tezuka T, 2014, IEEE INT CONF ROBOT, P6293, DOI 10.1109/ICRA.2014.6907787
Valin J.-M., 2004, P IEEE INT C ROB AUT
Virtanen T, 2007, IEEE T AUDIO SPEECH, V15, P1066, DOI 10.1109/TASL.2006.885253
YAMAMOTO S, 2007, 2007 IEEE WORKSH, P111
Yoshida T., 2012, P HUM 2012, P370
NR 30
TC 0
Z9 0
U1 0
U2 0
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 0915-3942
EI 1883-8049
J9 J ROBOT MECHATRON
JI J. Robot. Mechatron.
PD FEB
PY 2017
VL 29
IS 1
BP 114
EP 124
DI 10.20965/jrm.2017.p0114
PG 11
WC Robotics
SC Robotics
GA EM0FG
UT WOS:000394993600012
DA 2018-01-22
ER

PT J
AU Ohkita, M
Bando, Y
Nakamura, E
Itoyama, K
Yoshii, K
AF Ohkita, Misato
Bando, Yoshiaki
Nakamura, Eita
Itoyama, Katsutoshi
Yoshii, Kazuyoshi
TI Audio-Visual Beat Tracking Based on a State-Space Model for a Robot
Dancer Performing with a Human Dancer
SO JOURNAL OF ROBOTICS AND MECHATRONICS
LA English
DT Article
DE robot dancer; real-time beat tracking; state-space model; audio-visual
integration
ID AUDIO; SYNCHRONIZATION; SYSTEM; MUSIC
AB This paper presents a real-time beat-tracking method that integrates audio and
visual information in a probabilistic manner to enable a humanoid robot to dance in
synchronization with music and human dancers. Most conventional music robots have
focused on either music audio signals or movements of human dancers to detect and
predict beat times in real time. Since a robot needs to record music audio signals
with its own microphones, however, the signals are severely contaminated with loud
environmental noise. To solve this problem, we propose a state-space model that
encodes a pair of a tempo and a beat time in a state-space and represents how
acoustic and visual features are generated from a given state. The acoustic
features consist of tempo likelihoods and onset likelihoods obtained from music
audio signals and the visual features are tempo likelihoods obtained from dance
movements. The current tempo and the next beat time are estimated in an online
manner from a history of observed features by using a particle filter. Experimental
results show that the proposed multi-modal method using a depth sensor (Kinect) to
extract skeleton features outperformed conventional mono-modal methods in terms of
beat-tracking accuracy in a noisy and reverberant environment.
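One step of the on-line inference described above can be sketched as a bootstrap
particle filter over a (tempo, next-beat-time) state: propagate particles with a small
tempo drift, weight them by the product of the acoustic and visual likelihoods,
resample when the effective sample size collapses, and report the weighted mean. The
likelihood functions and all constants below are placeholder assumptions, not the
observation models of the paper.

import numpy as np

def beat_pf_step(particles, weights, t_now, acoustic_lik, visual_lik, rng=None):
    """particles: (N, 2) columns = (tempo in BPM, next beat time in s)."""
    rng = np.random.default_rng(rng)
    # 1. propagate: small random walk on tempo, passed beats advance by one period
    particles[:, 0] += rng.normal(0.0, 1.0, len(particles))
    behind = particles[:, 1] < t_now
    particles[behind, 1] += 60.0 / particles[behind, 0]
    # 2. weight by the two modalities (assumed independent given the state)
    weights *= acoustic_lik(particles) * visual_lik(particles)
    weights /= weights.sum()
    # 3. resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    est = weights @ particles               # estimated (tempo, next beat time)
    return particles, weights, est

# toy run: both modalities softly prefer 120 BPM
rng = np.random.default_rng(0)
N = 500
particles = np.column_stack([rng.uniform(60, 180, N), rng.uniform(0, 0.5, N)])
weights = np.full(N, 1.0 / N)
lik = lambda p: np.exp(-0.5 * ((p[:, 0] - 120.0) / 10.0) ** 2)
for step in range(20):
    particles, weights, est = beat_pf_step(particles, weights, 0.1 * step,
                                           lik, lik, rng=step)
print(np.round(est, 2))     # tempo should settle near 120 BPM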
C1 [Ohkita, Misato; Bando, Yoshiaki; Nakamura, Eita; Itoyama, Katsutoshi; Yoshii,
Kazuyoshi] Kyoto Univ, Grad Sch Informat, Sakyo Ku, Kyoto 6068501, Japan.
RP Ohkita, M (reprint author), Kyoto Univ, Grad Sch Informat, Sakyo Ku, Kyoto
6068501, Japan.
EM ohkita@sap.ist.i.kyoto-u.ac.jp; bando@sap.ist.i.kyoto-u.ac.jp;
enakamura@sap.ist.i.kyoto-u.ac.jp; itoyama@sap.ist.i.kyoto-u.ac.jp;
yoshii@sap.ist.i.kyoto-u.ac.jp
FU JSPS KAKENHI [24220006, 26700020, 24700168, 26280089, 15K16054,
16J05486]; JST CREST OngaCREST project; JST CREST ACCEL OngaACCEL
project
FX This study was supported in part by JSPS KAKENHI Nos. 24220006,
26700020, 24700168, 26280089, 15K16054, and 16J05486 as well as the JST
CREST OngaCREST and ACCEL OngaACCEL projects.
CR Arulampalam MS, 2002, IEEE T SIGNAL PROCES, V50, P174, DOI 10.1109/78.978374
Berman D. R., 2012, THESIS
Bock S., 2016, INT SOC MUS INF RETR, P255
Chu WT, 2012, IEEE T MULTIMEDIA, V14, P129, DOI 10.1109/TMM.2011.2172401
Davies MEP, 2007, IEEE T AUDIO SPEECH, V15, P1009, DOI 10.1109/TASL.2006.885257
Dixon S, 2007, J NEW MUSIC RES, V36, P39, DOI 10.1080/09298210701653310
Durand S., 2016, INT SOC MUS INF RETR, P386
Ellis DPW, 2007, J NEW MUSIC RES, V36, P51, DOI 10.1080/09298210701653344
Elowsson A., 2016, INT SOC MUS INF RETR, P351
Goto M, 2001, J NEW MUSIC RES, V30, P159, DOI 10.1076/jnmr.30.2.159.7114
Guedes C., 2006, SOUND MUS COMP C SMC, P25
Itohara T, 2011, IEEE INT C INT ROBOT, P118, DOI 10.1109/IROS.2011.6048380
Kosuge K., 2008, SICE J CONTROL MEASU, V1, P74
Krebs F., 2016, INT SOC MUS INF RETR, P129
Kusuda Y, 2008, IND ROBOT, V35, P504, DOI 10.1108/01439910810909493
Lim A, 2010, IEEE INT C INT ROBOT, P1964, DOI 10.1109/IROS.2010.5650427
Maruo S., 2016, THESIS
Murata K, 2008, IEEE-RAS INT C HUMAN, P79, DOI 10.1109/ICHR.2008.4755935
Nakadai K., 2002, J ROBOTICS MECHATRON, V14, P479
Nakadai K, 2010, ADV ROBOTICS, V24, P739, DOI 10.1163/016918610X493561
Nakaoka Shin'ichiro, 2011, Synthesiology, V4, P80, DOI 10.5571/synth.4.80
Ohkita M, 2015, IEEE INT C INT ROBOT, P5555, DOI 10.1109/IROS.2015.7354164
Oliveira JL, 2012, IEEE INT C INT ROBOT, P992, DOI 10.1109/IROS.2012.6386100
Petersen K, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P313, DOI
10.1109/IROS.2008.4650831
Petersen K, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P2303, DOI 10.1109/IROS.2009.5354550
RASCH RA, 1979, ACUSTICA, V43, P121
Sasaki Y., 2010, J ROBOTICS MECHATRON, V22, P402
Sasaki Y., 2007, J ROBOTICS MECHATRON, V19, P281
Shiratori T, 2003, PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON
MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS, P89, DOI 10.1109/MFI-
2003.2003.1232638
Stark A. M., 2009, INT C DIG AUD EFF DA, P299
Takeda R., 2007, IEEE RSJ IROS, P1757
Weinberg G., 2009, INT C HUM ROB INT HR, P233
NR 32
TC 0
Z9 0
U1 2
U2 2
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 0915-3942
EI 1883-8049
J9 J ROBOT MECHATRON
JI J. Robot. Mechatron.
PD FEB
PY 2017
VL 29
IS 1
BP 125
EP 136
DI 10.20965/jrm.2017.p0125
PG 12
WC Robotics
SC Robotics
GA EM0FG
UT WOS:000394993600013
DA 2018-01-22
ER

PT J
AU Caron, S
Pham, QC
Nakamura, Y
AF Caron, Stephane
Quang-Cuong Pham
Nakamura, Yoshihiko
TI ZMP Support Areas for Multicontact Mobility Under Frictional Constraints
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Contact stability; humanoid locomotion; zero-tilting moment point (ZMP)
ID HUMANOID ROBOTS; LEGGED ROBOTS; BIPED ROBOTS; CONTACT; STABILITY;
FORCES; FOOT; GENERATION; DYNAMICS; BALANCE
AB We propose a method for checking and enforcing multicontact stability based on
the zero-tilting moment point (ZMP). The key to our development is the
generalization of ZMP support areas to take into account: 1) frictional constraints
and 2) multiple noncoplanar contacts. We introduce and investigate two kinds of ZMP
support areas. First, we characterize and provide a fast geometric construction for
the support area generated by valid contact forces, with no other constraint on the
robot motion. We call this set the full support area. Next, we consider the control
of humanoid robots by using the linear pendulum mode (LPM). We observe that the
constraints stemming from the LPM induce a shrinking of the support area, even for
walking on horizontal floors. We propose an algorithm to compute the new area,
which we call the pendular support area. We show that, in the LPM, having the ZMP
in the pendular support area is a necessary and sufficient condition for contact
stability. Based on these developments, we implement a whole-body controller and
generate feasible multicontact motions where an HRP-4 humanoid locomotes in
challenging multicontact scenarios.
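The stability criterion above ultimately reduces to checking whether the ZMP lies
inside a planar support area; the sketch below shows that last geometric step for a
convex polygon given by counter-clockwise vertices. Constructing the full or pendular
support area from frictional contact constraints, which is the paper's actual
contribution, is not reproduced, and the polygon is a made-up example.

import numpy as np

def zmp_inside_area(zmp, vertices, margin=0.0):
    """True if the 2-D point zmp lies inside the convex polygon whose vertices
    are given in counter-clockwise order (optionally with a safety margin)."""
    v = np.asarray(vertices, dtype=float)
    p = np.asarray(zmp, dtype=float)
    edges = np.roll(v, -1, axis=0) - v                   # edge vectors
    to_p = p - v
    cross = edges[:, 0] * to_p[:, 1] - edges[:, 1] * to_p[:, 0]
    return bool(np.all(cross >= margin))

# made-up pendular support area (m) and two candidate ZMPs
area = [(-0.05, -0.10), (0.15, -0.10), (0.15, 0.10), (-0.05, 0.10)]
print(zmp_inside_area((0.05, 0.0), area))    # True: contact-stable
print(zmp_inside_area((0.30, 0.0), area))    # False: would tilt the contacts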
C1 [Caron, Stephane] Univ Montpellier, CNRS, LIRMM, F-34090 Montpellier, France.
[Quang-Cuong Pham] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore
639798, Singapore.
[Nakamura, Yoshihiko] Univ Tokyo, Dept Mechanoinformat, Tokyo, Japan.
RP Caron, S (reprint author), Univ Montpellier, CNRS, LIRMM, F-34090 Montpellier,
France.
EM stephane.caron@normalesup.org; cuong.pham@normalesup.org;
nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014; Pham, Quang-Cuong/G-5050-2012
OI Nakamura, Yoshihiko/0000-0001-7162-5102;
FU NEDO International R&D and Demonstrative Project in Environmental and
Medical Fields/International R&D and Demonstrative Project in Robotics
Fields/"Research and Development of Robot Open Platform for Disaster
Response"
FX This work was supported in part by the 2014-2015 NEDO International R&D
and Demonstrative Project in Environmental and Medical
Fields/International R&D and in part by the Demonstrative Project in
Robotics Fields/" Research and Development of Robot Open Platform for
Disaster Response" (PI: Y. Nakamura). (Corresponding author: S. Caron.)
CR Audren H, 2014, IEEE INT C INT ROBOT, P4030, DOI 10.1109/IROS.2014.6943129
Balkcom DJ, 2002, INT J ROBOT RES, V21, P1053, DOI 10.1177/0278364902021012003
Bretl T, 2008, IEEE T ROBOT, V24, P794, DOI 10.1109/TRO.2008.2001360
Caron S., 2015, ROBOT SCI SYST
Caron S, 2015, IEEE INT CONF ROBOT, P5107, DOI 10.1109/ICRA.2015.7139910
Cisneros R, 2014, 2014 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND
AUTOMATION (IEEE ICMA 2014), P119, DOI 10.1109/ICMA.2014.6885682
Englsberger J, 2015, IEEE T ROBOT, V31, P355, DOI 10.1109/TRO.2015.2405592
Escande A, 2013, ROBOT AUTON SYST, V61, P428, DOI 10.1016/j.robot.2013.01.008
Fukuda K., 1996, Combinatorics and Computer Science. 8th Franco-Japanese and 4th
Franco-Chinese Conference. Selected Papers, P91
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
Harada K, 2006, IEEE T ROBOT, V22, P568, DOI 10.1109/TRO.2006.870649
Hirukawa H, 2006, IEEE INT CONF ROBOT, P1976, DOI 10.1109/ROBOT.2006.1641995
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Inomata Kentaro, 2010, Proceedings of the 11th IEEE International Workshop on
Advanced Motion Control (AMC 2010), P402, DOI 10.1109/AMC.2010.5464098
Kagami S, 2002, AUTON ROBOT, V12, P71, DOI 10.1023/A:1013210909840
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
KAJITA S, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-3, P1405, DOI 10.1109/ROBOT.1991.131811
Kajita S., 2003, P IEEE INT C ROB AUT, V2, P1620
Lee SH, 2010, IEEE INT C INT ROBOT, P3157, DOI 10.1109/IROS.2010.5650416
Mitobe K., 2004, P IEEE RSJ INT C INT, V3, P2253
Morisawa M, 2005, IEEE INT CONF ROBOT, P2405
Morisawa M, 2014, 2014 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION
(SII), P34, DOI 10.1109/SII.2014.7028007
Nagasaka K., 2012, P ROB S, V17, P134
Ott C., 2011, P IEEE RAS INT C HUM, P26
Pang JS, 2000, Z ANGEW MATH MECH, V80, P643, DOI 10.1002/1521-
4001(200010)80:10<643::AID-ZAMM643>3.0.CO;2-E
Qiu Z., 2011, P INT S DIG HUM MOD
Pham QC, 2017, INT J ROBOT RES, V36, P44, DOI 10.1177/0278364916675419
Pham QC, 2015, IEEE-ASME T MECH, V20, P3257, DOI 10.1109/TMECH.2015.2409479
Pham QC, 2014, IEEE T ROBOT, V30, P1533, DOI 10.1109/TRO.2014.2351113
Righetti L, 2013, INT J ROBOT RES, V32, P280, DOI 10.1177/0278364912469821
Saida T., 2003, P IEEE INT C ROB AUT, V3, P3815
Sardain P, 2004, IEEE T SYST MAN CY A, V34, P630, DOI 10.1109/TSMCA.2004.832811
Sato T, 2011, IEEE T IND ELECTRON, V58, P1385, DOI 10.1109/TIE.2010.2050753
Shibuya M., 2006, P 32 ANN C IEEE IND, P4094
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
Tedrake R, 2015, IEEE-RAS INT C HUMAN, P936, DOI 10.1109/HUMANOIDS.2015.7363473
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Wieber PB, 2006, LECT NOTES CONTR INF, V340, P411
Zhao Y., 2012, P IEEE RAS INT C HUM, P726
Zheng Y, 2015, IEEE T ROBOT, V31, P988, DOI 10.1109/TRO.2015.2451411
NR 41
TC 1
Z9 1
U1 5
U2 7
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD FEB
PY 2017
VL 33
IS 1
BP 67
EP 80
DI 10.1109/TRO.2016.2623338
PG 14
WC Robotics
SC Robotics
GA EM9RB
UT WOS:000395647700005
DA 2018-01-22
ER

PT J
AU Falotico, E
Cauli, N
Kryczka, P
Hashimoto, K
Berthoz, A
Takanishi, A
Dario, P
Laschi, C
AF Falotico, Egidio
Cauli, Nino
Kryczka, Przemyslaw
Hashimoto, Kenji
Berthoz, Alain
Takanishi, Atsuo
Dario, Paolo
Laschi, Cecilia
TI Head stabilization in a humanoid robot: models and implementations
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Humanoid robot; Head stabilization; Gaze stabilization; VOR; VCR
ID VESTIBULOOCULAR REFLEX; GAZE STABILIZATION; LOCOMOTOR TASKS; WALKING;
MECHANISMS; STABILITY; MOVEMENTS; FEEDBACK; SURFACE; BODY
AB Neuroscientific studies show that humans tend to stabilize their head
orientation while accomplishing a locomotor task. This is beneficial for image
stabilization and, in general, for keeping a reference frame for the body. In robotics,
too, head stabilization during robot walking provides advantages in robot vision
and gaze-guided locomotion. In order to obtain the head movement behaviors found in
human walk, it is necessary and sufficient to be able to control the orientation
(roll, pitch and yaw) of the head in space. Based on these principles, three
controllers have been designed: two classic robotic controllers, an inverse-kinematics-
based controller and an inverse kinematics differential controller, and a bio-inspired
adaptive controller based on feedback error learning. The controllers use inertial
feedback from an IMU sensor and control the neck joints in order to align the head
orientation with the global orientation reference. We
present the results for the head stabilization controllers, on two sets of
experiments, validating the robustness of the proposed control methods. In
particular, we focus our analysis on the effectiveness of the bio-inspired adaptive
controller against the classic robotic controllers. The first set of experiments,
conducted on a simulated robot, focused on the controllers' response to a set of
disturbance frequencies and to a step function. The other set of experiments was
carried out on the SABIAN robot, where these controllers were implemented in
conjunction with a model of the vestibulo-ocular reflex (VOR) and opto-kinetic
reflex (OKR). Such a setup makes it possible to compare the performance of the considered
head stabilization controllers in conditions which mimic the human stabilization
mechanisms composed of the joint effect of VOR, OKR and stabilization of the head.
The results show that the bio-inspired adaptive controller is more beneficial for
the stabilization of the head in tasks involving a sinusoidal torso disturbance,
and it shows performance comparable to that of the inverse kinematics controller in the case
of the step response and the locomotion experiments conducted on the real robot.
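   The feedback-error-learning idea mentioned above can be sketched, under assumptions,
   for a single neck joint: the classic feedback command doubles as the error signal that
   trains a linear feedforward model online. This Python sketch is illustrative only; the
   gains, regressor, and learning rate are hypothetical and do not reproduce the authors'
   controller.

   import numpy as np

   def fel_step(w, features, target, measured, kp=5.0, lr=0.01):
       # One feedback-error-learning step for a single neck joint.
       # w        : feedforward weights, adapted online
       # features : regressor vector (e.g. desired orientation and its derivatives)
       # target   : desired head orientation from the global reference
       # measured : current head orientation from the IMU
       u_fb = kp * (target - measured)        # classic feedback term
       u_ff = float(np.dot(w, features))      # learned feedforward term
       # The feedback signal is used as the training error of the feedforward model.
       w = w + lr * u_fb * features
       return u_fb + u_ff, w

   # Hypothetical usage over one control cycle:
   w = np.zeros(3)
   features = np.array([0.10, 0.00, -0.05])
   command, w = fel_step(w, features, target=0.10, measured=0.07)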
C1 [Falotico, Egidio; Cauli, Nino; Dario, Paolo; Laschi, Cecilia] Scuola Super Sant
Anna, BioRobot Inst, Viale Rinaldo Piaggio 34, I-56025 Pontedera, PI, Italy.
[Kryczka, Przemyslaw] Ist Italiano Tecnol, Dept Adv Robot, Via Morego 30, I-
16163 Genoa, Italy.
[Hashimoto, Kenji] Waseda Univ, Fac Sci & Engn, Shinjuku Ku, 41-304,17 Kikui
Cho, Tokyo 1620044, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Shinjuku Ku, 41-304,17
Kikui Cho, Tokyo 1620044, Japan.
[Berthoz, Alain] Coll France, CNRS, LPPA, 11 Pl Marcelin Berthelot, F-75005
Paris, France.
RP Falotico, E (reprint author), Scuola Super Sant Anna, BioRobot Inst, Viale
Rinaldo Piaggio 34, I-56025 Pontedera, PI, Italy.
EM egidio.falotico@sssup.it; nino.cauli@sssup.it;
k-hashimoto@takanishi.mech.waseda.ac.jp;
alain.berthoz@college-de-france.fr; takanisi@waseda.jp;
paolo.dario@sssup.it; cecilia.laschi@sssup.it
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766; Cauli, Nino/0000-0002-9611-0655;
Laschi, Cecilia/0000-0001-5248-1043
FU European Commission in the ICT STREP RoboSoM Project [248366]; Italian
Ministry of Foreign Affairs, General Directorate for the Promotion of
the "Country System", Bilateral and Multilateral Scientific and
Technological Cooperation Unit
FX This work is supported by European Commission in the ICT STREP RoboSoM
Project under Contract No. 248366. The authors would like to thank the
Italian Ministry of Foreign Affairs, General Directorate for the
Promotion of the "Country System", Bilateral and Multilateral Scientific
and Technological Cooperation Unit, for the support through the Joint
Laboratory on Biorobotics Engineering Project.
CR BARNES GR, 1993, PROG NEUROBIOL, V41, P435, DOI 10.1016/0301-0082(93)90026-O
Beira R, 2006, IEEE INT CONF ROBOT, P94, DOI 10.1109/ROBOT.2006.1641167
Benallegue Mehdi, 2013, 2013 IEEE International Conference on Robotics and
Automation (ICRA), P5638, DOI 10.1109/ICRA.2013.6631387
Bernardin D, 2012, EXP BRAIN RES, V223, P65, DOI 10.1007/s00221-012-3241-2
Berthoz A, 2002, BRAINS SENSE MOVEMEN
Falotico E., 2012, 2012 RO-MAN: The 21st IEEE International Symposium on Robot
and Human Interactive Communication, P449, DOI 10.1109/ROMAN.2012.6343793
Falotico Egidio, 2011, 2011 11th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2011), P440, DOI 10.1109/Humanoids.2011.6100912
Farkhatdinov Ildar, 2011, 2011 11th IEEE-RAS International Conference on
Humanoid Robots (Humanoids 2011), P147, DOI 10.1109/Humanoids.2011.6100859
Franchi E, 2010, 2010 10th IEEE-RAS International Conference on Humanoid Robots
(Humanoids 2010), P251, DOI 10.1109/ICHR.2010.5686329
Franklin G. F., 2002, FEEDBACK CONTROL DYN
Grasso R, 1998, NEUROSCI LETT, V253, P115, DOI 10.1016/S0304-3940(98)00625-9
Hashimoto K, 2012, IEEE INT C INT ROBOT, P2064, DOI 10.1109/IROS.2012.6385684
Hicheur H, 2005, NEUROSCI LETT, V383, P87, DOI 10.1016/j.neulet.2005.03.046
Hirasaki E, 1999, EXP BRAIN RES, V127, P117, DOI 10.1007/s002210050781
Imai T, 2001, EXP BRAIN RES, V136, P1, DOI 10.1007/s002210000533
Kadone H, 2010, 2010 IEEE RO-MAN, P552, DOI 10.1109/ROMAN.2010.5598631
Kang HJ, 2012, P IEEE RAS-EMBS INT, P669, DOI 10.1109/BioRob.2012.6290870
Kavanagh J, 2006, EXP BRAIN RES, V172, P454, DOI 10.1007/s00221-006-0353-6
Kawato M., 1990, ADV NEURAL COMPUTERS, V6, P365
Kryczka P, 2012, P IEEE RAS-EMBS INT, P675, DOI 10.1109/BioRob.2012.6290906
Kryczka P, 2012, IEEE INT C INT ROBOT, P2076, DOI 10.1109/IROS.2012.6386177
MacLellan MJ, 2006, EXP BRAIN RES, V173, P531, DOI 10.1007/s00221-006-0398-6
MacLellan MJ, 2006, EXP BRAIN RES, V173, P521, DOI 10.1007/s00221-006-0399-5
Marcinkiewicz Marek, 2009, 2009 IEEE International Conference on Robotics and
Automation (ICRA), P2512, DOI 10.1109/ROBOT.2009.5152685
Moore ST, 2001, ANN NY ACAD SCI, V942, P139
Moore ST, 1999, EXP BRAIN RES, V129, P347, DOI 10.1007/s002210050903
Ogura Y, 2006, IEEE INT CONF ROBOT, P76
Oliveira Miguel, 2011, International Journal of Natural Computing Research, V2,
P39, DOI 10.4018/jncr.2011010103
Porrill J, 2004, P ROY SOC B-BIOL SCI, V271, P789, DOI 10.1098/rspb.2003.2658
POZZO T, 1990, EXP BRAIN RES, V82, P97
POZZO T, 1991, EXP BRAIN RES, V85, P208
Santos C., 2009, 2009 IEEE INT C ROB, P2294
Schweigart G, 1997, VISION RES, V37, P1643, DOI 10.1016/S0042-6989(96)00315-X
Shibata T, 2001, ADAPT BEHAV, V9, P189, DOI 10.1177/10597123010093005
Shibata T, 2001, NEURAL NETWORKS, V14, P201, DOI 10.1016/S0893-6080(00)00084-8
Sreenivasa MN, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT
ROBOTS AND SYSTEMS, P4451, DOI 10.1109/IROS.2009.5354503
Taiana M, 2010, ROBOT AUTON SYST, V58, P784, DOI 10.1016/j.robot.2010.02.010
Viollet S, 2005, ROBOT AUTON SYST, V50, P147, DOI 10.1016/j.robot.2004.09.014
Yamada H., 2007, P IROS 2007, P3566
NR 39
TC 2
Z9 2
U1 4
U2 10
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD FEB
PY 2017
VL 41
IS 2
BP 349
EP 365
DI 10.1007/s10514-016-9583-z
PG 17
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA EK3HO
UT WOS:000393817500005
DA 2018-01-22
ER

PT J
AU Zhang, XZ
Zhang, JF
Zhong, JP
AF Zhang, Xinzheng
Zhang, Jianfen
Zhong, Junpei
TI Skill Learning for Intelligent Robot by Perception-Action Integration: A
View from Hierarchical Temporal Memory
SO COMPLEXITY
LA English
DT Article
ID ACTION CYCLE; SENSORIMOTOR; REPRESENTATIONS
AB Learning skills autonomously through interactions with the environment is a crucial
ability for an intelligent robot. Perception-action integration, or the sensorimotor
cycle, is an important issue in imitation learning and offers a natural mechanism that
avoids complex hand-programming. Recently, neurocomputing models and developmental
intelligence methods have emerged as a new trend for implementing robot skill learning.
In this paper, based on research on models of the human neocortex, we present a skill
learning method that uses a perception-action integration strategy from the perspective
of hierarchical temporal memory (HTM) theory. Sequential sensor data representing a
certain skill are received from an RGB-D camera and encoded as a sequence of Sparse
Distributed Representation (SDR) vectors. The sequential SDR vectors are treated as the
inputs of the perception-action HTM. The HTM learns sequences of SDRs and predicts what
the next input SDR will be, storing the transitions between the currently perceived
sensor data and the next predicted actions. We evaluated the performance of the proposed
framework by learning a hand-shaking skill on a humanoid NAO robot. The experimental
results show that the skill learning method designed in this paper is promising.
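   For orientation, a minimal Python sketch of the SDR idea is given below: a simple
   scalar encoder and an overlap score, not the NuPIC encoders used in the article; the
   vector size and sparsity are hypothetical.

   import numpy as np

   def encode_sdr(value, v_min=0.0, v_max=1.0, size=256, active_bits=20):
       # Encode a scalar as a Sparse Distributed Representation: a binary vector
       # with a contiguous block of active bits whose position tracks the value.
       value = np.clip(value, v_min, v_max)
       start = int((value - v_min) / (v_max - v_min) * (size - active_bits))
       sdr = np.zeros(size, dtype=np.uint8)
       sdr[start:start + active_bits] = 1
       return sdr

   def overlap(a, b):
       # Overlap score between two SDRs; HTM-style matching relies on such scores.
       return int(np.sum(a & b))

   s1 = encode_sdr(0.30)
   s2 = encode_sdr(0.32)
   print(overlap(s1, s2))  # nearby values share many active bits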
C1 [Zhang, Xinzheng; Zhang, Jianfen] Jinan Univ, Sch Elect & Informat Engn, Zhuhai,
Peoples R China.
[Zhong, Junpei] Natl Inst Adv Ind Sci & Technol, Tokyo, Japan.
RP Zhang, JF (reprint author), Jinan Univ, Sch Elect & Informat Engn, Zhuhai,
Peoples R China.
EM eejfzhang@gmail.com
FU National Natural Science Foundation of China [61203338]; NuPIC Open
Source Project
FX This work was supported by National Natural Science Foundation of China
under Grant no. 61203338. The authors thank the NuPIC Open Source
Project and all the contributors of the NuPIC codes.
CR Ahmad S, 2017, NEUROCOMPUTING, V262, P134, DOI 10.1016/j.neucom.2017.04.070
Di Paolo EA, 2014, FRONT HUM NEUROSCI, V8, DOI 10.3389/fnhum.2014.00551
Blakemore SJ, 2001, NAT REV NEUROSCI, V2, P561
Cassimatis NL, 2004, ROBOT AUTON SYST, V49, P13, DOI 10.1016/j.robot.2004.07.014
Cutsuridis V, 2013, COGN COMPUT, V5, P383, DOI 10.1007/s12559-013-9218-z
Do M, 2014, IEEE INT CONF ROBOT, P1858, DOI 10.1109/ICRA.2014.6907103
Du Y, 2015, PROC CVPR IEEE, P1110, DOI 10.1109/CVPR.2015.7298714
Fuster JM, 2004, TRENDS COGN SCI, V8, P143, DOI 10.1016/j.tics.2004.02.004
Guedjou H, 2016, 2016 JOINT IEEE INTERNATIONAL CONFERENCE ON DEVELOPMENT AND
LEARNING AND EPIGENETIC ROBOTICS (ICDL-EPIROB), P193, DOI
10.1109/DEVLRN.2016.7846817
Hawkins J., 2004, INTELLIGENCE
Hawkins J., 2016, BIOL MACHINE INTELLI
Huber D, 2012, NATURE, V484, P473, DOI 10.1038/nature11039
Knoblich G, 2001, PSYCHOL SCI, V12, P467, DOI 10.1111/1467-9280.00387
Kording KP, 2004, NATURE, V427, P244, DOI 10.1038/nature02169
Kurzweil Ray, 2013, CREATE MIND SECRET H
Lalanne C, 2004, J PHYSIOLOGY-PARIS, V98, P265, DOI
10.1016/j.jphysparis.2004.06.001
Monroe AE, 2015, PLOS ONE, V10, DOI 10.1371/journal.pone.0119841
Piater J, 2011, INT J ROBOT RES, V30, P294, DOI 10.1177/0278364910382464
Rozado D, 2012, NEUROCOMPUTING, V79, P75, DOI 10.1016/j.neucom.2011.10.005
Sadeghipour A, 2011, COGN COMPUT, V3, P419, DOI 10.1007/s12559-010-9082-z
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schwartz JL, 2012, J NEUROLINGUIST, V25, P336, DOI
10.1016/j.jneuroling.2009.12.004
Shi T, 2015, J INTELL FUZZY SYST, V28, P1955, DOI 10.3233/IFS-141204
Sigaud O, 2010, STUD COMPUT INTELL, V264, P1, DOI 10.1007/978-3-642-05181-4
WIESER E, 2016, FRANCE, P43
Wolpert DM, 2011, NAT REV NEUROSCI, V12, P739, DOI 10.1038/nrn3112
Yea-Shuan Huang, 2013, IAENG International Journal of Computer Science, V40, P87
Zhong JP, 2014, FRONT BEHAV NEUROSCI, V8, DOI 10.3389/fnbeh.2014.00022
NR 28
TC 0
Z9 0
U1 1
U2 1
PU WILEY-HINDAWI
PI LONDON
PA ADAM HOUSE, 3RD FL, 1 FITZROY SQ, LONDON, WIT 5HE, ENGLAND
SN 1076-2787
EI 1099-0526
J9 COMPLEXITY
JI Complexity
PY 2017
AR 7948684
DI 10.1155/2017/7948684
PG 16
WC Mathematics, Interdisciplinary Applications; Multidisciplinary Sciences
SC Mathematics; Science & Technology - Other Topics
GA FN4JC
UT WOS:000415970500001
OA gold
DA 2018-01-22
ER

PT J
AU Tajitsu, Y
AF Tajitsu, Y.
TI Piezoelectric poly-L-lactic acid fabric and its application to control
of humanoid robot
SO FERROELECTRICS
LA English
DT Article; Proceedings Paper
CT Joint 25th IEEE International Symposium on the Applications of
Ferroelectrics (ISAF) / 13th European Conference on Application of Polar
Dielectrics (ECAPD) / Piezoelectric Force Microscopy Workshop (PFM)
CY AUG 21-25, 2016
CL Tech Univ Darmstadt, Darmstadt, GERMANY
SP IEEE
HO Tech Univ Darmstadt
DE Piezoelectric polymer; fiber; chiral polymer; poly-L-lactic acid;
sensor; fabric
ID WAVE-GUIDES; LITHIUM-NIOBATE; PHOTOREFRACTIVE PROPERTIES; CRYSTALS
AB We report the potential of a system that enables the monitoring of complex human
movements via new smart fabrics based on a piezoelectric poly-L-lactic acid (PLLA)
fabric capable of sensing complex human motion. In particular, piezoelectric PLLA
fabrics with plain, twill, and satin weaves were produced by Teijin Limited, Japan, a
joint research company. First, a smart fabric was developed that consisted of pieces of
the piezoelectric PLLA fabric with satin, plain, and twill weaves joined in the
horizontal direction. The smart fabric, designed to accurately detect the twisting,
bending, and elongation of the fabric itself, was fabricated using a technique for
sewing Japanese kimonos.
Finally, we developed a prototype system that allows simple human motion to be
detected through the motion sensing of clothes capable of being worn on the joints,
such as the elbows, shoulders, knees, and waist, obtained by sewing together pieces
of the smart fabric. This motion was linked with that of a humanoid robot.
C1 [Tajitsu, Y.] Kansai Univ, Fac Sci & Engn, Elect Engn Dept, 3-3-35 Yamate,
Suita, Osaka 5648680, Japan.
RP Tajitsu, Y (reprint author), Kansai Univ, Fac Sci & Engn, Elect Engn Dept, 3-3-
35 Yamate, Suita, Osaka 5648680, Japan.
EM tajitsu@kansai-u.ac.jp
FU Ministry of Education, Culture, Sports, Science and Technology of Japan
[24655108, 15K13714]
FX This work was supported in part by Grants-in-Aid for Scientific Research
(Nos. 24655108 and 15K13714) from the Ministry of Education, Culture,
Sports, Science and Technology of Japan.
CR Bazzan M., 2015, APPL PHYS REV, V2
Chen F, 2007, OPT MATER, V29, P1523, DOI 10.1016/j.optmat.2006.08.001
Courjal N, 2015, OPT EXPRESS, V23, P13983, DOI 10.1364/OE.23.013983
Davydov SA, 2010, PHYS WAVE PHENOM, V18, P1, DOI 10.3103/S1541308X10010012
He RY, 2014, OPT EXPRESS, V22, P31293, DOI 10.1364/OE.22.031293
Imbrock J, 2002, J OPT SOC AM B, V19, P1822, DOI 10.1364/JOSAB.19.001822
Kanshu A, 2009, APPL PHYS B-LASERS O, V95, P537, DOI 10.1007/s00340-009-3465-4
Kip D, 1998, APPL PHYS B-LASERS O, V67, P131, DOI 10.1007/s003400050485
KRATZIG E, 1990, FERROELECTRICS, V104, P257
Kroesen S, 2014, OPT EXPRESS, V22, P23339, DOI 10.1364/OE.22.023339
Menard JM, 2007, APPL OPTICS, V46, P2119, DOI 10.1364/AO.46.002119
Milam D, 1998, APPL OPTICS, V37, P546, DOI 10.1364/AO.37.000546
Peithmann K, 2000, PHYS REV B, V61, P4615, DOI 10.1103/PhysRevB.61.4615
Popov V. L., 1991, Soviet Physics - Technical Physics, V36, P1380
Rabus DG, 2007, IEEE J SEL TOP QUANT, V13, P1249, DOI 10.1109/JSTQE.2007.906043
Yamada H, 2006, IEEE J SEL TOP QUANT, V12, P1371, DOI 10.1109/JSTQE.2006.880611
NR 16
TC 0
Z9 0
U1 0
U2 0
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0015-0193
EI 1563-5112
J9 FERROELECTRICS
JI Ferroelectrics
PY 2017
VL 515
IS 1
SI SI
BP 44
EP 58
DI 10.1080/00150193.2017.1360107
PG 15
WC Materials Science, Multidisciplinary; Physics, Condensed Matter
SC Materials Science; Physics
GA FL3TW
UT WOS:000414148100007
DA 2018-01-22
ER

PT J
AU Van Heerden, K
AF Van Heerden, Kirill
TI Real-Time Variable Center of Mass Height Trajectory Planning for
Humanoids Robots
SO IEEE ROBOTICS AND AUTOMATION LETTERS
LA English
DT Article
DE Humanoid and bipedal locomotion; motion and path planning
ID OPTIMIZATION; WALKING
AB This paper presents a trajectory planner for humanoid robots based on nonlinear model
predictive control that can generate trajectories with variable center of mass (COM)
heights and adaptive foot positions in real time. This is done by analytically
formulating the zero moment point constraint as a quadratic function that includes the
vertical COM state. This constraint is then combined with linear foot position
constraints, constraints on the vertical COM state, and a quadratic goal function to
form a quadratically constrained quadratic program. This program is then solved via
sequential quadratic programming. It can be solved in less than 5 ms on average with a
3.4 GHz CPU by taking advantage of the fact that the analytic problem formulation allows
the analytical gradients to be computed; thus time-consuming numerical differentiation
is avoided and the gradient computation time is reduced from 12.4 to 0.12 ms. An
experiment on a 3.5 kg robot is conducted to verify the effectiveness of this algorithm
in walking on terrain where the two feet are at different heights. Lastly, it is shown
via simulation that the proposed approach can improve disturbance recovery capacity by
up to 7%.
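   A toy stand-in for the planner's quadratically constrained quadratic program, solved
   with SciPy's SLSQP (a sequential quadratic programming method) and analytic gradients,
   is sketched below; the cost, the constraint matrix, and the two-dimensional decision
   variable are hypothetical simplifications of the article's formulation.

   import numpy as np
   from scipy.optimize import minimize

   # Track a goal state while a quadratic (ZMP-like) constraint keeps the solution
   # inside a region: minimize ||x - x_goal||^2  s.t.  x^T Q x <= r2.
   x_goal = np.array([0.08, 0.02])
   Q = np.diag([1.0, 4.0])
   r2 = 0.004

   def cost(x):
       d = x - x_goal
       return float(d @ d)

   def cost_grad(x):              # analytic gradient, no numerical differencing
       return 2.0 * (x - x_goal)

   def constr(x):                 # SLSQP expects inequality constraints as g(x) >= 0
       return r2 - float(x @ Q @ x)

   def constr_grad(x):
       return -2.0 * Q @ x

   res = minimize(cost, x0=np.zeros(2), jac=cost_grad, method="SLSQP",
                  constraints=[{"type": "ineq", "fun": constr, "jac": constr_grad}])
   print(res.x)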
C1 [Van Heerden, Kirill] Ritsumeikan Univ, Dept Robot, Kusatsu 5258577, Japan.
RP Van Heerden, K (reprint author), Ritsumeikan Univ, Dept Robot, Kusatsu 5258577,
Japan.
EM kirillvh@ieee.org
FU JSPS KAKENHI [14536649]
FX This work was supported by JSPS KAKENHI under Grant 14536649.
CR Audren H, 2014, IEEE INT C INT ROBOT, P4030, DOI 10.1109/IROS.2014.6943129
Diedam H, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P1121, DOI 10.1109/IROS.2008.4651055
Englsberger J, 2013, IEEE INT C INT ROBOT, P2600, DOI 10.1109/IROS.2013.6696723
GOLDFARB D, 1983, MATH PROGRAM, V27, P1, DOI 10.1007/BF02591962
Herdt A., 2012, THESIS
Herzog A, 2015, IEEE-RAS INT C HUMAN, P874, DOI 10.1109/HUMANOIDS.2015.7363464
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Kajita S, 2005, IEEE INT CONF ROBOT, P616
KAJITA S, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-3, P1405, DOI 10.1109/ROBOT.1991.131811
Konno A, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P647, DOI 10.1109/IROS.2008.4650751
Kormushev P, 2011, IEEE INT C INT ROBOT, P318, DOI 10.1109/IROS.2011.6048037
Liu YP, 2015, IEEE INT CONF ROBOT, P5710, DOI 10.1109/ICRA.2015.7139999
Nelson G., 2012, J ROBOTICS SOC JAPAN, V30, P372
Nishiwaki K, 2011, IEEE INT CONF ROBOT, P2029
Nocedal J, 2006, SPRINGER SER OPER RE, P1, DOI 10.1007/978-0-387-40065-5
Pratt J, 2006, IEEE-RAS INT C HUMAN, P200, DOI 10.1109/ICHR.2006.321385
Schulman J, 2014, INT J ROBOT RES, V33, P1251, DOI 10.1177/0278364914528132
Stephens B., 2011, THESIS
Tajima R., 2009, P IEEE INT C ROB AUT, P1571
Van Heerden K, 2015, ADV ROBOTICS, V29, P413, DOI 10.1080/01691864.2014.987166
Van Heerden K, 2015, IEEE INT CONF ROBOT, P6275, DOI 10.1109/ICRA.2015.7140080
Wieber PB, 2006, IEEE-RAS INT C HUMAN, P137, DOI 10.1109/ICHR.2006.321375
Zhao Y, 2012, IEEE-RAS INT C HUMAN, P726, DOI 10.1109/HUMANOIDS.2012.6651600
NR 23
TC 3
Z9 3
U1 0
U2 0
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2377-3766
J9 IEEE ROBOT AUTOM LET
JI IEEE Robot. Autom. Lett.
PD JAN
PY 2017
VL 2
IS 1
BP 135
EP 142
DI 10.1109/LRA.2016.2579741
PG 8
WC Robotics
SC Robotics
GA FK8BL
UT WOS:000413732300019
DA 2018-01-22
ER

PT J
AU Ito, S
Nishio, S
Fukumoto, Y
Matsushita, K
Sasaki, M
AF Ito, Satoshi
Nishio, Shingo
Fukumoto, Yuuki
Matsushita, Kojiro
Sasaki, Minoru
TI Gravity Compensation and Feedback of Ground Reaction Forces for Biped
Balance Control
SO APPLIED BIONICS AND BIOMECHANICS
LA English
DT Article
ID HUMANOID ROBOT; WALKING
AB This paper considers the balance control of a biped robot under a constant external
force or on sloped ground. We have proposed a control method with feedback of the ground
reaction forces and have realized adaptive posture changes that ensure the stability of
the robot. However, fast responses have not been obtained because effective control is
achieved by integral feedback, which entails a time delay necessary for error
accumulation. To improve this response, here we introduce gravity compensation in a
feedforward manner. The stationary state and its stability are analyzed based on the
dynamic equations, and the robustness as well as the response is evaluated using
computer simulations. Finally, the adaptive behaviors of the robot are confirmed by
standing experiments on a slope.
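   A minimal, assumption-laden sketch of the control idea described above (feedforward
   gravity compensation plus integral feedback of the ground-reaction-force error) for a
   single ankle-like joint is given below; the single-joint model, parameters, and gains
   are hypothetical.

   import numpy as np

   def balance_torque(q, grf_error, integ, m=30.0, g=9.81, l_com=0.5,
                      ki=2.0, dt=0.005):
       # q         : joint angle of the simplified inverted-pendulum model [rad]
       # grf_error : error between desired and measured ground reaction force
       # integ     : accumulated integral of the GRF error
       tau_gravity = m * g * l_com * np.sin(q)     # feedforward gravity compensation
       integ = integ + grf_error * dt              # accumulate the GRF error
       tau_fb = ki * integ                         # integral feedback term
       return tau_gravity + tau_fb, integ

   # Hypothetical single control step:
   tau, integ = balance_torque(q=0.05, grf_error=2.0, integ=0.0)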
C1 [Ito, Satoshi; Nishio, Shingo; Fukumoto, Yuuki; Matsushita, Kojiro; Sasaki,
Minoru] Gifu Univ, Fac Engn, 1-1 Yanagido, Gifu 5011193, Japan.
RP Ito, S (reprint author), Gifu Univ, Fac Engn, 1-1 Yanagido, Gifu 5011193, Japan.
EM satoshi@gifu-u.ac.jp
CR Alcaraz-Jimenez JJ, 2013, INT J ROBOT RES, V32, P1074, DOI
10.1177/0278364913487566
Andre J, 2015, J INTELL ROBOT SYST, V80, P625, DOI 10.1007/s10846-015-0196-0
Aoyama T, 2009, IEEE-ASME T MECH, V14, P712, DOI 10.1109/TMECH.2009.2032777
Beranek R., 2011, 2011 IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS 2011), P2261, DOI 10.1109/IROS.2011.6048567
Dang D., 2011, 2011 11 IEEE RAS INT, P676
Gawthrop P, 2014, BIOL CYBERN, V108, P159, DOI 10.1007/s00422-014-0587-5
Geng T, 2010, IEEE-ASME T MECH, V15, P253, DOI 10.1109/TMECH.2009.2024742
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
HIRAI K, 1998, P IEEE INT C ROB AUT, V2, P1321
Huang Q, 2001, IEEE T ROBOTIC AUTOM, V17, P280, DOI 10.1109/70.938385
Ibanez A, 2013, NATURE INSPIRED MOBILE ROBOTICS, P519
Ito S., 2001, P ISHF2001, P515
Ito S., 2014, PRIV STAT DAT 2014 I, P1
Ito S, 2013, NATURE INSPIRED MOBILE ROBOTICS, P529
Ito S, 2008, J COMPUT, V3, P23, DOI 10.4304/jcp.3.8.23-31
Kagami S, 2002, AUTON ROBOT, V12, P71, DOI 10.1023/A:1013210909840
Kajita S, 2010, 2010 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2010), P4489, DOI 10.1109/IROS.2010.5651082
Kim JY, 2007, J INTELL ROBOT SYST, V48, P457, DOI 10.1007/s10846-006-9107-8
Kim Y.-J., 2014, ROBOTICA, V34, P1
Luo X, 2011, J BIONIC ENG, V8, P33, DOI 10.1016/S1672-6529(11)60010-3
Nagasaka K., 1999, P 1999 IEEE INT C SY, V6
Nenchev DN, 2008, ROBOTICA, V26, P643, DOI 10.1017/S0263574708004268
Nishiwaki Koichi, 2011, 2011 11th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2011), P745, DOI 10.1109/Humanoids.2011.6100831
Ott Christian, 2011, 2011 11th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2011), P26, DOI 10.1109/Humanoids.2011.6100882
Prahlad V, 2008, ROBOTICA, V26, P9, DOI 10.1017/S0263574707003542
Sentis L, 2010, IEEE T ROBOT, V26, P483, DOI 10.1109/TRO.2010.2043757
Stephens BJ, 2010, IEEE INT C INT ROBOT, P1248, DOI 10.1109/IROS.2010.5648837
Sugihara T, 2010, IEEE INT CONF ROBOT, P4224, DOI 10.1109/ROBOT.2010.5509270
Takanishi A., 1990, IEEE INT WORKSH INT, P795
Vukobratovic M, 1990, BIPED LOCOMOTION DYN
Vukobratovic M, 2011, J INTELL ROBOT SYST, V63, P211, DOI 10.1007/s10846-010-
9525-5
Wu N, 2013, NATURE INSPIRED MOBILE ROBOTICS, P383
NR 32
TC 0
Z9 0
U1 5
U2 5
PU HINDAWI LTD
PI LONDON
PA ADAM HOUSE, 3RD FLR, 1 FITZROY SQ, LONDON, W1T 5HF, ENGLAND
SN 1176-2322
EI 1754-2103
J9 APPL BIONICS BIOMECH
JI Appl. Bionics Biomech.
PY 2017
AR 5980275
DI 10.1155/2017/5980275
PG 16
WC Engineering, Biomedical; Robotics
SC Engineering; Robotics
GA EV6WL
UT WOS:000401915200001
OA gold
DA 2018-01-22
ER

PT J
AU Yamamoto, K
AF Yamamoto, K.
TI Humanoid motion analysis and control based on COG viscoelasticity
SO ADVANCED ROBOTICS
LA English
DT Article
DE Whole-body control; joint viscoelasticity; inverted pendulum model; fall
detection
ID ROBOT MANIPULATORS; WALKING; GENERATION; SYSTEMS
AB This paper proposes a concept of center of gravity (COG) viscoelasticity to
associate joint viscoelasticity with the inverted pendulum model of humanoid
dynamics. Although COG viscoelasticity is based on the well-known kinematic
relationship between joint stiffness and end-effector stiffness, it provides
practical advantages for both analysis and control of humanoid motions. There are
two main contributions. The first is that the COG viscoelasticity allows us to
analyze fall risk. In a previous study, the author proposed a fall detection method
based on the maximal output admissible (MOA) set, which is computed from feedback
gain of the inverted pendulum model. The COG viscoelasticity associates joint
viscoelasticity with the feedback gain and allows us to compute the corresponding
MOA set when an arbitrary joint viscoelasticity is given. The second contribution
is that the COG viscoelasticity can also be utilized in motion control. After we
design a feedback gain in the inverted pendulum model using control theory, the COG
viscoelasticity directly transforms it into the joint viscoelasticity. The
validity of the COG viscoelasticity is verified with whole-body dynamics
simulations.
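   The kinematic relationship this abstract builds on can be illustrated with the
   standard mapping K_q = J^T K_x J; the Python sketch below applies it to a hypothetical
   COG Jacobian and is not the article's full construction involving the inverted
   pendulum feedback gain.

   import numpy as np

   def joint_viscoelasticity(J, K_cog, D_cog):
       # Map COG-level stiffness/damping to joint-level viscoelasticity through the
       # COG Jacobian J, using the textbook relation K_q = J^T K_x J (and likewise
       # for damping).
       J = np.asarray(J, dtype=float)
       K_q = J.T @ np.asarray(K_cog, dtype=float) @ J
       D_q = J.T @ np.asarray(D_cog, dtype=float) @ J
       return K_q, D_q

   # Example: two joints, one COG coordinate (all values hypothetical).
   J = np.array([[0.4, 0.25]])
   K_q, D_q = joint_viscoelasticity(J, np.array([[400.0]]), np.array([[40.0]]))
   print(K_q)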
C1 [Yamamoto, K.] Univ Tokyo, Dept Mech Engn, Tokyo, Japan.
RP Yamamoto, K (reprint author), Univ Tokyo, Dept Mech Engn, Tokyo, Japan.
EM k.yamamoto@nml.t.u-tokyo.ac.jp
FU JSPS KAKENHI [16H05874]
FX This research is supported by JSPS KAKENHI [grant number 16H05874].
CR Albu-Schaffer A., 2003, P IEEE INT C ROB AUT, V3, P3704
ATHANS M, 1967, INFORM CONTROL, V11, P592, DOI 10.1016/S0019-9958(67)90803-0
GILBERT EG, 1991, IEEE T AUTOMAT CONTR, V36, P1008, DOI 10.1109/9.83532
Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
Hatanaka T, 2008, AUTOMATICA, V44, P479, DOI 10.1016/j.automatica.2007.03.031
Herdt A, 2010, ADV ROBOTICS, V24, P719, DOI 10.1163/016918610X493552
Hu Y, 2014, IEEE-RAS INT C HUMAN, P881, DOI 10.1109/HUMANOIDS.2014.7041468
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
Kajita S, 2001, IEEE INT CONF ROBOT, P2299, DOI 10.1109/ROBOT.2001.932965
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
Koolen T, 2012, INT J ROBOT RES, V31, P1094, DOI 10.1177/0278364912452673
Mitobe K, 2000, ROBOTICA, V18, P651, DOI 10.1017/S0263574700002708
Myklebust J, 1991, P ANN INT C IEEE ENG, V13, P863
NAKAMURA Y, 1987, INT J ROBOT RES, V6, P3, DOI 10.1177/027836498700600201
Nakamura Y, 2000, IEEE T ROBOTIC AUTOM, V16, P124, DOI 10.1109/70.843167
Pratt J, 2006, IEEE-RAS INT C HUMAN, P200, DOI 10.1109/ICHR.2006.321385
Sadeghian H, 2014, IEEE T ROBOT, V30, P493, DOI 10.1109/TRO.2013.2291630
Salisbury J. K., 1980, Proceedings of the 19th IEEE Conference on Decision &
Control Including the Symposium on Adaptive Processes, P95
Sentis L, 2010, IEEE T ROBOT, V26, P483, DOI 10.1109/TRO.2010.2043757
Stephens B, 2007, IEEE-RAS INT C HUMAN, P589, DOI 10.1109/ICHR.2007.4813931
Sugihara T, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2575
Sugihara T., 2009, P 2009 IEEE INT C RO, P1966
Sugihara T, 2008, ROBOT AUTON SYST, V56, P82, DOI 10.1016/j.robot.2007.09.012
TSUJI T, 1988, PROCEEDINGS OF THE ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE
ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY, PTS 1-4, P624, DOI
10.1109/IEMBS.1988.94795
Tsuji T., 1990, Transactions of the Society of Instrument and Control Engineers,
V26, P557
Tsuji T., 1996, SYSTEMS MAN CYBERN B, V26, P707
Venture G, 2009, INT J ROBOT RES, V28, P1322, DOI 10.1177/0278364909103786
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
WALKER M. W., 1982, ASME J DYNAMIC SYSTE, V104, P205
Wensing PM, 2013, IEEE INT CONF ROBOT, P3103, DOI 10.1109/ICRA.2013.6631008
Yamamoto K, 2016, ROBOT AUTON SYST, V81, P17, DOI 10.1016/j.robot.2016.03.010
NR 31
TC 1
Z9 1
U1 0
U2 0
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2017
VL 31
IS 7
BP 341
EP 354
DI 10.1080/01691864.2016.1270853
PG 14
WC Robotics
SC Robotics
GA ET1LP
UT WOS:000400031200001
DA 2018-01-22
ER

PT J
AU Cui, YD
Matsubara, T
Sugimoto, K
AF Cui, Yunduan
Matsubara, Takamitsu
Sugimoto, Kenji
TI Pneumatic artificial muscle-driven robot control using local update
reinforcement learning
SO ADVANCED ROBOTICS
LA English
DT Article
DE Smooth policy update; dynamic policy programming; robot motor learning
ID SEARCH
AB In this study, a new value-function-based reinforcement learning (RL) algorithm,
Local Update Dynamic Policy Programming (LUDPP), is proposed. It exploits the nature of
the smooth policy update based on the Kullback-Leibler divergence to update its value
function locally, considerably reducing the computational complexity. We first
investigated the learning performance of LUDPP and of other algorithms without smooth
policy updates on pendulum swing-up and n-DOF manipulator reaching tasks in simulation.
Only LUDPP could efficiently and stably learn good control policies in high-dimensional
systems with a limited number of training samples. In a real-world application, we
applied LUDPP to control pneumatic artificial muscle (PAM)-driven robots without
knowledge of the model, which is challenging for traditional methods due to the high
nonlinearities of the PAMs' air pressure dynamics and mechanical structure. LUDPP
successfully achieved one-finger control of the Shadow Dexterous Hand, a PAM-driven
humanoid robot hand, with far lower computational resources than other conventional
value-function-based RL algorithms.
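   A hedged sketch of the dynamic-policy-programming-style update that LUDPP belongs to
   is shown below (tabular action preferences and a Boltzmann soft-max operator); the
   local-update mechanism that gives LUDPP its efficiency is not reproduced, and all
   parameters and table sizes are hypothetical.

   import numpy as np

   def softmax_expectation(prefs, eta=5.0):
       # Boltzmann-weighted mean of action preferences, the soft-max operator
       # behind the smooth (KL-regularized) policy update.
       w = np.exp(eta * (prefs - prefs.max()))
       w /= w.sum()
       return float(w @ prefs)

   def dpp_update(psi, s, a, r, s_next, gamma=0.99, eta=5.0):
       # One tabular update of the action-preference table psi[state, action]:
       # psi(s,a) += r + gamma * M psi(s') - M psi(s), with M the soft-max operator.
       target = r + gamma * softmax_expectation(psi[s_next], eta)
       psi[s, a] += target - softmax_expectation(psi[s], eta)
       return psi

   psi = np.zeros((10, 2))                 # 10 states, 2 actions (hypothetical)
   psi = dpp_update(psi, s=0, a=1, r=1.0, s_next=3)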
C1 [Cui, Yunduan; Matsubara, Takamitsu; Sugimoto, Kenji] Nara Inst Sci & Technol,
Grad Sch Informat Sci, Nara, Japan.
RP Cui, YD (reprint author), Nara Inst Sci & Technol, Grad Sch Informat Sci, Nara,
Japan.
EM cui.yunduan.ck4@is.naist.jp
CR Andoni A., 2009, THESIS
Atkeson CG, 1997, ARTIF INTELL REV, V11, P11, DOI 10.1023/A:1006559212014
Atkinson K E, 2008, INTRO NUMERICAL ANAL
Azar MG, 2011, P INT C ART INT STAT, P119
Azar MG, 2012, J MACH LEARN RES, V13, P3207
BENTLEY JL, 1975, COMMUN ACM, V18, P509, DOI 10.1145/361002.361007
Busoniu L, 2010, P AMER CONTR CONF, P486
Chou CP, 1996, IEEE T ROBOTIC AUTOM, V12, P90, DOI 10.1109/70.481753
Coelho LD, 2010, EXPERT SYST APPL, V37, P499, DOI 10.1016/j.eswa.2009.05.042
Cui Y, 2015, IEEE-RAS INT C HUMAN, P1083, DOI 10.1109/HUMANOIDS.2015.7363503
Daerden F., 2002, EUR J MECH ENV ENG, V47, P11
Datar M., 2004, P 20 ANN S COMP GEOM, P253, DOI DOI 10.1145/997817.997857
Deisenroth MP, 2014, IEEE INT CONF ROBOT, P3876, DOI 10.1109/ICRA.2014.6907421
Farahmand A. M., 2010, P ADV NEUR INF PROC, P568
Fliess M, 2013, INT J CONTROL, V86, P2228, DOI 10.1080/00207179.2013.810345
Gedouin PA, 2011, CONTROL ENG PRACT, V19, P433, DOI
10.1016/j.conengprac.2011.01.005
Kogiso K, 2012, IEEE INT C INT ROBOT, P3714, DOI 10.1109/IROS.2012.6385778
Lagoudakis MG, 2004, J MACH LEARN RES, V4, P1107, DOI 10.1162/1532443041827907
LI L, 2009, P 8 INT JOINT C AUT, V2, P733
Ljspeert AJ, 2008, NEURAL NETWORKS, V21, P642
Milanes V, 2012, IEEE T IND ELECTRON, V59, P620, DOI 10.1109/TIE.2011.2148673
Peters J, 2008, NEUROCOMPUTING, V71, P1180, DOI 10.1016/j.neucom.2007.11.026
Peters J, 2010, PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL
INTELLIGENCE (AAAI-10), P1607
SAWICKI GS, 2005, P 2005 IEEE INT C RE, P206
Schaal S, 2006, ADAPTIVE MOTION OF ANIMALS AND MACHINES, P261, DOI 10.1007/4-
431-31381-8_23
Schaal S, 2010, ROB AUTOM MAG IEEE, V17, P20
Shen Y, 2006, P ADV NEUR INF PROC, P1225
Sutton RS, 1996, ADV NEUR IN, V8, P1038
Sutton RS, 1998, REINFORCEMENT LEARNI
Theodorou EA, 2010, J MACH LEARN RES, V11, P3137
Todorov E., 2006, P ADV NEUR INF PROC, P1369
Tsagarakis NG, 2003, AUTON ROBOT, V15, P21, DOI 10.1023/A:1024484615192
Walker R., 2013, SHADOW DEXTROUS HAND
WATKINS CJCH, 1992, MACH LEARN, V8, P279, DOI 10.1023/A:1022676722315
NR 34
TC 1
Z9 1
U1 14
U2 15
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2017
VL 31
IS 8
BP 397
EP 412
DI 10.1080/01691864.2016.1274680
PG 16
WC Robotics
SC Robotics
GA ET1LY
UT WOS:000400032100001
DA 2018-01-22
ER

PT J
AU Ogata, K
Umino, T
Nakayama, T
Ono, E
Tsuji, T
AF Ogata, Kunihiro
Umino, Tokio
Nakayama, Tsuyoshi
Ono, Eiichi
Tsuji, Toshiaki
TI Dummy humanoid robot simulating several trunk postures and abdominal
shapes - Report of element technologies
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robot; tactile sensor; spinal cord injury; clothing pressure
ID INDIVIDUALS
AB Developing clothing that takes into account the body shape and body functions of a
person with spinal cord injury is an important task. In this study, a dummy robot with a
deformation mechanism was therefore developed for evaluating the comfort level of
clothing. Specifically, a trunk joint mechanism and an abdominal mechanism that can
realize various deformations of the abdominal area and various trunk poses were
developed. The trunk joint mechanism was implemented in order to simulate the seated
posture of persons with spinal cord injury. The abdominal deformation mechanism was
implemented using linear actuators and rotating servomotors in order to simulate the
abdominal obesity of persons with spinal cord injury. Further, a tactile sensor system
was developed for measuring the clothing pressure on the abdominal area and evaluating
the comfort or discomfort of the clothing.
C1 [Ogata, Kunihiro; Tsuji, Toshiaki] Saitama Univ, Grad Sch Sci & Engn, Saitama,
Japan.
[Umino, Tokio] Tokyo Denki Univ, Grad Sch Elect & Elect Engn, Tokyo, Japan.
[Nakayama, Tsuyoshi; Ono, Eiichi] Natl Rehabil Ctr Persons Disabil, Res Inst,
Tokorozawa, Saitama, Japan.
[Ogata, Kunihiro] Natl Inst Adv Ind Sci & Technol, Robot Innovat Res Ctr,
Tsukuba, Ibaraki, Japan.
[Umino, Tokio] Honda Res & Dev Co Ltd, Wako, Saitama, Japan.
RP Ogata, K (reprint author), Saitama Univ, Grad Sch Sci & Engn, Saitama, Japan.
EM ogata.kunihiro@aist.go.jp
OI Tsuji, Toshiaki/0000-0002-4532-4514
FU JSPS KAKENHI [25282181]
FX This work was supported by JSPS KAKENHI [grant number 25282181].
CR Abels A., 2013, J AUTOMATION CONTROL, V1, P132
Defloor T, 1999, APPL NURS RES, V12, P136, DOI 10.1016/S0897-1897(99)80045-7
Hayashi M, 2008, DEV HUMANOID DIS TRI, DOI [10.5772/6629, DOI 10.5772/6629]
Hetz SP, 2009, SPINAL CORD, V47, P550, DOI 10.1038/sc.2008.160
Howe I., 2010, FASHIONING IDENTITY
Imamura Y, 2014, 2014 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION
(SII), P761, DOI 10.1109/SII.2014.7028134
Ishii H, 2013, T JAPANESE SOC MED B, V51, pM
Jongjit Jithathai, 2004, Southeast Asian Journal of Tropical Medicine and Public
Health, V35, P980
Li HY, 2009, PROCEEDINGS OF THE FIBER SOCIETY 2009 SPRING CONFERENCE, VOLS I AND
II, P1089
Maggiali M, 2008, 11 MECH FOR BIENN IN, P1
Maruyama Y, 2008, SPINAL CORD, V46, P494, DOI 10.1038/sj.sc.3102171
Ohmura Y., 2007, IEEE RSJ INT C INT R, P1136
Ohmura Y, 2006, IEEE INT CONF ROBOT, P1348, DOI 10.1109/ROBOT.2006.1641896
Sumiya T, 1997, SPINAL CORD, V35, P595, DOI 10.1038/sj.sc.3100467
[Anonymous], 2011, OFFICIAL CURRICULUM
NR 15
TC 0
Z9 0
U1 4
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2017
VL 31
IS 6
BP 303
EP 310
DI 10.1080/01691864.2016.1269671
PG 8
WC Robotics
SC Robotics
GA ES5KO
UT WOS:000399577800002
DA 2018-01-22
ER

PT J
AU Murooka, M
Ueda, R
Nozawa, S
Kakiuchi, Y
Okada, K
Inaba, M
AF Murooka, Masaki
Ueda, Ryohei
Nozawa, Shunichi
Kakiuchi, Yohei
Okada, Kei
Inaba, Masayuki
TI Global planning of whole-body manipulation by humanoid robot based on
transition graph of object motion and contact switching
SO ADVANCED ROBOTICS
LA English
DT Article
DE Whole-body manipulation; manipulation planning; humanoid robot; graph
search; quadratic programming
AB A humanoid robot has the potential to realize various manipulation strategies such as
lifting, pushing, pivoting, and rolling. Deciding which strategy to apply to an object
manipulation task is difficult because the appropriate manipulation motion varies
greatly depending on the object's properties. We propose a planning method for
whole-body manipulation covering various manipulation strategies. In order to plan the
manipulation at a global scope, we generate a graph representing all the possible object
poses and operation contacts. By searching for a feasible path in the graph, we plan a
sequence of object states satisfying the kinematic and dynamic conditions. Finally, the
robot postures for the whole-body manipulation are generated from the object states. The
proposed method is efficient because the tractable object configuration is planned first
at the global scope and only then is the complicated robot configuration calculated. We
show the effectiveness of the proposed method by planning humanoid robot motions for
various manipulation strategies automatically from the object and robot models.
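   A minimal sketch of the graph-search step (breadth-first search over object states,
   with placeholder feasibility checks) is given below; the neighbors and feasible
   callbacks are hypothetical stand-ins for the pose/contact enumeration and the
   kinematic/dynamic conditions described in the abstract.

   from collections import deque

   def plan_object_states(start, goal, neighbors, feasible):
       # Breadth-first search over a transition graph of object states, e.g.
       # (pose, contact) tuples. neighbors(state) enumerates candidate successor
       # states; feasible(state, next_state) checks the transition conditions.
       frontier = deque([start])
       parent = {start: None}
       while frontier:
           s = frontier.popleft()
           if s == goal:
               path = []
               while s is not None:
                   path.append(s)
                   s = parent[s]
               return list(reversed(path))
           for n in neighbors(s):
               if n not in parent and feasible(s, n):
                   parent[n] = s
                   frontier.append(n)
       return None  # no feasible manipulation sequence found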
C1 [Murooka, Masaki; Ueda, Ryohei; Nozawa, Shunichi; Kakiuchi, Yohei; Okada, Kei;
Inaba, Masayuki] Univ Tokyo, Grad Sch Informat Sci & Technol, Tokyo, Japan.
RP Murooka, M (reprint author), Univ Tokyo, Grad Sch Informat Sci & Technol, Tokyo,
Japan.
EM murooka@jsk.imi.i.u-tokyo.ac.jp
FU [15J07023]
FX This work was supported by Grant-in-Aid for JSPS Research Fellow Grant
Number 15J07023.
CR Aiyama Y, 1993, P IEEE RSJ INT C INT, V1, P136
Berenson D, 2011, INT J ROBOT RES, V30, P1435, DOI 10.1177/0278364910396389
Bouyarmane K, 2012, ADV ROBOTICS, V26, P1099, DOI 10.1080/01691864.2012.686345
Burget F, 2013, IEEE INT CONF ROBOT, P1656, DOI 10.1109/ICRA.2013.6630792
Dalibard S, 2013, INT J ROBOT RES, V32, P1089, DOI 10.1177/0278364913481250
Ferreau HJ, 2014, MATH PROGRAM COMPUT, V6, P327, DOI 10.1007/s12532-014-0071-1
Harada K, 2005, IEEE INT CONF ROBOT, P1712
Harada K, 2002, IEEE T ROBOTIC AUTOM, V18, P597, DOI 10.1109/TRA.2002.802167
Larsen E., 2000, P IEEE INT C ROB AUT, V4, P3719
Lien JM, 2008, COMPUT AIDED GEOM D, V25, P503, DOI 10.1016/j.cagd.2008.05.003
Matsui T, 1994, IEEE 6 S PAR DISTR P
Mittendorfer P, 2015, ADV ROBOTICS, V29, P51, DOI 10.1080/01691864.2014.952493
Mukai T, 2011, P 2011 IEEE INT C RO, P5435
Murooka M, 2014, P 2014 IEEE INT C AU, P263
Murooka M, 2014, J ROB SOC JPN, V32, P595
Murooka M, 2014, IEEE INT CONF ROBOT, P3425, DOI 10.1109/ICRA.2014.6907352
Ohmura Y., 2007, P IEEE RSJ INT C INT, P1136
Simeon T, 2004, INT J ROBOT RES, V23, P729, DOI 10.1177/0278364904045471
Stilman M, 2008, IEEE INT CONF ROBOT, P3175, DOI 10.1109/ROBOT.2008.4543694
Takubo T, 2005, P 2005 IEEE INT C RO, P1718
Trinkle J., 1991, P IEEE INT C ROB AUT, V2, P1245
Vahrenkamp N, 2013, IEEE INT CONF ROBOT, P1970, DOI 10.1109/ICRA.2013.6630839
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Yamawaki T, 2013, IEEE INT C INT ROBOT, P2485, DOI 10.1109/IROS.2013.6696706
Yoshida E, 2010, AUTON ROBOT, V28, P77, DOI 10.1007/s10514-009-9143-x
NR 25
TC 0
Z9 0
U1 3
U2 3
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2017
VL 31
IS 6
BP 322
EP 340
DI 10.1080/01691864.2016.1266965
PG 19
WC Robotics
SC Robotics
GA ES5KO
UT WOS:000399577800004
DA 2018-01-22
ER

PT J
AU Pham, QC
Caron, S
Lertkultanon, P
Nakamura, Y
AF Quang-Cuong Pham
Caron, Stephane
Lertkultanon, Puttichai
Nakamura, Yoshihiko
TI Admissible velocity propagation: Beyond quasi-static path planning for
high-dimensional robots
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE Dynamics; kinematics; motion planning
ID HUMANOID ROBOTS; SPECIFIED PATHS; MANIPULATORS; PARAMETERIZATION;
ALGORITHMS; CONTACT; SYSTEMS
AB Path-velocity decomposition is an intuitive yet powerful approach to addressing
the complexity of kinodynamic motion planning. The difficult trajectory planning
problem is solved in two separate, simpler steps: first, a path is found in the
configuration space that satisfies the geometric constraints (path planning), and
second, a time-parameterization of that path satisfying the kinodynamic constraints
is found. A fundamental requirement is that the path found in the first step must
be time-parameterizable. Most existing works fulfill this requirement by enforcing
quasi-static constraints during the path planning step, resulting in an important
loss in completeness. We propose a method that enables path-velocity decomposition
to discover truly dynamic motions, i.e. motions that are not quasi-statically
executable. At the heart of the proposed method is a new algorithm - Admissible
Velocity Propagation - which, given a path and an interval of reachable velocities
at the beginning of that path, computes exactly and efficiently the interval of all
the velocities the system can reach after traversing the path, while respecting the
system's kinodynamic constraints. Combining this algorithm with usual sampling-
based planners then gives rise to a family of new trajectory planners that can
appropriately handle kinodynamic constraints while retaining the advantages
associated with path-velocity decomposition. We demonstrate the efficiency of the
proposed method on some difficult kinodynamic planning problems, where, in
particular, quasi-static methods are guaranteed to fail.
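   A heavily simplified, hedged sketch of interval propagation along a discretized path
   is given below; it uses constant-acceleration bounds per segment rather than the
   limiting-curve integration of the actual Admissible Velocity Propagation algorithm,
   and all segment data are hypothetical.

   def propagate_velocity_interval(v0_min, v0_max, segments):
       # Each segment is (ds, a_min, a_max, v_cap): arc length, minimum/maximum path
       # acceleration, and a velocity cap from the kinodynamic constraints (all
       # placeholders for quantities the real AVP derives from the TOPP problem).
       lo, hi = v0_min, v0_max
       for ds, a_min, a_max, v_cap in segments:
           # Constant-acceleration bounds on the squared path velocity over ds.
           lo = max(0.0, lo * lo + 2.0 * a_min * ds) ** 0.5
           hi = (hi * hi + 2.0 * a_max * ds) ** 0.5
           hi = min(hi, v_cap)
           if lo > hi:
               return None  # the path cannot be traversed under the constraints
       return lo, hi

   print(propagate_velocity_interval(0.0, 0.2,
         [(0.1, -1.0, 1.0, 0.8), (0.1, -1.0, 0.5, 0.6)]))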
C1 [Quang-Cuong Pham; Lertkultanon, Puttichai] Nanyang Technol Univ, Sch Mech &
Aerosp Engn, 50 Nanyang Ave, Singapore 639798, Singapore.
[Caron, Stephane] Univ Montpellier, CNRS, LIRMM, Montpellier, France.
[Nakamura, Yoshihiko] Univ Tokyo, Dept Mechanoinformat, Tokyo, Japan.
RP Pham, QC (reprint author), Nanyang Technol Univ, Sch Mech & Aerosp Engn, 50
Nanyang Ave, Singapore 639798, Singapore.
EM cuong.pham@normalesup.org
RI Pham, Quang-Cuong/G-5050-2012; Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU JSPS postdoctoral fellowship; NTU, Singapore; MOE, Singapore [RG109/14]
FX The author(s) disclosed receipt of the following financial support for
the research, authorship, and/or publication of this article: This work
was supported by a JSPS postdoctoral fellowship, by a Start-Up Grant
from NTU, Singapore, and by a Tier 1 grant no RG109/14 from MOE,
Singapore.
CR BOBROW JE, 1985, INT J ROBOT RES, V4, P3, DOI 10.1177/027836498500400301
BOBROW JE, 1988, IEEE T ROBOTIC AUTOM, V4, P443, DOI 10.1109/56.811
Bullo F, 2001, IEEE T ROBOTIC AUTOM, V17, P402, DOI 10.1109/70.954753
Caron S, 2014, IEEE INT CONF ROBOT, P5818, DOI 10.1109/ICRA.2014.6907714
Diankov R, 2010, THESIS
DONALD B, 1993, J ACM, V40, P1048, DOI 10.1145/174147.174150
Escande A, 2013, ROBOT AUTON SYST, V61, P428, DOI 10.1016/j.robot.2013.01.008
Geraerts R, 2007, INT J ROBOT RES, V26, P845, DOI 10.1177/0278364907079280
Hauser K, 2014, INT J ROBOT RES, V33, P1231, DOI 10.1177/0278364914527855
Hauser K, 2008, INT J ROBOT RES, V27, P1325, DOI 10.1177/0278364908098447
Hsu D, 2003, IEEE INT CONF ROBOT, P4420
Hsu D, 2002, INT J ROBOT RES, V21, P233, DOI 10.1177/027836402320556421
Johnson J, 2012, IEEE INT CONF ROBOT, P2035, DOI 10.1109/ICRA.2012.6225233
Kalakrishnan M, 2011, IEEE INT C ROB AUT, P4569
KANT K, 1986, INT J ROBOT RES, V5, P72, DOI 10.1177/027836498600500304
Karaman S, 2011, INT J ROBOT RES, V30, P846, DOI 10.1177/0278364911406761
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
Kuffner JJ, 2000, P 2000 IEEE INT C RO, V2, P995, DOI DOI
10.1109/R0B0T.2000.844730
Kunz T, 2015, SPRINGER TRAC ADV RO, V107, P233, DOI 10.1007/978-3-319-16595-0_14
Laumond J. P., 1998, ROBOT MOTION PLANNIN
LaValle SM, INT J ROBOTICS RES, V20, P378
Lertkultanon P, 2014, I C CONT AUTOMAT ROB, P1392, DOI
10.1109/ICARCV.2014.7064519
Li Y, 2015, ALGORITHMIC FDN ROBO, P263
Mordatch I, 2012, ACM T GRAPHIC, V31, DOI 10.1145/2185520.2185539
NAKAMURA Y, 1991, IEEE T ROBOTIC AUTOM, V7, P500, DOI 10.1109/70.86080
Papadopoulos G., 2014, ARXIV14052872
Peng JF, 2005, INT J ROBOT RES, V24, P295, DOI 10.1177/0278364905051974
Perez A, 2012, IEEE INT CONF ROBOT, P2537, DOI 10.1109/ICRA.2012.6225177
Pham Q-C, 2013, ROBOTICS SCI SYSTEM
Pham Q-C, 2012, 2 IFTOMM ASIAN C MEC
Posa M, 2013, ALGORITHMIC FDN ROBO, P527
Pham QC, 2015, IEEE-ASME T MECH, V20, P3257, DOI 10.1109/TMECH.2015.2409479
Pham QC, 2014, IEEE T ROBOT, V30, P1533, DOI 10.1109/TRO.2014.2351113
Ratliff N, 2009, IEEE INT C ROB AUT, P489
SHILLER Z, 1992, J DYN SYST-T ASME, V114, P34, DOI 10.1115/1.2896505
SHILLER Z, 1991, IEEE T ROBOTIC AUTOM, V7, P241, DOI 10.1109/70.75906
Shiller Z, 1985, IEEE T ROBOTIC AUTOM, P614
SHIN KG, 1986, IEEE T AUTOMAT CONTR, V31, P501
Shkolnik A, 2009, IEEE INT C ROB AUT, P2859
Simeon T, 2002, IEEE T ROBOTIC AUTOM, V18, P42, DOI 10.1109/70.988973
SLOTINE JJE, 1989, IEEE T ROBOTIC AUTOM, V5, P118, DOI 10.1109/70.88024
Sucan IA, 2012, IEEE ROBOT AUTOM MAG, V19, P72, DOI 10.1109/MRA.2012.2205651
Sucan IA, 2012, IEEE T ROBOT, V28, P116, DOI 10.1109/TRO.2011.2160466
Suleiman W, 2010, IEEE T ROBOT, V26, P458, DOI 10.1109/TRO.2010.2047531
Tedrake R., 2009, P ROB SCI SYST SEATT
Verscheure D, 2009, IEEE T AUTOMAT CONTR, V54, P2318, DOI
10.1109/TAC.2009.2028959
Vukobratovic M, 2001, IEEE RAS INT C HUM R, P237
Zlajpah L, 1996, IEEE INT CONF ROBOT, P1572, DOI 10.1109/ROBOT.1996.506928
NR 49
TC 1
Z9 2
U1 2
U2 3
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD JAN
PY 2017
VL 36
IS 1
BP 44
EP 67
DI 10.1177/0278364916675419
PG 24
WC Robotics
SC Robotics
GA EL7AU
UT WOS:000394774300004
DA 2018-01-22
ER

PT J
AU Liu, P
Glas, DF
Kanda, T
Ishiguro, H
Hagita, N
AF Liu, Phoebe
Glas, Dylan F.
Kanda, Takayuki
Ishiguro, Hiroshi
Hagita, Norihiro
TI A Model for Generating Socially-Appropriate Deictic Behaviors Towards
People
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Human-robot interaction; Social robots; Pointing gestures; Deictic
behavior
ID VISUAL-SEARCH; MUSEUM; SPEED; ROBOT
AB Pointing behaviors are essential in enabling social robots to communicate about
a particular object, person, or space. Yet, pointing to a person can be considered
rude in many cultures, and as robots collaborate with humans in increasingly
diverse environments, they will need to effectively refer to people in a socially-
appropriate way. We confirmed in an empirical study that although people would
point precisely to an object to indicate where it is, they were reluctant to do so
when pointing to another person. We propose a model for selecting utterances and
pointing behaviors towards people in terms of a balance between understandability
and social appropriateness. Calibrating our proposed model based on empirical human
behavior, we developed a system able to autonomously select among six deictic
behaviors and execute them on a humanoid robot. We evaluated the system in an
experiment in a shopping mall, and the results show that the robot's deictic
behavior was perceived by both the listener and the referent as more polite, more
natural, and better overall when using our model, as compared with a model
considering understandability alone.
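   The selection rule can be sketched as a weighted trade-off between understandability
   and social appropriateness; the scores, the linear combination, and the candidate
   behaviors below are illustrative assumptions, not the calibrated model from the
   article.

   def select_deictic_behavior(candidates, alpha=0.5):
       # candidates maps a behavior name to a (understandability, appropriateness)
       # pair in [0, 1]; alpha weights understandability against appropriateness.
       def score(item):
           u, s = item[1]
           return alpha * u + (1.0 - alpha) * s
       return max(candidates.items(), key=score)[0]

   behaviors = {
       "precise_point": (0.95, 0.30),
       "open_palm_gesture": (0.80, 0.85),
       "verbal_reference_only": (0.55, 0.95),
   }
   print(select_deictic_behavior(behaviors))  # open_palm_gesture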
C1 [Liu, Phoebe; Glas, Dylan F.; Kanda, Takayuki; Ishiguro, Hiroshi; Hagita,
Norihiro] Adv Telecommun Res Inst Int, Kyoto, Japan.
RP Liu, P (reprint author), Adv Telecommun Res Inst Int, Kyoto, Japan.
EM phoebe@atr.jp; dylan@atr.jp; kanda@atr.jp;
ishiguro@sys.es.osaka-u.ac.jp; hagita@atr.jp
OI Kanda, Takayuki/0000-0002-9546-5825; Glas, Dylan/0000-0002-2071-1219
FU Ministry of Internal Affairs and Communications of Japan; JSPS KAKENHI
[25240042]
FX We would like to thank Satoshi Koizumi for facilitating the smooth
operation of the experiments. This study was funded in part by the
Ministry of Internal Affairs and Communications of Japan and in part by
JSPS KAKENHI Grant Number 25240042.
CR Bangerter A., 2007, MOG 2007 WORKSH MULT, P17
Bennewitz M, 2005, IEEE-RAS INT C HUMAN, P418
Berns K, 2010, 2010 ADVANCED TECHNOLOGIES FOR ENHANCING QUALITY OF LIFE (AT-
EQUAL), P121, DOI 10.1109/ATEQUAL.2010.30
Brooks A., 2006, P 1 ACM SIGCHI SIGAR, P297
BURGOON JK, 1984, HUM COMMUN RES, V10, P351, DOI 10.1111/j.1468-
2958.1984.tb00023.x
EDELMANN RJ, 1982, AUST J PSYCHOL, V34, P359, DOI 10.1080/00049538208254730
Glas DF, 2012, LASER BASED TRACKING, P158
Glas DF, 2009, ADV ROBOTICS, V23, P405, DOI 10.1163/156855309X408754
Hato Y, 2010, 2010 5 ACM IEEE INT, P301
Haywood SL, 2005, PSYCHOL SCI, V16, P362, DOI 10.1111/j.0956-7976.2005.01541.x
Holladay RM, 2014, IEEE ROMAN, P217, DOI 10.1109/ROMAN.2014.6926256
Ishii S, 1973, J COMMUN ASS PACIFIC, V2, P43
Kanda T, 2007, IEEE T ROBOT, V23, P962, DOI 10.1109/TRO.2007.904904
Kendon A., 2004, GESTURE VISIBLE ACTI
Kranstedt A, 2006, LECT NOTES ARTIF INT, V3881, P300
Kuhnlein P, 2003, 20033 SEB, V360
Kumari P, 2015, IEEE SENS J, V15, P4950, DOI 10.1109/JSEN.2015.2423152
Liu P, 2013, ACMIEEE INT CONF HUM, P267, DOI 10.1109/HRI.2013.6483598
Muller Cornelia, 2004, SEMANTICS PRAGMATICS, P234
Nieuwenhuisen M, 2013, INT J SOC ROBOT, V5, P549, DOI 10.1007/s12369-013-0206-y
Paraboni I, 2007, COMPUT LINGUIST, V33, P229, DOI 10.1162/coli.2007.33.2.229
Richie D, 1998, LATERAL VIEW ESSAYS
Sabelli A.M., 2011, 2011 6 ACM IEEE INT, P37
Sakurai S, 2007, 2007 IEEE ASME INT C, P1
Salem M, 2012, INT J SOC ROBOT, V4, P201, DOI 10.1007/s12369-011-0124-9
Schmidt J, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P3804, DOI
10.1109/IROS.2008.4650649
Schultz A. C., 2005, Interactions, V12, P22, DOI 10.1145/1052438.1052456
Semwal VB, 2015, ROBOT AUTON SYST, V70, P181, DOI 10.1016/j.robot.2015.02.009
Shiomi M, 2007, IEEE INTELL SYST, V22, P25, DOI 10.1109/MIS.2007.37
Spexard T, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-12, P934, DOI 10.1109/IROS.2006.281770
STERNBERG S, 1966, SCIENCE, V153, P652, DOI 10.1126/science.153.3736.652
Sugiyama O, 2006, J CONN SCI, V18, P379
TREISMAN AM, 1980, COGNITIVE PSYCHOL, V12, P97, DOI 10.1016/0010-0285(80)90005-5
TROUT DL, 1980, J NONVERBAL BEHAV, V4, P176, DOI 10.1007/BF00986818
Vaish A, 2014, ADV INTELL SYST, V236, P1467, DOI 10.1007/978-81-322-1602-5_147
Van Der Sluis I., 2001, LANGUAGE COMPUTERS, V37, P158
WANG QQ, 1994, PERCEPT PSYCHOPHYS, V56, P495, DOI 10.3758/BF03206946
Wolfe JM, 2004, VISION RES, V44, P1411, DOI 10.1016/j.visres.2003.11.024
WOLFE JM, 1994, PSYCHON B REV, V1, P202, DOI 10.3758/BF03200774
NR 39
TC 0
Z9 0
U1 0
U2 1
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD JAN
PY 2017
VL 9
IS 1
BP 33
EP 49
DI 10.1007/s12369-016-0348-9
PG 17
WC Robotics
SC Robotics
GA EK9ZZ
UT WOS:000394283800004
DA 2018-01-22
ER

PT J
AU Nakagawa, Y
Nakagawa, N
AF Nakagawa, Yuki
Nakagawa, Noriaki
TI Relationship Between Human and Robot in Nonverbal Communication
SO JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT
INFORMATICS
LA English
DT Article
DE nonverbal communication; humanoid robot; robot arm; work with robots;
life with robots
AB The function of a robot that lives together with people ultimately converges on the
problem of communication. We focus on nonverbal communication and on the relationship
between service robots and humans, and we present nonverbal communication experiments in
which robots build relationships with people. We also describe some ideal fictional
robots to live with. Our experience indicates that a relationship based on touch
communication is governed by three elements: appearance, motion, and predictable
behavior. These elements are grounded in embodiment; a human touches a robot only after
its motion feels safe and natural. The motivation to build a relationship with a robot
is physically determined by these three elements, with appearance and motion being the
most important. Evaluating such a relationship is complicated because it grows depending
on the time spent together and on the motivation to relate. These observations were
demonstrated with a life-sized humanoid robot and a robot arm at exhibitions. Based on
these results, we consider evaluation methods for understanding the relationship between
robots and humans in near-future robot development.
C1 [Nakagawa, Yuki; Nakagawa, Noriaki] RT Corp, Chiyoda Ku, 3-2-13 Sotokanda, Tokyo
1010021, Japan.
RP Nakagawa, Y (reprint author), RT Corp, Chiyoda Ku, 3-2-13 Sotokanda, Tokyo
1010021, Japan.
EM dev@rt-net.jp
CR Fujio Fujiko. F., 1974, DORAEMON, V6, P45
Hirama S., 2016, NIKKEI LINUX
Shiomi M., 2005, INTERACTION, P127
Sonoyama T., 2007, MAINICHI COMMUNICATI
NR 4
TC 0
Z9 0
U1 2
U2 4
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 1343-0130
EI 1883-8014
J9 J ADV COMPUT INTELL
JI J. Adv. Comput. Intell. Inform.
PD JAN
PY 2017
VL 21
IS 1
BP 20
EP 24
PG 5
WC Computer Science, Artificial Intelligence
SC Computer Science
GA EJ8IT
UT WOS:000393469700003
DA 2018-01-22
ER

PT J
AU Mamiya, S
Sano, S
Uchiyama, N
AF Mamiya, Shotaro
Sano, Shigenori
Uchiyama, Naoki
TI Foot Structure with Divided Flat Soles and Springs for Legged Robots and
Experimental Verification
SO JOURNAL OF ROBOTICS AND MECHATRONICS
LA English
DT Article
DE biped walking; foot structure with spring; reaction measurement; landing
control
ID HUMANOID ROBOT
AB Walking robots must realize practical ambulation to provide social and industrial
support in human living environments. Although a four-legged robot can walk through
rough terrain effectively, most legged robots - particularly biped robots - have
difficulty negotiating rough terrain. We therefore focus on a foot structure and landing
control that enable any type of legged robot to walk through rough terrain. When a
walking robot lands on the ground, it is difficult to detect the detailed geometry and
dynamic properties of the ground surface. The new foot structure we propose adapts to
ground surfaces of different geometries and hardness; the foot has four-part flat soles.
The landing controller we apply to a robot with the proposed foot structure increases
the stability of contact with the ground. We verify the effectiveness of the proposed
foot structure in experiments.
C1 [Mamiya, Shotaro; Sano, Shigenori; Uchiyama, Naoki] Toyohashi Univ Technol, Dept
Mech Engn, Tempaku Ku, 1-1 Hibarigaoka, Toyohashi, Aichi 4418580, Japan.
RP Uchiyama, N (reprint author), Toyohashi Univ Technol, Dept Mech Engn, Tempaku
Ku, 1-1 Hibarigaoka, Toyohashi, Aichi 4418580, Japan.
EM uchiyama@tut.jp
CR Hashimoto K., 2005, P 23 ANN C ROB SOC J
HASHIMOTO K, 2007, ROBOTICS SOC JAPAN, V25, P851
Hashimoto K, 2007, IEEE INT CONF ROBOT, P1869, DOI 10.1109/ROBOT.2007.363594
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Mamiya S., 2014, P 13 IEEE INT WORKSH
Nakajima Shuro, 2008, J ROBOTICS MECHATRON, V20, P913
Nakano E., 2004, ADV MOBILE ROBOTICS, P153
Park J, 2007, IEEE INT CONF ROBOT, P2682, DOI 10.1109/ROBOT.2007.363870
Sano S, 2008, AMC '08: 10TH INTERNATIONAL WORKSHOP ON ADVANCED MOTION CONTROL,
VOLS 1 AND 2, PROCEEDINGS, P480
Sato T., 2010, P 36 ANN C IEEE IND, P1639
Shoji M., 2001, J SOC BIOMECHANISMS, V25, P36
Terashima K., 2012, CONTROL ENG, P102
YAMADA M, 2011, T JAPAN SOC MECH E C, V77, P2734
Yamada M., 2010, P IEEE ASME INT C AD, P806, DOI DOI 10.1109/AIM.2010.5695760
Yamaguchi J., 1996, J ROBOTICS SOC JAPAN, V14, P546
NR 17
TC 0
Z9 0
U1 3
U2 4
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 0915-3942
EI 1883-8049
J9 J ROBOT MECHATRON
JI J. Robot. Mechatron.
PD DEC
PY 2016
VL 28
IS 6
BP 799
EP 807
DI 10.20965/jrm.2016.p0799
PG 9
WC Robotics
SC Robotics
GA EJ8ED
UT WOS:000393457000003
DA 2018-01-22
ER

PT J
AU Khummongkol, R
Yokota, M
AF Khummongkol, Rojanee
Yokota, Masao
TI Computer simulation of human-robot interaction through natural language
SO ARTIFICIAL LIFE AND ROBOTICS
LA English
DT Article
DE Human-robot interaction; Natural language understanding; Problem finding
and solving
AB Human-robot interaction in natural (i.e., human) language is simulated as the
intelligent management of an English conversation between a humanoid robot and
several kinds of imagined people. The robot is tasked with helping a disabled old
man by comprehending his intention through dialogue, and its final response to
his intention is animated graphically. When the robot finds a problem in a
situation while helping him, it tries to solve that problem by employing its
knowledge and the information acquired by inquiring of the people. This is an
application of integrative multimedia understanding based on the intermediate
knowledge representation in mental-image description language (L-md), the formal
language proposed in mental image directed semantic theory.
C1 [Khummongkol, Rojanee; Yokota, Masao] Fukuoka Inst Technol, Fukuoka, Japan.
RP Yokota, M (reprint author), Fukuoka Inst Technol, Fukuoka, Japan.
EM b414002@bene.fit.ac.jp; yokota@fit.ac.jp
CR Ferrucci D, 2010, AAAI AI MAG FAL
Heylighen F, 1988, CYBERNET SYST, V88, P949
Khummongkol R, 2015, IEEE 7 INT C AW SCI
Russell S. J., 2010, ARTIFICIAL INTELLIGE
WEIZENBAUM J, 1966, COMMUN ACM, V9, P36, DOI 10.1145/365153.365168
Winograd T, 2006, ARTIF INTELL, V170, P1256, DOI 10.1016/j.artint.2006.10.011
Yokota M, 2012, APPL COMPUT INTELL S, DOI 10.1155/2012/184103
Yokota M, 2009, NATURAL LANGUAGE PROCESSING AND COGNITIVE SCIENCE, PROCEEDINGS,
P133
NR 8
TC 0
Z9 0
U1 0
U2 4
PU SPRINGER
PI NEW YORK
PA 233 SPRING ST, NEW YORK, NY 10013 USA
SN 1433-5298
EI 1614-7456
J9 ARTIF LIFE ROBOT
JI Artif. Life Robot.
PD DEC
PY 2016
VL 21
IS 4
BP 510
EP 519
DI 10.1007/s10015-016-0306-5
PG 10
WC Robotics
SC Robotics
GA EG3DC
UT WOS:000390923000017
DA 2018-01-22
ER

PT J
AU Ren, FJ
Huang, Z
AF Ren, Fuji
Huang, Zhong
TI Automatic Facial Expression Learning Method Based on Humanoid Robot
XIN-REN
SO IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS
LA English
DT Article
DE Expression mapping; forward kinematics model; humanoid robot; inverse
kinematics model; movement similarity; movement smoothness
ID FACE; REPRESENTATION; COMMUNICATION; MOVEMENTS; VIDEO
AB The ability of a humanoid robot to display human-like facial expressions is
crucial to natural human-computer interaction. To meet this requirement for an
imitative humanoid robot, XIN-REN, an automatic facial expression learning
method is proposed. In this method, first, a forward kinematics model, designed
to reflect the nonlinear mapping between servo displacement vectors and the
corresponding expression shape vectors, is converted into a linear relationship
between the mechanical energy of the servo displacements and the potential
energy of the feature points, based on the energy conservation principle.
Second, an improved inverse kinematics model is established under the constraints
of instantaneous similarity and movement smoothness. Finally, online expression
learning is employed to determine the optimal servo displacements for transferring
the facial expressions of a human performer to the robot. To illustrate the
performance of the proposed method, we conduct evaluation experiments on the
forward kinematics model and the inverse kinematics model, based on the data
collected from the robot's random states as well as fixed procedures by animators.
Further, we evaluate the facial imitation ability with different values of the
weighting factor, according to three sequential indicators (space-similarity, time-
similarity, and movement smoothness). Experimental results indicate that the
deviations in mean shape and position do not exceed 6 pixels and 3 pixels,
respectively, and the average servo displacement deviation does not exceed 0.8%.
Compared with other related studies, the proposed method maintains better
space-time similarity with the performer and ensures smoother trajectories in
multiframe sequential imitation.
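As a rough illustration of the pipeline described above (not the authors'
implementation), the following Python sketch fits a linear forward map from servo
displacements to facial feature points from calibration data, then recovers servo
commands for a target expression by regularized least squares, with the
regularizer standing in for the paper's movement-smoothness constraint. All
names, dimensions, and parameters are hypothetical.

# Illustrative sketch only (not the authors' code): fit a linear map from servo
# displacements to facial feature points, then invert it with a smoothness
# penalty toward the previous servo command.
import numpy as np

def fit_forward_map(servo_data, feature_data):
    """servo_data: (N, m) servo displacements; feature_data: (N, k) feature coords.
    Returns A of shape (k, m) such that features ~= A @ servo."""
    X, *_ = np.linalg.lstsq(servo_data, feature_data, rcond=None)
    return X.T

def inverse_command(A, target_features, prev_servo, smooth=0.1):
    """Solve min ||A q - y||^2 + smooth * ||q - q_prev||^2 in closed form."""
    k, m = A.shape
    H = A.T @ A + smooth * np.eye(m)
    g = A.T @ target_features + smooth * prev_servo
    return np.linalg.solve(H, g)

# toy usage with random calibration data
rng = np.random.default_rng(0)
servo = rng.uniform(0, 1, (200, 6))          # 6 servos
true_A = rng.normal(size=(10, 6))            # 10 feature coordinates
feats = servo @ true_A.T + 0.01 * rng.normal(size=(200, 10))
A = fit_forward_map(servo, feats)
q = inverse_command(A, feats[0], prev_servo=np.zeros(6))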
C1 [Ren, Fuji] Univ Tokushima, Fac Engn, Tokushima 7708501, Japan.
[Ren, Fuji; Huang, Zhong] Hefei Univ Technol, Sch Comp & Informat, Hefei 230009,
Peoples R China.
[Huang, Zhong] Anqing Normal Univ, Sch Phys & Elect Engn, Anqing 246000, Peoples
R China.
RP Huang, Z (reprint author), Hefei Univ Technol, Sch Comp & Informat, Hefei
230009, Peoples R China.
EM ren@is.tokushima-u.ac.jp; huangzhong3315@163.com
FU National Natural Science Foundation of China [61432004, 61472117];
Beijing Advanced Innovation Center for Imaging Technology
[BAICIT-2016012]; JSPS KAKENHI [15H01712]; Natural Science Research
Project of the Education Department of Anhui Province [AQKJ2015B013]
FX This work was supported in part by the National Natural Science
Foundation of China under Grant 61432004 and Grant 61472117, in part by
the Beijing Advanced Innovation Center for Imaging Technology (No.
BAICIT-2016012), and JSPS KAKENHI Grant (No. 15H01712), and in part by
the Natural Science Research Project of the Education Department of
Anhui Province (No. AQKJ2015B013). This paper was recommended by
Associate Editor J. Han. (Corresponding author: Zhong Huang.)
CR Ahn HS, 2013, PROCEEDINGS OF THE 2013 6TH IEEE CONFERENCE ON ROBOTICS,
AUTOMATION AND MECHATRONICS (RAM), P7, DOI 10.1109/RAM.2013.6758551
Ahn Ho Seok, 2012, P 2012 IEEE RAS INT, P799
Ahn Ho Seok, 2013, INT J HUM ROBOT, V10, P1
Asthana A, 2012, IEEE T VIS COMPUT GR, V18, P1511, DOI 10.1109/TVCG.2011.157
Becker-Asano C., 2011, P 6 INT WORKSH REC C, P1, DOI DOI
10.1109/WACI.2011.5953147
Breazeal C, 2002, DESIGNING SOCIABLE R
Canamero L, 2001, IEEE T SYST MAN CY A, V31, P454, DOI 10.1109/3468.952719
Dornaika F, 2013, ENG APPL ARTIF INTEL, V26, P467, DOI
10.1016/j.engappai.2012.09.002
Farag M. H., 2014, INT J MATH ARCH, V5, P33
Feng YT, 2012, COMPUT METHOD APPL M, V205, P169, DOI 10.1016/j.cma.2011.02.010
Gibert G, 2013, SPEECH COMMUN, V55, P135, DOI 10.1016/j.specom.2012.07.001
Gross R, 2005, IMAGE VISION COMPUT, V23, P1080, DOI 10.1016/j.imavis.2005.07.009
Habib A., 2014, P 2014 IEEE INT C AU, P1159
Ishiguro H, 2007, INT J ROBOT RES, V26, P105, DOI 10.1177/0278364907074474
Jaeckel P, 2008, ROBOT AUTON SYST, V56, P1042, DOI 10.1016/j.robot.2008.09.002
Kamide H, 2014, INT J HUM-COMPUT ST, V72, P451, DOI 10.1016/j.ijhcs.2014.01.004
Kanda T, 2008, IEEE T ROBOT, V24, P725, DOI 10.1109/TRO.2008.921566
Kim W, 2014, IEEE SIGNAL PROC LET, V21, P1336, DOI 10.1109/LSP.2014.2334656
Kondo Y, 2013, J HUM-ROBOT INTERACT, V2, P133, DOI 10.5898/JHRI.2.1.Kondo
Lin DC, 2013, IEEE T HUM-MACH SYST, V43, P479, DOI 10.1109/TSMC.2013.2277923
Magtanong E., 2012, LATEST ADV ROBOT KIN, P181, DOI 10.1007/978-94-007-4620-6_23
Manohar V, 2014, IEEE T HUM-MACH SYST, V44, P362, DOI 10.1109/THMS.2014.2309662
Mavridis N, 2015, ROBOT AUTON SYST, V63, P22, DOI 10.1016/j.robot.2014.09.031
Mohammad Y, 2015, INT J SOC ROBOT, V7, P497, DOI 10.1007/s12369-015-0282-2
Montoliu R, 2009, COMPUT VIS IMAGE UND, V113, P790, DOI
10.1016/j.cviu.2009.01.006
Nefti-Meziani S, 2015, ROBOT AUTON SYST, V68, P129, DOI
10.1016/j.robot.2014.12.016
Noda K, 2014, ROBOT AUTON SYST, V62, P721, DOI 10.1016/j.robot.2014.03.003
Ontanon S, 2014, EXPERT SYST APPL, V41, P5212, DOI 10.1016/j.eswa.2014.02.049
Park JW, 2015, J INTELL ROBOT SYST, V78, P443, DOI 10.1007/s10846-014-0066-1
Ren F, 2016, IEEE T AFFECT COMPUT, V7, P176, DOI 10.1109/TAFFC.2015.2457915
Ren FJ, 2015, IEEJ T ELECTR ELECTR, V10, P713, DOI 10.1002/tee.22151
RICE WR, 1989, EVOLUTION, V43, P223, DOI 10.1111/j.1558-5646.1989.tb04220.x
Sasaki K, 2006, GAIT POSTURE, V23, P383, DOI 10.1016/j.gaitpost.2005.05.002
Shayganfar M, 2012, IEEE INT C INT ROBOT, P4577, DOI 10.1109/IROS.2012.6385901
Shimizu T, 2015, IEEE T NEUR NET LEAR, V26, P1035, DOI
10.1109/TNNLS.2014.2333092
Smolyanskiy N, 2014, IMAGE VISION COMPUT, V32, P860, DOI
10.1016/j.imavis.2014.08.005
Tadesse Y., 2013, T CONTROL MECH SYST, V2, P337
Tadesse Y, 2012, J MECH ROBOT, V4, DOI 10.1115/1.4006519
Taix M, 2013, J COMPUT SCI-NETH, V4, P269, DOI 10.1016/flocs.2012.08.001
Trovato G., 2013, INT J HUM ROBOT, V10
Wang HW, 2013, NEUROCOMPUTING, V122, P490, DOI 10.1016/j.neucom.2013.05.025
Wilbers F., 2007, P IROS2007, P542
Yanik PM, 2014, IEEE T HUM-MACH SYST, V44, P41, DOI 10.1109/TSMC.2013.2291714
Yoo JK, 2009, ROBOT AUTON SYST, V57, P973, DOI 10.1016/j.robot.2009.07.012
Yu H, 2014, IEEE T HUM-MACH SYST, V44, P386, DOI 10.1109/THMS.2014.2313912
Zhu T., 2014, J ROBOT, V36, P647
NR 46
TC 0
Z9 0
U1 1
U2 12
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2168-2291
EI 2168-2305
J9 IEEE T HUM-MACH SYST
JI IEEE T. Hum.-Mach. Syst.
PD DEC
PY 2016
VL 46
IS 6
BP 810
EP 821
DI 10.1109/THMS.2016.2599495
PG 12
WC Computer Science, Artificial Intelligence; Computer Science, Cybernetics
SC Computer Science
GA ED5AT
UT WOS:000388864400004
OA gold
DA 2018-01-22
ER

PT J
AU Ryoo, YJ
Nassiraei, AAO
AF Ryoo, Young-Jae
Nassiraei, Amir Ali Orough
TI Design of a Humanoid Shoulder Complex Emulating Human Shoulder Girdle
Motion Using the Minimum Number of Actuators
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Editorial Material
C1 [Ryoo, Young-Jae] Mokpo Natl Univ, Muan, South Korea.
[Nassiraei, Amir Ali Orough] Kyushu Inst Technol, Kitakyushu, Fukuoka, Japan.
RP Ryoo, YJ (reprint author), Mokpo Natl Univ, Muan, South Korea.
NR 0
TC 0
Z9 0
U1 0
U2 0
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD DEC
PY 2016
VL 13
IS 4
SI SI
AR 1602002
DI 10.1142/S0219843616020023
PG 2
WC Robotics
SC Robotics
GA EE1IL
UT WOS:000389334500002
DA 2018-01-22
ER

PT J
AU Tangkaratt, V
Morimoto, J
Sugiyama, M
AF Tangkaratt, Voot
Morimoto, Jun
Sugiyama, Masashi
TI Model-based reinforcement learning with dimension reduction
SO NEURAL NETWORKS
LA English
DT Article
DE Model-based reinforcement learning; Transition model estimation;
Sufficient dimension reduction
ID CONDITIONAL DENSITY-ESTIMATION; PARAMETER-BASED EXPLORATION; MUTUAL
INFORMATION; INVERSE REGRESSION; POLICY GRADIENTS
AB The goal of reinforcement learning is to learn an optimal policy which controls
an agent to acquire the maximum cumulative reward. The model-based reinforcement
learning approach learns a transition model of the environment from data, and then
derives the optimal policy using the transition model. However, learning an
accurate transition model in high-dimensional environments requires a large amount
of data which is difficult to obtain. To overcome this difficulty, in this paper,
we propose to combine model-based reinforcement learning with the recently
developed least-squares conditional entropy (LSCE) method, which simultaneously
performs transition model estimation and dimension reduction. We also further
extend the proposed method to imitation learning scenarios. The experimental
results show that policy search combined with LSCE performs well for high-
dimensional control tasks including real humanoid robot control. (C) 2016 Elsevier
Ltd. All rights reserved.
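As a rough illustration of the generic loop described above (not the authors'
LSCE-based estimator), the following Python sketch projects states onto a
low-dimensional subspace with plain PCA, fits a linear transition model in that
subspace, and selects actions by random-shooting planning with the learned model.
All names, dimensions, and the reward function are hypothetical.

# Minimal model-based RL sketch with dimension reduction (PCA stand-in), not the
# paper's method: fit a reduced transition model, then plan by random shooting.
import numpy as np

class ReducedModel:
    def __init__(self, dim):
        self.dim = dim

    def fit(self, states, actions, next_states):
        # dimension reduction: top principal directions of the observed states
        mu = states.mean(axis=0)
        _, _, Vt = np.linalg.svd(states - mu, full_matrices=False)
        self.W, self.mu = Vt[: self.dim], mu
        z = (states - mu) @ self.W.T
        X = np.hstack([z, actions])
        # linear transition model in the reduced space: z' ~= X @ M
        self.M, *_ = np.linalg.lstsq(X, (next_states - mu) @ self.W.T, rcond=None)

    def predict(self, state, action):
        z = (state - self.mu) @ self.W.T
        z_next = np.concatenate([z, action]) @ self.M
        return self.mu + z_next @ self.W      # map back to the original space

def plan(model, state, reward_fn, n_samples=256, horizon=5, action_dim=2):
    """Random-shooting planner: return the first action of the best sampled sequence."""
    rng = np.random.default_rng(0)
    best_a, best_r = None, -np.inf
    for _ in range(n_samples):
        seq = rng.uniform(-1, 1, (horizon, action_dim))
        s, total = state, 0.0
        for a in seq:
            s = model.predict(s, a)
            total += reward_fn(s)
        if total > best_r:
            best_r, best_a = total, seq[0]
    return best_a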
C1 [Tangkaratt, Voot] Univ Tokyo, Dept Comp Sci, Tokyo 1138654, Japan.
[Morimoto, Jun] ATR Computat Neurosci Lab, Dept Brain Robot Interface, Nara,
Japan.
[Sugiyama, Masashi] Univ Tokyo, Dept Complex Sci & Engn, Tokyo 1138654, Japan.
RP Tangkaratt, V (reprint author), Univ Tokyo, Dept Comp Sci, Tokyo 1138654, Japan.
EM voot@ms.k.u-tokyo.ac.jp; xmorimo@atr.jp; sugi@k.u-tokyo.ac.jp
FU KAKENHI [23120004, 25700022]; NEDO [15101157-0]
FX VT was supported by KAKENHI 23120004, JM was supported by KAKENHI
23120004 and NEDO 15101157-0, and MS was supported by KAKENHI 25700022.
CR Absil PA, 2008, OPTIMIZATION ALGORITHMS ON MATRIX MANIFOLDS, P1
Argall BD, 2009, ROBOT AUTON SYST, V57, P469, DOI 10.1016/j.robot.2008.10.024
ARONSZAJN N, 1950, T AM MATH SOC, V68, P337
Baxter J, 2001, J ARTIF INTELL RES, V15, P319
Boumal N, 2014, J MACH LEARN RES, V15, P1455
Boutilier C., 1995, P 14 INT JOINT C ART, V2, P1104
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Cook RD, 2000, COMMUN STAT-THEOR M, V29, P2109, DOI 10.1080/03610920008832598
Cook RD, 2005, J AM STAT ASSOC, V100, P410, DOI 10.1198/016214504000001501
Dean T., 1989, Computational Intelligence, V5, P142, DOI 10.1111/j.1467-
8640.1989.tb00324.x
Deisenroth M.P., 2011, P 28 INT C MACH LEAR, P465
FUKUMIZU K, 2009, THE ANNALS OF STATIS, V37, P1871, DOI DOI 10.1214/08-AOS637
Fukumizu K, 2014, J AM STAT ASSOC, V109, P359, DOI 10.1080/01621459.2013.838167
Guestrin C., 2002, MACH LEARN P 19 INT, P235
Hachiya H, 2010, LECT NOTES ARTIF INT, V6321, P474, DOI 10.1007/978-3-642-15880-
3_36
Ijspeert A., 2002, ROB AUT 2002 P ICRA0, V2002, P1398
Ko J, 2007, IEEE INT CONF ROBOT, P742, DOI 10.1109/ROBOT.2007.363075
Kroon M, 2009, EIGHTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND
APPLICATIONS, PROCEEDINGS, P324, DOI 10.1109/ICMLA.2009.71
Kupcsik A.G., 2013, P 27 AAAI C ART INT
LI KC, 1991, J AM STAT ASSOC, V86, P316, DOI 10.2307/2290563
Miyamae A., 2010, ADV NEURAL INFORM PR, P1660
Morimoto J, 2008, IEEE INT CONF ROBOT, P2711, DOI 10.1109/ROBOT.2008.4543621
Nguyen T., 2013, P 30 INT C MACH LEAR, P498
Pearson K, 1900, PHILOS MAG, V50, P157, DOI 10.1080/14786440009463897
Peters J, 2006, 2006 IEEE/RSJ International Conference on Intelligent Robots and
Systems, Vols 1-12, P2219, DOI 10.1109/IROS.2006.282564
Principe JC, 2000, J VLSI SIG PROCESS S, V26, P61, DOI 10.1023/A:1008143417156
Rasmussen CE, 2005, ADAPT COMPUT MACH LE, P1
Sainui J, 2014, IEICE T INF SYST, VE97D, P2806, DOI 10.1587/transinf.2014EDL8111
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S., 2009, SL SIMULATION REALTI
Sehnke F, 2010, NEURAL NETWORKS, V23, P551, DOI 10.1016/j.neunet.2009.12.004
Sugimoto N., 2014, 14 IEEE RAS INT C HU, P554
Sugimoto N, 2016, IEEE ROBOT AUTOM MAG, V23, P96, DOI 10.1109/MRA.2015.2511681
Sugiyama M, 2010, IEICE T INF SYST, VE93D, P583, DOI 10.1587/transinf.E93.D.583
Suzuki T, 2013, NEURAL COMPUT, V25, P725, DOI 10.1162/NECO_a_00407
Suzuki T, 2009, BMC BIOINFORMATICS, V10, DOI 10.1186/1471-2105-10-S1-S52
Tangkaratt V., 2015, ARXIV150801019 CORR
Tangkaratt V, 2015, NEURAL COMPUT, V27, P228, DOI 10.1162/NECO_a_00683
Tangkaratt V, 2014, NEURAL NETWORKS, V57, P128, DOI 10.1016/j.neunet.2014.06.006
Ueno T., 2012, ADV NEURAL INFORM PR, P2366
WILLIAMS RJ, 1992, MACH LEARN, V8, P229, DOI 10.1007/BF00992696
XIA YA, 2007, THE ANNALS OF STATIS, V35, P2654, DOI DOI
10.1214/009053607000000352
Zhao TT, 2013, NEURAL COMPUT, V25, P1512, DOI 10.1162/NECO_a_00452
Zhao TT, 2012, NEURAL NETWORKS, V26, P118, DOI 10.1016/j.neunet.2011.09.005
NR 44
TC 1
Z9 1
U1 5
U2 20
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0893-6080
EI 1879-2782
J9 NEURAL NETWORKS
JI Neural Netw.
PD DEC
PY 2016
VL 84
BP 1
EP 16
DI 10.1016/j.neunet.2016.08.005
PG 16
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA ED0RF
UT WOS:000388548900001
PM 27639719
DA 2018-01-22
ER

PT J
AU Solis, J
Sugita, Y
Petersen, K
Takanishi, A
AF Solis, Jorge
Sugita, Yoshihisa
Petersen, Klaus
Takanishi, Atsuo
TI Development of an anthropomorphic musical performance robot capable of
playing the flute and saxophone: Embedding pressure sensors into the
artificial lips as well as the re-designing of the artificial lips and
lung mechanisms
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Humanoid Robot; Pressure sensing; Mechanical Design; Music
AB The authors are developing anthropomorphic musical performance robots as an
approach to understanding human motor control and to enhancing human-robot
interaction from an engineering point of view. For this purpose, since 1990 we
have developed an anthropomorphic flutist robot. As one of our long-term
approaches, we aim to develop an anthropomorphic musical performance robot capable
of playing different kinds of woodwind instruments, e.g. flute and saxophone. In
this paper, the improvements of the mechanical design and sensing system of the
Waseda Flutist Robot No. 4 Refined V (WF-4RV) are detailed. As for the sensing
system, an array of sensors has been designed to detect the lip's pressure
distribution. On the other hand, the lips and lung have been re-designed to enable
the flutist robot to play the saxophone. (C) 2016 Elsevier B.V. All rights
reserved.
C1 [Solis, Jorge] Karlstad Univ, Dept Engn & Phys, Karlstad, Sweden.
[Solis, Jorge; Petersen, Klaus] Waseda Univ, Res Inst Sci & Engn, Tokyo, Japan.
[Sugita, Yoshihisa] Waseda Univ, Grad Sch Adv Sci & Engn, Tokyo, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Mech Engn, Tokyo, Japan.
[Solis, Jorge; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Tokyo, Japan.
RP Solis, J (reprint author), Karlstad Univ, Dept Engn & Phys, Karlstad, Sweden.
EM solis@ieee.org; sugita@toki.waseda.jp;
klaus@takanishi.mech.waseda.ac.jp; contact@takanishi.mech.waseda.ac.jp
FU Gifu Prefecture; Global COE Program "Global Robot Academia" from the
Ministry of Education, Culture, Sports, Science and Technology of Japan;
Japanese Ministry of Education, Culture, Sports, Science and Technology
[23700238]
FX Part of this research was done at the Humanoid Robotics Institute (HRI),
Waseda University and at the Center for Advanced Biomedical Sciences
(TWINs). This research is supported (in part) by a Grant-in-Aid for the
WABOT-HOUSE Project by Gifu Prefecture. This work is also supported (in
part) by Global COE Program "Global Robot Academia" from the Ministry of
Education, Culture, Sports, Science and Technology of Japan. This study
is supported (in part) by a Grant-in-Aid for Young Scientists (B)
provided by the Japanese Ministry of Education, Culture, Sports, Science
and Technology, No. 23700238 (Jorge Solis, PI).
CR Adachi S., 2004, Acoustical Science and Technology, V25, P400, DOI
10.1250/ast.25.400
Ando Y., 1970, PRINCIPLES SOUND PRO, P297
de vaucanson J., MECANISME FLUTEUR AU, V5
Doc JB, 2014, ACTA ACUST UNITED AC, V100, P543, DOI 10.3813/AAA.918734
Fuks L., 1996, BLOWING PRESSURES RE, P48
Isoda S, 2003, IEEE INT CONF ROBOT, P3582
Klaedefabrik K.B., 2005, M RICHES MASKINERNE, P10
Petersen K., 2010, AUTONOMOUS ROBOTS J, V28, P439
SCHUMACHER RT, 1981, ACUSTICA, V48, P73
SOLIS J, 2007, P 16 IEEE INT C ROB, P780
Solis J., 2012, P 2 IFTOMM AS C MECH, pID77
Solis J, 2006, INT J HUM ROBOT, V3, P127, DOI 10.1142/S0219843606000709
Solis J, 2010, ADV ROBOTICS, V24, P629, DOI 10.1163/016918610X493516
Solis J, 2009, MECH MACH THEORY, V44, P527, DOI
10.1016/j.mechmachtheory.2008.09.002
Takashima S., 2006, P IEEE RSJ INT C INT, P30
[Anonymous], 2009, TRUMPED ROBOT
NR 16
TC 0
Z9 0
U1 7
U2 18
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD DEC
PY 2016
VL 86
BP 174
EP 183
DI 10.1016/j.robot.2016.08.024
PG 10
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA EC3UH
UT WOS:000388051800015
DA 2018-01-22
ER

PT J
AU Li, X
Imanishi, H
Minami, M
Matsuno, T
Yanou, A
AF Li, Xiang
Imanishi, Hiroki
Minami, Mamoru
Matsuno, Takayuki
Yanou, Akira
TI Dynamical Model of Walking Transition Considering Nonlinear Friction
with Floor
SO JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT
INFORMATICS
LA English
DT Article
DE humanoid; slipping; friction; bipedal; dynamical
AB Biped locomotion generated by a controller based on the Zero-Moment Point
(ZMP), which is known as a reliable control method, looks different from human
walking in that ZMP-based walking does not include a falling state, and it
resembles a monkey's gait because of its knee-bent walking profile. However,
walking control that does not depend on the ZMP is vulnerable to falling over.
Therefore, keeping such event-driven, dynamical walking stable is an important
issue for realizing human-like natural walking. In this research, a walking model
of a humanoid robot that includes slipping, bumping, and surface and line contact
of the feet is discussed, and its dynamical equations are derived by the extended
Newton-Euler (NE) method. In this paper we introduce the humanoid model that
includes the slipping foot and verify the model.
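As one common way to represent the nonlinear floor friction that the title refers
to (not the authors' formulation), the following Python sketch implements a
stick-slip Coulomb-plus-viscous contact friction model; all coefficients and
thresholds are hypothetical.

# Stick-slip ground-friction sketch (Coulomb + viscous), shown only as an
# illustration of nonlinear floor friction; not the paper's model.
import numpy as np

def tangential_friction(v_t, f_n, f_applied, mu_s=0.6, mu_k=0.5, c_v=5.0,
                        v_eps=1e-3):
    """v_t: tangential foot velocity [m/s]; f_n: normal force [N] (>= 0);
    f_applied: tangential force the leg applies through the contact [N]."""
    if abs(v_t) < v_eps:                      # (near) sticking contact
        if abs(f_applied) <= mu_s * f_n:      # static friction can hold it
            return -f_applied                 # friction cancels the applied force
        v_sign = np.sign(f_applied)           # breakaway: slipping starts
    else:
        v_sign = np.sign(v_t)
    # kinetic Coulomb friction plus a viscous term opposing the slip
    return -v_sign * mu_k * f_n - c_v * v_t

print(tangential_friction(v_t=0.0, f_n=300.0, f_applied=100.0))  # sticks: -100 N
print(tangential_friction(v_t=0.2, f_n=300.0, f_applied=100.0))  # slips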
C1 [Li, Xiang; Imanishi, Hiroki; Minami, Mamoru; Matsuno, Takayuki; Yanou, Akira]
Okayama Univ, Grad Sch Nat Sci & Technol, Kita Ku, 3-1-1 Tsushima Naka, Okayama,
Okayama 7008530, Japan.
RP Li, X (reprint author), Okayama Univ, Grad Sch Nat Sci & Technol, Kita Ku, 3-1-1
Tsushima Naka, Okayama, Okayama 7008530, Japan.
EM pzkm87r2@s.okayama-u.ac.jp
CR Aoyama T, 2009, 2009 IEEE ASME T MEC, P712
Chevallereau C, 2009, IEEE T ROBOTICS, V25
Dau H, 2010, P IEEE RSJ INT C INT, P172
Featherstone R, 2000, IEEE INT C ROB AUT, P826, DOI [10.1109/robot.2000.844153,
DOI 10.1109/ROBOT.2000.844153]
Feng T, 2015, 20 INT S ART LIF ROB
HEMAMI H, 1979, IEEE T AUTOMAT CONTR, V24, P526, DOI 10.1109/TAC.1979.1102105
Herdt A, 2010, P IEEE RSJ INT C INT, P190
Huang Y, 2010, IEEE INT C INT ROBOT, P4077, DOI 10.1109/IROS.2010.5650421
Kajita S, 2010, P IEEE RSJ INT C INT, P4489, DOI DOI 10.1109/IROS.2010.5651082
Kobayashi Y, 2013, IEEE INT C ROB AUT I, P4764
Kouchi M., 2000, ANTHROPOMETRIC DATAB
Maeba T, 2012, 2012 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT
MECHATRONICS (AIM), P7, DOI 10.1109/AIM.2012.6265962
Nakamura Y., 2000, J RSJ, V18, P435
Nakano K., 2007, NIPPON GOMU KYOKAISH, V80, P134
Nishiguchi J, 2014, T JSME, V80
Park J. H., 1998, P IEEE INT C ROB AUT, V4, P3528
Sobotka M, 2005, P 16 IFAC WORLD C
Sugihara T., 2003, P 2 INT S AD MOT AN
Tokashiki L. R., 1999, T JAPAN HYDRAULICS P, V30, P110
Ueda Y., 1996, TECHNICAL REPORT LEI, V96, P41
Wieber PB, 2006, P INT C HUM ROB
Wieber PB, 2008, P IEEE RSJ INT C INT
Wu TY, 2010, IEEE INT C INT ROBOT, P4922, DOI 10.1109/IROS.2010.5651490
NR 23
TC 1
Z9 1
U1 0
U2 1
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 1343-0130
EI 1883-8014
J9 J ADV COMPUT INTELL
JI J. Adv. Comput. Intell. Inform.
PD NOV
PY 2016
VL 20
IS 6
BP 974
EP 982
PG 9
WC Computer Science, Artificial Intelligence
SC Computer Science
GA EJ8IE
UT WOS:000393468100013
DA 2018-01-22
ER

PT J
AU Mikamori, M
Eguchi, H
Wada, H
Gotoh, K
Nakamura, M
Miyauchi, K
Umeno, M
Natsume, T
Doki, Y
Mori, M
AF Mikamori, Manabu
Eguchi, Hidetoshi
Wada, Hiroshi
Gotoh, Kunihito
Nakamura, Miki
Miyauchi, Kohei
Umeno, Makoto
Natsume, Tohru
Doki, Yuichiro
Mori, Masaki
TI Highly precise medical experiments by general purpose humanoid robot:
Toward the next generation
SO JOURNAL OF GASTROENTEROLOGY AND HEPATOLOGY
LA English
DT Meeting Abstract
C1 [Mikamori, Manabu; Eguchi, Hidetoshi; Wada, Hiroshi; Gotoh, Kunihito; Doki,
Yuichiro; Mori, Masaki] Osaka Univ, Dept Gastrointestinal Surg, Osaka, Japan.
[Nakamura, Miki; Miyauchi, Kohei; Umeno, Makoto] YASKAWA Elect Corp, Biomed
Business Div, Dept Technol, Kitakyushu, Fukuoka, Japan.
[Natsume, Tohru] Natl Inst Adv Ind Sci & Technol, Mol Profiling Res Ctr Drug
Discovery, Tsukuba, Ibaraki, Japan.
[Natsume, Tohru] Robot Biol Inst Inc, Tokyo, Japan.
NR 0
TC 0
Z9 0
U1 0
U2 1
PU WILEY-BLACKWELL
PI HOBOKEN
PA 111 RIVER ST, HOBOKEN 07030-5774, NJ USA
SN 0815-9319
EI 1440-1746
J9 J GASTROEN HEPATOL
JI J. Gastroenterol. Hepatol.
PD NOV
PY 2016
VL 31
SU 3
MA 1344
BP 435
EP 436
PG 2
WC Gastroenterology & Hepatology
SC Gastroenterology & Hepatology
GA EA6YV
UT WOS:000386776003368
DA 2018-01-22
ER

PT J
AU Jovic, J
Bonnet, V
Fattal, C
Fraisse, P
Coste, CA
AF Jovic, J.
Bonnet, V.
Fattal, C.
Fraisse, P.
Coste, Ch. Azevedo
TI A new 3D center of mass control approach for FES-assisted standing:
First experimental evaluation with a humanoid robot
SO MEDICAL ENGINEERING & PHYSICS
LA English
DT Article
DE Functional Electrical Stimulation (FES); Closed-loop control; Posture;
Transfer assistance
ID FUNCTIONAL NEUROMUSCULAR STIMULATION; SPINAL-CORD-INJURY;
FEEDBACK-CONTROL; JOINT ANGLE; SIMULATION; PARAPLEGIA; DESIGN; BODY
AB This paper proposes a new control framework to restore the coordination between
upper (functional) and lower (paralyzed) limbs in the context of functional
electrical stimulation in completely paraplegic individuals. A kinematic decoupling
between the lower and upper limbs controls the 3D whole-body center of mass
location and the relative foot positions by acting only on the lower-limb joints.
The upper limbs are free to move under voluntary control, and are seen as a
perturbation for the lower limbs. An experimental validation of this paradigm using
a humanoid robot demonstrates the real-time applicability and robustness of the
method. Different scenarios mimicking the motion of a healthy subject are
investigated. The proposed method can maintain bipedal balance and track the
desired center of mass trajectories under movement disturbances of the upper limbs
with an error of less than 0.01 m in all conditions. (C) 2016 IPEM. Published by
Elsevier Ltd. All rights reserved.
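As a generic illustration of placing the center of mass (CoM) by acting on the
lower-limb joints (not the paper's kinematic-decoupling controller), the
following Python sketch tracks a desired CoM position with a damped pseudoinverse
of a CoM Jacobian; the Jacobian, gains, and dimensions are hypothetical
stand-ins.

# Differential-kinematics sketch of CoM tracking via a damped pseudoinverse of
# the CoM Jacobian; the Jacobian function is a made-up stand-in.
import numpy as np

def com_tracking_step(q, com, com_des, com_jacobian, dt=0.01, gain=2.0,
                      damping=1e-4):
    """q: lower-limb joint angles (n,); com, com_des: 3D CoM positions;
    com_jacobian(q): (3, n) Jacobian of the CoM w.r.t. the joints."""
    J = com_jacobian(q)                          # (3, n)
    err = com_des - com                          # desired CoM correction
    # damped least-squares inverse: J^T (J J^T + lambda I)^-1
    JJt = J @ J.T + damping * np.eye(3)
    dq = J.T @ np.linalg.solve(JJt, gain * err)  # joint velocity command
    return q + dt * dq

# toy usage with a made-up constant Jacobian
J_fake = np.array([[0.3, 0.1, 0.0, 0.2],
                   [0.0, 0.2, 0.1, 0.0],
                   [0.1, 0.0, 0.3, 0.1]])
q = com_tracking_step(np.zeros(4), np.zeros(3), np.array([0.02, 0.0, 0.01]),
                      lambda q: J_fake)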
C1 [Jovic, J.; Coste, Ch. Azevedo] LIRMM Montpellier, INRIA French Inst Res Comp
Sci & Automat, Montpellier, France.
[Bonnet, V.] Tokyo Univ Agr & Technol, Tokyo, Japan.
[Fattal, C.] Ctr Reeducat Fonct COS Divio, Dijon, France.
[Fraisse, P.] Univ Montpellier, LIRMM Montpellier Lab Comp Sci Robot &
Microelect, Montpellier, France.
RP Coste, CA (reprint author), LIRMM Montpellier, INRIA French Inst Res Comp Sci &
Automat, Montpellier, France.
EM christine.azevedo@inria.fr
CR Ajoudani A, 2009, IEEE T BIO-MED ENG, V56, P1771, DOI 10.1109/TBME.2009.2017030
Bonnet V, 2014, IEEE EMBS INT C ENG
Bonnet V, 2015, GAIT POSTURE, V41, P70, DOI 10.1016/j.gaitpost.2014.08.017
Bonnet V, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P2525, DOI 10.1109/IROS.2009.5354529
Brissot R, 2000, SPINE, V25, P501, DOI 10.1097/00007632-200002150-00018
Chelius G, 2011, IEEE-ASME T MECH, V16, P878, DOI 10.1109/TMECH.2011.2161324
Azevedo Coste C, 2015, BMS INT IFAC S BIOL
Azevedo Coste C., 2005, J AUTOMATIC CONTRO S, V15, P12
Duarte M, 1999, MOTOR CONTROL, V3, P12, DOI 10.1123/mcj.3.1.12
Dumas R, 2007, J BIOMECH, V40, P543, DOI 10.1016/j.jbiomech.2006.02.013
Esquenazi A, 2012, AM J PHYS MED REHAB, V91, P911, DOI
10.1097/PHM.0b013e318269d9a3
Galdeano D, 2012, SYROCO INT IFAC S RO, P48590
Gollee H, 2004, IEEE T NEUR SYS REH, V12, P73, DOI 10.1109/TNSRE.2003.822765
Guiraud D, 2006, J NEURAL ENG, V3, P268, DOI 10.1088/1741-2560/3/4/003
Hunt K J, 1997, IEEE Trans Rehabil Eng, V5, P331, DOI 10.1109/86.650287
JAEGER RJ, 1986, J BIOMECH, V19, P825, DOI 10.1016/0021-9290(86)90133-8
Jaime RP, 2002, IEEE T NEUR SYS REH, V10, P239, DOI 10.1109/TNSRE.2002.806830
Jovic J, 2012, IEEE ENG MED BIO, P325, DOI 10.1109/EMBC.2012.6345935
Jovic J, 2011, IFES ANN INT FES SOC
Jovic J, 2016, IEEE T ROBOT, V32, P726, DOI 10.1109/TRO.2016.2558190
Jovic J, 2015, NEUROMODULATION, V18, P736, DOI 10.1111/ner.12286
Khalil W., 2004, MODELING IDENTIFICAT
KHANG G, 1989, IEEE T BIO-MED ENG, V36, P885, DOI 10.1109/10.35297
KHANG G, 1989, IEEE T BIO-MED ENG, V36, P873, DOI 10.1109/10.35296
Kim JY, 2006, IEEE T NEUR SYS REH, V14, P46, DOI 10.1109/TNSRE.2006.870489
Kim S, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P2518, DOI 10.1109/IROS.2009.5354271
Kralj AR, 1989, FUNCTIONAL ELECT STI
Bo APL, 2011, IEEE ENG MED BIO, P3479, DOI 10.1109/IEMBS.2011.6090940
Lengagne S, 2011, IEEE T ROBOT, V27, P1095, DOI 10.1109/TRO.2011.2162998
Lynch CL, 2012, IEEE T NEUR SYS REH, V20, P539, DOI 10.1109/TNSRE.2012.2185065
MACIEJEWSKI AA, 1989, INT J ROBOT RES, V8, P63, DOI 10.1177/027836498900800605
Matjacic Z, 1998, IEEE Trans Rehabil Eng, V6, P125, DOI 10.1109/86.681178
Matjacic Z, 1998, IEEE Trans Rehabil Eng, V6, P139, DOI 10.1109/86.681179
Miura K, 2013, IEEE INT C ROB AUT, P671
Montecillo-Puente FJ, 2010, ICINCO 2010: PROCEEDINGS OF THE 7TH INTERNATIONAL
CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, VOL 2, P22
Munih M, 1997, IEEE Trans Rehabil Eng, V5, P341, DOI 10.1109/86.650288
NAKAMURA Y, 1987, INT J ROBOT RES, V6, P3, DOI 10.1177/027836498700600201
Nataraj R, 2012, J REHABIL RES DEV, V49, P279, DOI 10.1682/JRRD.2010.12.0235
Nataraj R, 2010, IEEE T NEUR SYS REH, V18, P646, DOI 10.1109/TNSRE.2010.2083693
Popovic D, 2000, CONTROL MOVEMENT PHY
Qiu S, 2014, IEEE ENG MED BIO, P2561, DOI 10.1109/EMBC.2014.6944145
Rao G, 2006, J BIOMECH, V39, P1531, DOI 10.1016/j.jbiomech.2005.04.014
Salini J, 2010, ADVANCES IN ROBOT KINEMATICS: MOTION IN MAN AND MACHINE, P177,
DOI 10.1007/978-90-481-9262-5_19
Slotine J.-J. E., 1991, 5 INT C ADV ROB 1991, P1211, DOI [DOI
10.1109/ICAR.1991.240390, 10.1109/ICAR.1991.240390]
Soetanto D, 2001, J BIOMECH, V34, P1589, DOI 10.1016/S0021-9290(01)00144-0
Vette AH, 2009, NEUROMODULATION, V12, P22, DOI 10.1111/j.1525-1403.2009.00184.x
NR 46
TC 0
Z9 0
U1 0
U2 5
PU ELSEVIER SCI LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, OXON, ENGLAND
SN 1350-4533
EI 1873-4030
J9 MED ENG PHYS
JI Med. Eng. Phys.
PD NOV
PY 2016
VL 38
IS 11
SI SI
BP 1270
EP 1278
DI 10.1016/j.medengphy.2016.09.002
PG 9
WC Engineering, Biomedical
SC Engineering
GA EB2MZ
UT WOS:000387197800015
PM 27692585
DA 2018-01-22
ER

PT J
AU Chretien, B
Escande, A
Kheddar, A
AF Chretien, Benjamin
Escande, Adrien
Kheddar, Abderrahmane
TI GPU Robot Motion Planning Using Semi-Infinite Nonlinear Programming
SO IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
LA English
DT Article
DE GPGPU; CUDA; nonlinear optimization; motion planning; robotics; parallel
computing; HPC
ID PARALLEL O(LOG(N)) CALCULATION; ARTICULATED-BODY ALGORITHM; DYNAMICS;
OPTIMIZATION; MODEL
AB We propose a many-core GPU implementation of robotic motion planning formulated
as a semi-infinite optimization program. Our approach computes the constraints and
their gradients in parallel, and feeds the result to a nonlinear optimization
solver running on the CPU. To ensure the continuous satisfaction of our
constraints, we use polynomial approximations over time intervals. Because each
constraint and its gradient can be evaluated independently for each time interval,
we end up with a highly parallelizable problem that can take advantage of many-core
architectures. Classic robotic computations (geometry, kinematics, and dynamics)
can also benefit from parallel processors, and we carefully study their
implementation in our context. This results in having a full constraint evaluator
running on the GPU. We present several optimization examples with a humanoid robot.
They reveal substantial improvements in terms of computation performance compared
to a parallel CPU version.
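As a rough illustration of the division of labor described above (not the
authors' GPU implementation), the following Python sketch evaluates one
inequality constraint per time interval, together with its Jacobian, in a single
batched call (NumPy standing in for GPU kernels) and hands the stacked result to
a CPU-side SLSQP solver; the toy bounded-trajectory problem is hypothetical.

# Batched per-interval constraint evaluation feeding a nonlinear solver.
import numpy as np
from scipy.optimize import minimize

T = 20                      # number of time intervals
q_limit = 1.0               # |q_t| <= q_limit on every interval

def objective(q):           # smoothness cost on the discretized trajectory
    d = np.diff(q)
    return 0.5 * np.dot(d, d)

def objective_grad(q):
    g = np.zeros_like(q)
    d = np.diff(q)
    g[:-1] -= d
    g[1:] += d
    return g

def constraints_batch(q):   # one inequality per interval, evaluated in one shot
    return q_limit**2 - q**2              # SLSQP requires fun(x) >= 0

def constraints_jac(q):     # diagonal Jacobian, also computed in one batch
    return np.diag(-2.0 * q)

q0 = np.linspace(-2.0, 2.0, T)            # infeasible initial guess
res = minimize(objective, q0, jac=objective_grad, method="SLSQP",
               constraints={"type": "ineq", "fun": constraints_batch,
                            "jac": constraints_jac})
print(res.success, np.max(np.abs(res.x)))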
C1 [Chretien, Benjamin; Kheddar, Abderrahmane] CNRS UM LIRMM, Interact Digital
Human Grp, Montpellier, France.
[Chretien, Benjamin; Escande, Adrien; Kheddar, Abderrahmane] CNRS AIST JRL, RL
UMI3218, Tsukuba, Ibaraki, Japan.
RP Chretien, B (reprint author), CNRS UM LIRMM, Interact Digital Human Grp,
Montpellier, France.; Chretien, B (reprint author), CNRS AIST JRL, RL UMI3218,
Tsukuba, Ibaraki, Japan.
EM chretien.b@gmail.com; adrien.escande@gmail.com; kheddar@gmail.com
OI Chretien, Benjamin/0000-0001-6104-9295
FU EU FP7 IP RoboHow.Cog; Japan Society for Promotion of Science (JSPS):
Postdoctoral Fellowship [P13786]; [25280096]
FX This work is supported by EU FP7 IP RoboHow.Cog (www.robohow.eu), and by
the Japan Society for Promotion of Science (JSPS): Postdoctoral
Fellowship P13786, and Grant-in-Aid for Scientific Research (B)
25280096. Part of this work was presented (oral presentation only) at
the International Conference on High Performance Scientific Computing
(HPSC), 16-20 March 2015, Hanoi, Vietnam.
CR WACHTER A., 2006, MATH PROGRAM, V106
Bell N., 2008, NVR2008004, P1
Bhalerao KD, 2012, MECH MACH THEORY, V53, P86, DOI
10.1016/j.mechmachtheory.2012.03.001
Chretien B, 2015, IEEE INT C INT ROBOT, P3956, DOI 10.1109/IROS.2015.7353934
EGECIOGLU O, 1990, BIT, V30, P268, DOI 10.1007/BF02017348
Escande A, 2013, ROBOT AUTON SYST, V61, P428, DOI 10.1016/j.robot.2013.01.008
Featherstone R, 1999, INT J ROBOT RES, V18, P876, DOI 10.1177/02783649922066628
Featherstone R, 1999, INT J ROBOT RES, V18, P867, DOI 10.1177/02783649922066619
Guilbert M, 2008, INT J ROBOT RES, V27, P629, DOI 10.1177/0278364908090465
Ha SW, 2013, IEEE T PARALL DISTR, V24, P2324, DOI 10.1109/TPDS.2012.336
Han TD, 2011, IEEE T PARALL DISTR, V22, P78, DOI 10.1109/TPDS.2010.62
Harris Mark, 2007, NVIDIA DEV TECHNOLOG, V2, P4
HEMAMI H, 1982, IEEE T AUTOMAT CONTR, V27, P376, DOI 10.1109/TAC.1982.1102922
Izzo D, 2012, STUD COMPUT INTELL, V415, P151
Kozikowski G, 2013, LECT NOTES COMPUT SC, V7782, P489, DOI 10.1007/978-3-642-
36803-5_37
Lengagne S., 2007, P 7 IEEE RAS INT C H
Lengagne S, 2013, INT J ROBOT RES, V32, P1104, DOI 10.1177/0278364913478990
Miossec S., 2006, P 2006 IEEE INT C RO, P299
Moulard T., 2014, J ROBOT SOC JAPAN, V32, P536
Pan J, 2012, INT J ROBOT RES, V31, P187, DOI 10.1177/0278364911429335
Park C, 2014, INT J HUM ROBOT, V11, DOI 10.1142/S0219843614410011
Ratliff N., 2009, P IEEE INT C ROB AUT, P489, DOI DOI 10.1109/ROBOT.2009.5152817
Rustico E, 2014, IEEE T PARALL DISTR, V25, P43, DOI 10.1109/TPDS.2012.340
Schulman J, 2014, INT J ROBOT RES, V33, P1251, DOI 10.1177/0278364914528132
Smith E, 2012, LECT NOTES COMPUT SC, V7203, P681, DOI 10.1007/978-3-642-31464-
3_69
Tasora A., 2011, MULTIBODY DYNAMICS, V23
Volkov V., 2010, P GPU TECHN C, V10, P10
Werkhoven B. V., 2014, P 14 IEEE ACM INT S, P11
Wong H, 2010, INT SYM PERFORM ANAL, P235, DOI 10.1109/ISPASS.2010.5452013
Yamane K, 2006, IEEE-RAS INT C HUMAN, P554, DOI 10.1109/ICHR.2006.321328
Yamane K, 2009, INT J ROBOT RES, V28, P622, DOI 10.1177/0278364909102350
Zhang JJ, 1998, IEEE T SYST MAN CY C, V28, P467, DOI 10.1109/5326.704590
NR 32
TC 3
Z9 3
U1 1
U2 4
PU IEEE COMPUTER SOC
PI LOS ALAMITOS
PA 10662 LOS VAQUEROS CIRCLE, PO BOX 3014, LOS ALAMITOS, CA 90720-1314 USA
SN 1045-9219
EI 1558-2183
J9 IEEE T PARALL DISTR
JI IEEE Trans. Parallel Distrib. Syst.
PD OCT 1
PY 2016
VL 27
IS 10
BP 2926
EP 2939
DI 10.1109/TPDS.2016.2521373
PG 14
WC Computer Science, Theory & Methods; Engineering, Electrical & Electronic
SC Computer Science; Engineering
GA DX2YI
UT WOS:000384239300011
DA 2018-01-22
ER

PT J
AU Alimardani, M
Nishio, S
Ishiguro, H
AF Alimardani, Maryam
Nishio, Shuichi
Ishiguro, Hiroshi
TI The Importance of Visual Feedback Design in BCIs; from Embodiment to
Motor Imagery Learning
SO PLOS ONE
LA English
DT Article
ID BRAIN-COMPUTER INTERFACE; MIRROR-NEURON SYSTEM; MU-RHYTHM; HUMANOID
ROBOT; SSVEP BCI; EEG; CLASSIFICATION; PROSTHESIS; OWNERSHIP; MOVEMENT
AB Brain computer interfaces (BCIs) have been developed and implemented in many
areas as a new communication channel between the human brain and external devices.
Despite their rapid growth and broad popularity, inaccurate performance and the
cost of user training are still the main issues that prevent their application
outside research and clinical environments. We previously introduced a BCI system for
the control of a very humanlike android that could raise a sense of embodiment and
agency in the operators only by imagining a movement (motor imagery) and watching
the robot perform it. Also using the same setup, we further discovered that the
positive bias of subjects' performance both increased their sensation of embodiment
and improved their motor imagery skills in a short period. In this work, we studied
the shared mechanism between the experience of embodiment and motor imagery. We
compared the trend of motor imagery learning when two groups of subjects
BCI-operated different-looking robots: a very humanlike android's hands and a
pair of metallic grippers. Although our experiments did not show a significant
difference in learning between the two groups within a single session, the
android group revealed better motor imagery skills in the follow-up session, when
both groups repeated the task using the non-humanlike gripper.
imagery skills learnt during the BCI-operation of humanlike hands are more robust
to time and visual feedback changes. We discuss the role of embodiment and mirror
neuron system in such outcome and propose the application of androids for efficient
BCI training.
C1 [Alimardani, Maryam] Univ Tokyo, Grad Sch Arts & Sci, Dept Gen Syst Studies,
Tokyo, Japan.
[Nishio, Shuichi; Ishiguro, Hiroshi] Adv Telecommun Res Inst Int ATR, Kyoto,
Japan.
[Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn Sci, Dept Syst Innovat, Osaka,
Japan.
RP Alimardani, M (reprint author), Univ Tokyo, Grad Sch Arts & Sci, Dept Gen Syst
Studies, Tokyo, Japan.
EM maryam@ardbeg.c.u-tokyo.ac.jp
FU KAKENHI [25220004, 26540109, 15F15046]; ImPACT Program of Council for
Science, Technology and Innovation (Cabinet Office, Government of Japan)
FX This research was supported by KAKENHI
(https://www.jsps.go.jp/english/e-grants/) (Grant-in-Aid for Scientific
Research) grant number 25220004 to HI, 26540109 to SN and 15F15046 to
MA. This research was also supported by ImPACT Program of Council for
Science, Technology and Innovation (Cabinet Office, Government of Japan)
to SN. The funders had no role in study design, data collection and
analysis, decision to publish, or preparation of the manuscript.; This
research was supported by Grants-in-Aid for Scientific Research
25220004, 26540109 and 15F15046 and also by ImPACT Program of Council
for Science, Technology and Innovation (Cabinet Office, Government of
Japan).
CR Abbott A, 2006, NATURE, V442, P125, DOI 10.1038/442125a
Adams JA, 1975, P HUM FACT ERG SOC A, V19, P162
Alimardani M, 2013, FRONTIERS SYSTEMS NE, V8, P52
Alimardani M, 2014, BIOM ROB BIOM 2014 5, P403
Alimardani M, 2013, SCI REPORTS, P3
Anatole L, 2008, COMPUTER, V1, P66
ANNETT J, 1995, NEUROPSYCHOLOGIA, V33, P1395, DOI 10.1016/0028-3932(95)00072-B
Barbero A, 2010, J NEUROENG REHABIL, V7, DOI 10.1186/1743-0003-7-34
Botvinick M, 1998, NATURE, V391, P756, DOI 10.1038/35784
Cecotti H, 2011, J PHYSIOL-PARIS, V105, P106, DOI
10.1016/j.jphysparis.2011.08.003
Cohen O, 2014, PRESENCE-TELEOP VIRT, V23, P229, DOI 10.1162/PRES_a_00191
Cohen O, 2014, J NEURAL ENG, V11, DOI 10.1088/1741-2560/11/3/035006
Curran EA, 2003, BRAIN COGNITION, P51
Dal Seno B, 2010, COMPUT INTEL NEUROSC, V2010, P1
de Vries S, 2007, J REHABIL MED, V39, P5, DOI 10.2340/16501977-0020
Decety J, 1996, BEHAV BRAIN RES, V77, P45, DOI 10.1016/0166-4328(95)00225-1
Ehrsson HH, 2003, J NEUROPHYSIOL, P90
Evans N, 2013, NEUROIMAGE, V64, P216, DOI 10.1016/j.neuroimage.2012.09.027
Fabiani GE, 2004, IEEE T NEUR SYS REH, V12, P331, DOI 10.1109/TNSRE.2004.834627
FLEISHMAN EA, 1963, J EXP PSYCHOL, V66, P6, DOI 10.1037/h0046677
Fukumura K, 2007, INT J NEUROSCI, V117, P1039, DOI 10.1080/00207450600936841
Gazzola V, 2007, NEUROIMAGE, V35, P1674, DOI 10.1016/j.neuroimage.2007.02.003
Gomez-Rodriguez M, 2011, J NEURAL ENG, V8, DOI 10.1088/1741-2560/8/3/036005
Gonzalez-Franco M, 2011, IEEE ENG MED BIO, P6323, DOI 10.1109/IEMBS.2011.6091560
Guger C, 2000, IEEE T REHABIL ENG, V8, P447, DOI 10.1109/86.895947
Guger C, 1999, P AAAT 5 EUR C ADV A, P3
Guger C, 2009, NEUROSCI LETT, V462, P94, DOI 10.1016/j.neulet.2009.06.045
Horki P, 2011, MED BIOL ENG COMPUT, V49, P567, DOI 10.1007/s11517-011-0750-2
Jackson A, 2006, IEEE T NEUR SYS REH, V14, P187, DOI 10.1109/TNSRE.2006.875547
Jeannerod M, 1995, CURR OPIN NEUROBIOL, V5, P727, DOI 10.1016/0959-
4388(95)80099-9
JEANNEROD M, 1995, NEUROPSYCHOLOGIA, V33, P1419, DOI 10.1016/0028-3932(95)00073-
C
Jin J, 2015, INT J NEURAL SYST, V25, DOI 10.1142/S0129065715500112
Kaiser V, 2011, FRONT NEUROSCI-SWITZ, V5, DOI 10.3389/fnins.2011.00086
Kauhanen L, 2006, P 3 INT BRAIN COMP I
Kishore S, 2014, PRESENCE-TELEOP VIRT, V23, P242, DOI 10.1162/PRES_a_00192
Lacourse MG, 2004, J REHABIL RES DEV, V41, P505, DOI 10.1682/JRRD.2004.04.0505
Leeb R, 2007, COMPUTATIONAL INTELL
Leonardis D, 2014, PRESENCE-TELEOP VIRT, V23, P253, DOI 10.1162/PRES_a_00190
Lorey B, 2009, EXP BRAIN RES, V194, P233, DOI 10.1007/s00221-008-1693-1
Lotte F, FLAWS CURRENT HUMAN
Lotte F., 2012, PRACTICAL BRAIN COMP, P197, DOI DOI 10.1007/978-3-642-29746-5_10
Lotze M, 1999, J COGNITIVE NEUROSCI, V11, P491, DOI 10.1162/089892999563553
Mercier C, 2008, CEREB CORTEX, V18, P272, DOI 10.1093/cercor/bhm052
Miller KJ, 2010, P NATL ACAD SCI USA, V107, P4430, DOI 10.1073/pnas.0913697107
Muller-Putz GR, 2008, IEEE T BIO-MED ENG, V55, P361, DOI
10.1109/TBME.2007.897815
Munzert J, 2009, BRAIN RES REV, V60, P306, DOI 10.1016/j.brainresrev.2008.12.024
Muthukumaraswamy SD, 2004, COGNITIVE BRAIN RES, V19, P195, DOI
10.1016/j.cogbrainres.2003.12.001
Muthukumaraswamy SD, 2004, PSYCHOPHYSIOLOGY, V41, P152, DOI 10.1046/j.1469-
8986.2003.00129.x
Neumann N, 2003, IEEE T NEUR SYS REH, V11, P169, DOI 10.1109/TNSRE.2003.814431
Neuper C, 1999, J CLIN NEUROPHYSIOL, V16, P373, DOI 10.1097/00004691-199907000-
00010
Neuper C, 2005, COGNITIVE BRAIN RES, V25, P668, DOI
10.1016/j.cogbrainres.2005.08.014
Neuper C, 2006, PROG BRAIN RES, V159, P393, DOI 10.1016/S0079-6123(06)59025-9
Neuper C, 2009, CLIN NEUROPHYSIOL, V120, P239, DOI 10.1016/j.clinph.2008.11.015
Oberman LM, 2007, NEUROCOMPUTING, V70, P2194, DOI 10.1016/j.neucom.2006.02.024
Ortner R, 2011, IEEE T NEUR SYS REH, V19, P1, DOI 10.1109/TNSRE.2010.2076364
Perez-Marcos D, 2009, NEUROREPORT, V20, P589, DOI 10.1097/WNR.0b013e32832a0a2a
Petit D, 2015, IEEE INT CONF ROBOT, P2882, DOI 10.1109/ICRA.2015.7139592
Pfurtscheller G, 2006, BRAIN RES, V1071, P145, DOI
10.1016/j.brainres.2005.11.083
Pfurtscheller G, 2006, NEUROIMAGE, V31, P153, DOI
10.1016/j.neuroimage.2005.12.003
Pfurtscheller G, 2001, P IEEE, V89, P1123, DOI 10.1109/5.939829
Pfurtscheller G, 2006, PROG BRAIN RES, V159, P433, DOI 10.1016/S0079-
6123(06)59028-4
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Schuster C, 2011, BMC MED, V9, DOI 10.1186/1741-7015-9-75
Spence C, 2015, CLIN SYSTEMS NEUROSC, P151
Thobbi A., 2010, INT J ARITF INTELL M, V10, P41
Townsend G, 2004, IEEE T NEUR SYS REH, V12, P258, DOI 10.1109/TNSRE.2004.827220
Tsakiris M, 2010, EXP BRAIN RES, V204, P343, DOI 10.1007/s00221-009-2039-3
Yin EW, 2015, IEEE T BIO-MED ENG, V62, P1447, DOI 10.1109/TBME.2014.2320948
Yin EW, 2013, J NEURAL ENG, V10, DOI 10.1088/1741-2560/10/2/026012
NR 69
TC 1
Z9 1
U1 5
U2 18
PU PUBLIC LIBRARY SCIENCE
PI SAN FRANCISCO
PA 1160 BATTERY STREET, STE 100, SAN FRANCISCO, CA 94111 USA
SN 1932-6203
J9 PLOS ONE
JI PLoS One
PD SEP 6
PY 2016
VL 11
IS 9
AR e0161945
DI 10.1371/journal.pone.0161945
PG 17
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA DV9IT
UT WOS:000383254800027
PM 27598310
OA gold
DA 2018-01-22
ER

PT J
AU Shimoyama, R
Fukuda, R
AF Shimoyama, Ryuichi
Fukuda, Reo
TI Room Volume Estimation Based on Ambiguity of Short-Term Interaural Phase
Differences Using Humanoid Robot Head
SO ROBOTICS
LA English
DT Article
DE room volume; estimation; short-term IPD; biologically inspired; binaural
audition; humanoid robot; PACS; J0101
ID IMPULSE-RESPONSE; SOUND SOURCES; COHERENCE
AB Humans can recognize approximate room size using only binaural audition.
However, sound reverberation is not negligible in most environments. The
reverberation causes temporal fluctuations in the short-term interaural phase
differences (IPDs) of sound pressure. This study proposes a novel method for a
binaural humanoid robot head to estimate room volume. The method is based on the
statistical properties of the short-term IPDs of sound pressure. The humanoid robot
turns its head toward a sound source, recognizes the sound source, and then
estimates the ego-centric distance by its stereovision. By interpolating the
relations between room volume, average standard deviation, and ego-centric distance
experimentally obtained for various rooms in a prepared database, the room volume
was estimated by the binaural audition of the robot from the average standard
deviation of the short-term IPDs at the estimated distance.
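As a rough illustration of the statistic described above (not the authors' code),
the following Python sketch computes short-term interaural phase differences
(IPDs) from a two-microphone recording via an STFT, summarizes them by their
standard deviation over frames averaged across frequency bins, and looks up a
room volume in a previously measured database by nearest neighbour as a simple
stand-in for the paper's interpolation; all names, frame sizes, and database
entries are hypothetical.

# Short-term IPD standard deviation and a nearest-neighbour volume lookup.
import numpy as np

def ipd_std(left, right, fs, frame=1024, hop=512):
    """left, right: NumPy arrays with the two microphone signals at rate fs."""
    win = np.hanning(frame)
    n_frames = 1 + (len(left) - frame) // hop
    ipds = []
    for i in range(n_frames):
        seg = slice(i * hop, i * hop + frame)
        L = np.fft.rfft(win * left[seg])
        R = np.fft.rfft(win * right[seg])
        ipds.append(np.angle(L * np.conj(R)))    # per-bin interaural phase difference
    ipds = np.unwrap(np.asarray(ipds), axis=0)   # unwrap over time, per frequency bin
    return float(np.mean(np.std(ipds, axis=0)))  # average std over frequency bins

def estimate_volume(avg_std, distance, database):
    """database: list of (avg_std, distance, room_volume) tuples measured beforehand."""
    d = [(s - avg_std) ** 2 + (r - distance) ** 2 for s, r, _ in database]
    return database[int(np.argmin(d))][2]        # nearest entry's room volume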
C1 [Shimoyama, Ryuichi] Nihon Univ, Coll Ind Technol, Tokyo 2758575, Japan.
[Fukuda, Reo] Canon Elect Inc, Syst Res Div, Image Informat Syst Lab, Tokyo
1050011, Japan.
RP Shimoyama, R (reprint author), Nihon Univ, Coll Ind Technol, Tokyo 2758575,
Japan.
EM shimoyama.ryuichi@nihon-u.ac.jp; fukuda.reo@canon-elec.co.jp
CR Asano F, 2013, IEEE T AUDIO SPEECH, V21, P1953, DOI 10.1109/TASL.2013.2263140
Bronkhorst AW, 1999, NATURE, V397, P517, DOI 10.1038/17374
EATON J, 2015, INT CONF ACOUST SPEE, P46
Georganti E., 2014, P 2014 IEEE INT C AC, P42
Georganti E, 2013, IEEE T AUDIO SPEECH, V21, P1727, DOI
10.1109/TASL.2013.2260155
Hartmann WM, 2005, ACTA ACUST UNITED AC, V91, P451
Hioka Y., 2016, P 2016 IEEE INT C AC, P149
Hu JS, 2009, IEEE T AUDIO SPEECH, V17, P682, DOI 10.1109/TASL.2008.2011528
JETZT JJ, 1979, J ACOUST SOC AM, V65, P1204, DOI 10.1121/1.382786
Jeub M., 2009, P INT C DIG SIGN PRO, P1
Kearney G., 2009, P ISSC, V2009, P1
Kido K., 1998, J ACOUST SOC JPN E, V19, P249
Kuster M, 2008, J ACOUST SOC AM, V124, P982, DOI 10.1121/1.2940585
Kuster M, 2011, J ACOUST SOC AM, V130, P3781, DOI 10.1121/1.3658446
Larsen E., 2004, P SIGN SYST COMP, V1, P725
Nakamura K, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P664, DOI 10.1109/IROS.2009.5354419
Nix J, 2006, J ACOUST SOC AM, V119, P463, DOI 10.1121/1.2139619
Shabtai N.R., 2009, P 2009 IEEE WORKSH S, P717
Shabtai N.R., 2009, P 2009 IEEE WORKSH A
Shimoyama Ryuichi, 2009, Acoustical Science and Technology, V30, P199, DOI
10.1250/ast.30.1991
Shimoyama R., 2014, P 23 INT C ROB ALP A, P1
Shinn-Cunningham BG, 2005, J ACOUST SOC AM, V117, P3100, DOI 10.1121/1.1872572
Vesa S, 2009, IEEE T AUDIO SPEECH, V17, P1498, DOI 10.1109/TASL.2009.2022001
NR 23
TC 0
Z9 0
U1 1
U2 4
PU MDPI AG
PI BASEL
PA ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND
SN 2218-6581
J9 ROBOTICS
JI Robotics
PD SEP
PY 2016
VL 5
IS 3
AR 16
DI 10.3390/robotics5030016
PG 15
WC Robotics
SC Robotics
GA DT9ZN
UT WOS:000381861100005
OA gold
DA 2018-01-22
ER

PT J
AU Ogata, K
Kawamura, T
Ono, E
Nakayama, T
Matsuhira, N
AF Ogata, Kunihiro
Kawamura, Tomoya
Ono, Eiichi
Nakayama, Tsuyoshi
Matsuhira, Nobuto
TI Upper Body of Dummy Humanoid Robot with Exterior Deformation Mechanism
for Evaluation of Assistive Products and Technologies
SO JOURNAL OF ROBOTICS AND MECHATRONICS
LA English
DT Article
DE humanoid robots; dummy robots; spinal cord injury; dressing
ID PHYSICAL-ACTIVITY
AB People suffering from tetraplegia are unable to perform many activities of daily
living, such as dressing and toileting. The purpose of this study is to develop a
dummy humanoid robot to assess assistive products and technologies for patients of
tetraplegia. This paper describes the mechanism, motion planning, and sensing
system of the dummy robot. The proposed dummy robot has upper-arm mechanisms that
simulate the human collarbones based on a functional anatomy. To realize a variety
of body shapes, the proposed robot has deformation mechanisms that use linear
actuators and rotating servo-motors. The sensing system of the dummy robot can
measure clothing pressure before and after exterior deformation, and can hence
detect changes in that pressure.
C1 [Ogata, Kunihiro; Ono, Eiichi; Nakayama, Tsuyoshi] Res Inst, Natl Rehabil Ctr
Persons Disabil, 4-1 Namiki, Tokorozawa, Saitama 3598555, Japan.
[Kawamura, Tomoya; Matsuhira, Nobuto] Shibaura Inst Technol, Coll Engn, Dept
Engn Sci & Mech, Koto Ku, 3-7-5 Toyosu, Tokyo 1358548, Japan.
RP Ogata, K (reprint author), Res Inst, Natl Rehabil Ctr Persons Disabil, 4-1
Namiki, Tokorozawa, Saitama 3598555, Japan.
EM ogata-kunihiro@rehab.go.jp; ono-eiichi@rehab.go.jp;
nakayama-tsuyoshi@rehab.go.jp; matsuhir@shibaura-it.ac.jp
FU JSPS KAKENHI [25282181]
FX This work was supported by JSPS KAKENHI Grant Number 25282181.
CR Abels A., 2013, J AUTOMATION CONTROL, V1, P132
Buchholz AC, 2003, OBES RES, V11, P563, DOI 10.1038/oby.2003.79
Chen D., 2004, CLOTHING RES J, V2, P61
GOSNELL DJ, 1987, NURS CLIN N AM, V22, P399
Hayashi M, 2008, DEV HUMANOID DIS TRI, DOI [10.5772/6629, DOI 10.5772/6629]
Hetz SP, 2009, SPINAL CORD, V47, P550, DOI 10.1038/sc.2008.160
Ikemoto S, 2012, IEEE INT C INT ROBOT, P4892, DOI 10.1109/IROS.2012.6385950
Imamura Y, 2014, 2014 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION
(SII), P761, DOI 10.1109/SII.2014.7028134
Ishii H., 2013, T JAPANESE SOC MED B, V51, pM
Jongjit Jithathai, 2004, Southeast Asian Journal of Tropical Medicine and Public
Health, V35, P980
Kapandji I. A., 2005, ANATOMIE FONCTIIONNE
Li HY, 2009, PROCEEDINGS OF THE FIBER SOCIETY 2009 SPRING CONFERENCE, VOLS I AND
II, P1089
Maggiali M., 2008, MECHATRONICS 2008, P1
Maynard FM, 1997, SPINAL CORD, V35, P266
Ohmura Y., 2007, IEEE RSJ INT C INT R, P1136
Ohmura Y, 2006, IEEE INT CONF ROBOT, P1348, DOI 10.1109/ROBOT.2006.1641896
Sugihara Tomomichi, 2009, 2009 9th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2009), P555, DOI 10.1109/ICHR.2009.5379515
Tanaka M, 2010, CMES-COMP MODEL ENG, V62, P265
NR 18
TC 0
Z9 0
U1 0
U2 0
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 0915-3942
EI 1883-8049
J9 J ROBOT MECHATRON
JI J. Robot. Mechatron.
PD AUG
PY 2016
VL 28
IS 4
BP 600
EP 608
DI 10.20965/jrm.2016.p0600
PG 9
WC Robotics
SC Robotics
GA EJ8DU
UT WOS:000393456000018
DA 2018-01-22
ER

PT J
AU Kulic, D
Venture, G
Yamane, K
Demircan, E
Mizuuchi, I
Mombaur, K
AF Kulic, Dana
Venture, Gentiane
Yamane, Katsu
Demircan, Emel
Mizuuchi, Ikuo
Mombaur, Katja
TI Anthropomorphic Movement Analysis and Synthesis: A Survey of Methods and
Applications
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Anthropomorphic modeling; control; dynamics; kinematics; movement
science
ID DARPA ROBOTICS CHALLENGE; PASSIVE-DYNAMIC WALKING; SEGMENT INERTIAL
PARAMETERS; HUMAN JOINT MOTION; LOCOMOTION CONTROL; PART 1;
MUSCULOSKELETAL SYSTEM; REHABILITATION SYSTEM; OPTIMALITY CONDITIONS;
MUSCLE CONTRIBUTIONS
AB The anthropomorphic body form is a complex articulated system of links/limbs and
joints, simultaneously redundant and underactuated, and capable of a wide range of
sophisticated movement. The human body and its movement have long been a topic of
study in physiology, anatomy, biomechanics, and neuroscience and have served as
inspiration for humanoid robot design and control. This survey paper reviews the
literature on robotics research using anthropomorphic design principles as an
inspiration, at both the design and control levels. Next, anthropomorphic body
modeling, motion analysis, and synthesis techniques are overviewed. Finally, key
applications arising at the intersection of robotics and human movement science are
introduced. The survey ends with a discussion of open research questions and
directions for future work.
C1 [Kulic, Dana] Univ Waterloo, Waterloo, ON N2L 3G1, Canada.
[Venture, Gentiane; Mizuuchi, Ikuo] Tokyo Univ Agr & Technol, Tokyo 1830057,
Japan.
[Yamane, Katsu] Disney Res, Pittsburgh, PA 15213 USA.
[Demircan, Emel] Calif State Univ Long Beach, Long Beach, CA 90840 USA.
[Mombaur, Katja] Heidelberg Univ, D-69117 Heidelberg, Germany.
RP Kulic, D (reprint author), Univ Waterloo, Waterloo, ON N2L 3G1, Canada.
EM dana.kulic@uwaterloo.ca; venture@cc.tuat.ac.jp;
kyamane@disneyresearch.com; emel.demircan@csulb.edu;
mizuuchi@cc.tuat.ac.jp; katja.mombaur@iwr.uni-heidelberg.de
RI Venture, Gentiane/E-7060-2013; Mizuuchi, Ikuo/B-9946-2013
OI Venture, Gentiane/0000-0001-7767-4765; Kulic, Dana/0000-0002-4169-2141
CR Ackermann M., 2009, MULTIBODY DYNAMICS, P1
Aghasadeghi N, 2011, IEEE INT C INT ROBOT, P1561, DOI 10.1109/IROS.2011.6048804
Albrecht S., 2010, HAPTICS GENERATING P
Albu-Schaffer A, 2008, IEEE ROBOT AUTOM MAG, V15, P20, DOI
10.1109/MRA.2008.927979
Alessandro C., 2016, EMERGING THERAPIES N, VII, P225
Alessandro C, 2013, FRONT COMPUT NEUROSC, V7, DOI 10.3389/fncom.2013.00043
ALEXANDER RM, 1984, INT J ROBOT RES, V3, P49, DOI 10.1177/027836498400300205
ALEXANDER RM, 1977, NATURE, V265, P114, DOI 10.1038/265114a0
Alissandrakis A, 2002, IEEE T SYST MAN CY A, V32, P482, DOI
10.1109/TSMCA.2002.804820
Ananthanarayan S., 2014, P INT C PERV COMP TE, P101
Andrikopoulos G., 2011, P MED C CONTR AUT, P1439
Aoi S, 2006, IEEE T ROBOT, V22, P391, DOI 10.1109/TRO.2006.870671
Aoi S, 2005, AUTON ROBOT, V19, P219, DOI 10.1007/s10514-005-4051-1
Arikan O, 2006, ACM T GRAPHIC, V25, P890, DOI 10.1145/1141911.1141971
Arnold EM, 2010, ANN BIOMED ENG, V38, P269, DOI 10.1007/s10439-009-9852-5
Ascher U. M., 1998, NUMERICAL SOLUTION B
Atkeson CG, 2015, IEEE-RAS INT C HUMAN, P623, DOI 10.1109/HUMANOIDS.2015.7363436
ATKESON CG, 1986, INT J ROBOT RES, V5, P101, DOI 10.1177/027836498600500306
Audren H, 2014, IEEE INT C INT ROBOT, P4030, DOI 10.1109/IROS.2014.6943129
Ayoade M., 2013, P 14 IFIP TC 13 INT, P1
Ayusawa K., 2014, P 14 IEEE RAS INT C, P205
Ayusawa K, 2014, INT J ROBOT RES, V33, P446, DOI 10.1177/0278364913495932
Ayusawa K, 2014, MECH MACH THEORY, V74, P274, DOI
10.1016/j.mechmachtheory.2013.12.015
Baddoura R, 2014, FRONT NEUROROBOTICS, V8, DOI 10.3389/fnbot.2014.00012
Baerlocher P., 2001, THESIS
Bastian AJ, 2008, CURR OPIN NEUROL, V21, P628, DOI 10.1097/WCO.0b013e328315a293
Bauby CE, 2000, J BIOMECH, V33, P1433, DOI 10.1016/S0021-9290(00)00101-9
Bellmann M, 2012, ARCH PHYS MED REHAB, V93, P541, DOI 10.1016/j.apmr.2011.10.017
Bertsekas D., 2005, DYNAMIC PROGRAMMING, VI
Bessonnet G, 2004, INT J ROBOT RES, V23, P1059, DOI 10.1177/0278364904047393
Billard A., 2008, HDB ROBOTICS, P1371
Bock H., 1984, P 9 IFAC WORLD C BUD, P242
Bonnet V., 2016, P IEEE RAS EMBS INT, P952
Bonnet V, 2015, IEEE T NEUR SYS REH, V23, P628, DOI 10.1109/TNSRE.2015.2405087
Buschmann T, 2015, BIOINSPIR BIOMIM, V10, DOI 10.1088/1748-3190/10/4/041001
Buss M., 2003, P INT C HUM ROB, P2491
Calinon S, 2007, IMITATION AND SOCIAL LEARNING IN ROBOTS, HUMANS AND ANIMALS:
BEHAVIOURAL, SOCIAL AND COMMUNICATIVE DIMENSIONS, P153, DOI
10.1017/CBO9780511489808.012
Callegaro A., 2014, TRENDS AUGMENTATION, V2, P265
Chleboun GS, 2001, CELLS TISSUES ORGANS, V169, P401, DOI 10.1159/000047908
Christophy M, 2012, BIOMECH MODEL MECHAN, V11, P19, DOI 10.1007/s10237-011-0290-
6
Clever D., 2016, P IEEE RAS EMBS INT, P605
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Collins SH, 2001, INT J ROBOT RES, V20, P607, DOI 10.1177/02783640122067561
Coros S, 2010, ACM T GRAPHIC, V29, DOI 10.1145/1778765.1781156
da Silva M, 2008, ACM T GRAPHIC, V27, DOI 10.1145/1360612.1360681
Damsgaard M, 2006, SIMUL MODEL PRACT TH, V14, P1100, DOI
10.1016/j.simpat.2006.09.001
Silva M. Da, 2009, ACM T GRAPHIC, V28
De Groote F, 2008, J BIOMECH, V41, P3390, DOI 10.1016/j.jbiomech.2008.09.035
de Leva P, 1996, J BIOMECH, V29, P1223, DOI 10.1016/0021-9290(95)00178-6
de Rugy A, 2013, FRONT COMPUT NEUROSC, V7, DOI 10.3389/fncom.2013.00019
Delp SL, 2007, IEEE T BIO-MED ENG, V54, P1940, DOI 10.1109/TBME.2007.901024
Delp SL, 2000, COMPUT SCI ENG, V2, P46, DOI 10.1109/5992.877394
Demircan E., 2009, P C INT SOC BIOM
Demircan E, 2010, ADVANCES IN ROBOT KINEMATICS: MOTION IN MAN AND MACHINE, P283,
DOI 10.1007/978-90-481-9262-5_30
Dempe S, 2007, J GLOBAL OPTIM, V39, P529, DOI 10.1007/s10898-007-9154-0
Derr A., 2015, ROB SCI SYST C ROM I
Diehl M, 2009, LECT NOTES CONTR INF, V384, P391
Ding Kai, 2015, P 14 ACM SIGGRAPH EU, P83
DOSTAL WF, 1981, J BIOMECH, V14, P803, DOI 10.1016/0021-9290(81)90036-1
Dragan AD, 2013, ACMIEEE INT CONF HUM, P301, DOI 10.1109/HRI.2013.6483603
DRILLIS R, 1964, Artif Limbs, V8, P44
Dumas R, 2007, J BIOMECH, V40, P543, DOI 10.1016/j.jbiomech.2006.02.013
Durkin JL, 2003, J BIOMECH ENG-T ASME, V125, P515, DOI 10.1115/1.1590359
Nguyen-Tuong D, 2011, COGN PROCESS, V12, P319, DOI 10.1007/s10339-011-0404-1
Eilenberg MF, 2010, IEEE T NEUR SYS REH, V18, P164, DOI
10.1109/TNSRE.2009.2039620
Faraji S, 2014, IEEE INT CONF ROBOT, P1943, DOI 10.1109/ICRA.2014.6907116
Farris DJ, 2013, J APPL PHYSIOL, V115, P579, DOI 10.1152/japplphysiol.00253.2013
FEL'DMAN A. G., 1966, BIOFIZIKA, V11, P498
Felis M. L., 2016, P IEEE INT C ROB AUT, P1560
Felis M. L., 2015, THESIS
FLASH T, 1985, J NEUROSCI, V5, P1688
Freund Y, 1997, J COMPUT SYST SCI, V55, P119, DOI 10.1006/jcss.1997.1504
Garner BA, 2003, ANN BIOMED ENG, V31, P207, DOI 10.1114/1.1540105
Garner B., 2000, COMPUT METHOD BIOMEC, V3, P1, DOI DOI 10.1080/1025584000
Geijtenbeek T, 2013, ACM T GRAPHIC, V32, DOI 10.1145/2508363.2508399
Gerstner W, 2014, NEURONAL DYNAMICS: FROM SINGLE NEURONS TO NETWORKS AND MODELS
OF COGNITION, P1, DOI 10.1017/CBO9781107447615
Geyer H, 2010, IEEE T NEUR SYS REH, V18, P263, DOI 10.1109/TNSRE.2010.2047592
Guenter F., 2007, P IEEE RSJ INT C INT, P1022
Guertin Pierre A, 2012, Front Neurol, V3, P183, DOI 10.3389/fneur.2012.00183
Guertin PA, 2009, BRAIN RES REV, V62, P45, DOI 10.1016/j.brainresrev.2009.08.002
Haddadin S, 2009, ROBOT AUTON SYST, V57, P761, DOI 10.1016/j.robot.2009.03.004
Ham R. V., 2007, ROBOT AUTON SYST, V55, P761
Hamalainen Perttu, 2015, ACM T GRAPHIC, V34
Hamalainen P., 2014, ACM T GRAPHIC, V33
Hamner SR, 2010, J BIOMECH, V43, P2709, DOI 10.1016/j.jbiomech.2010.06.025
Han JG, 2013, IEEE T CYBERNETICS, V43, P1318, DOI 10.1109/TCYB.2013.2265378
Harada K., 2010, MOTION PLANNING HUMA
Hardt M., 1999, Proceedings of the 38th IEEE Conference on Decision and Control
(Cat. No.99CH36304), P2999, DOI 10.1109/CDC.1999.831393
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Hatz K., 2014, THESIS
Hatz K, 2012, SIAM J SCI COMPUT, V34, pA1707, DOI 10.1137/110823390
Hauser H, 2011, BIOL CYBERN, V104, P235, DOI 10.1007/s00422-011-0430-1
Henze B, 2014, IEEE INT C INT ROBOT, P3253, DOI 10.1109/IROS.2014.6943014
Heuberger C, 2004, J COMB OPTIM, V8, P329, DOI
10.1023/B:JOCO.0000038914.26975.9b
Hicks J, 2007, GAIT POSTURE, V26, P546, DOI 10.1016/j.gaitpost.2006.12.003
Hill AV, 1938, PROC R SOC SER B-BIO, V126, P136, DOI 10.1098/rspb.1938.0050
HIROSE S, 1989, PROCEEDINGS - 1989 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOL 1-3, P1610, DOI 10.1109/ROBOT.1989.100208
Hoang KLH, 2015, J BIOMECH, V48, P3732, DOI 10.1016/j.jbiomech.2015.08.018
Hodgins J. K., 1998, Robotics Research. Eighth International Symposium, P356
HOGAN N, 1984, IEEE T AUTOMAT CONTR, V29, P681, DOI 10.1109/TAC.1984.1103644
Hogan N., 1985, ASME, V107, P1, DOI DOI 10.1115/1.3140702
Hondo T, 2015, IEEE INT CONF ROBOT, P756, DOI 10.1109/ICRA.2015.7139263
Hosoda K, 2012, ADV ROBOTICS, V26, P729, DOI 10.1163/156855312X625371
Houmanfar R., 2014, IEEE SYST J IN PRESS
Houska B, 2011, OPTIM CONTR APPL MET, V32, P298, DOI 10.1002/oca.939
HOY MG, 1990, J BIOMECH, V23, P157, DOI 10.1016/0021-9290(90)90349-8
Huang P, 2015, ACM T GRAPHIC, V34, DOI 10.1145/2699643
Ijspeert AJ, 2008, NEURAL NETWORKS, V21, P642, DOI 10.1016/j.neunet.2008.03.014
Ijspeert AJ, 2013, NEURAL COMPUT, V25, P328, DOI 10.1162/NECO_a_00393
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Iwata H., 2009, P IEEE INT C ROB AUT, P580
JENSEN RK, 1978, J BIOMECH, V11, P349, DOI 10.1016/0021-9290(78)90069-6
Johnson M, 2015, J FIELD ROBOT, V32, P192, DOI 10.1002/rob.21571
Kadone H., 2008, CURR BIOL, V18, P329
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
KAJITA S, 1995, IEEE INT CONF ROBOT, P2885, DOI 10.1109/ROBOT.1995.525693
Kalman R. E., 1964, Transactions of the ASME. Series D, Journal of Basic
Engineering, V86, P51
Kaneko K, 2015, IEEE-RAS INT C HUMAN, P132, DOI 10.1109/HUMANOIDS.2015.7363526
Kapadia M, 2016, PROCEEDINGS I3D 2016: 20TH ACM SIGGRAPH SYMPOSIUM ON
INTERACTIVE 3D GRAPHICS AND GAMES, P29, DOI 10.1145/2856400.2856404
Kapandji IA, 2007, PHYSL JOINTS, V1
Kapandji I. A., 2008, PHYSL JOINTS, V3
Kapandji I. A., 2010, PHYSL JOINTS, V2
Karg M, 2013, IEEE T AFFECT COMPUT, V4, P341, DOI 10.1109/T-AFFC.2013.29
Karg M, 2010, IEEE T SYST MAN CY B, V40, P1050, DOI 10.1109/TSMCB.2010.2044040
Kato I., 1972, P INT S EXT CONTR HU, P42
Kato Y, 2015, ACMIEEE INT CONF HUM, P35, DOI 10.1145/2696454.2696463
Kazerooni H., 2008, SPRINGER HDB ROBOTIC, P773
Kemp C., 2008, SPRINGER HDB ROBOTIC, P1307
Khalil W., 2002, MODELING IDENTIFICAT
Khatib O, 2009, J PHYSIOLOGY-PARIS, V103, P211, DOI
10.1016/j.jphysparis.2009.08.004
Kirches C, 2012, J PROCESS CONTR, V22, P540, DOI 10.1016/j.jprocont.2012.01.008
Ko H, 1996, IEEE COMPUT GRAPH, V16, P50, DOI 10.1109/38.486680
Kober J, 2013, INT J ROBOT RES, V32, P1238, DOI 10.1177/0278364913495721
Koch KH, 2015, IEEE-RAS INT C HUMAN, P866, DOI 10.1109/HUMANOIDS.2015.7363463
Koenemann J, 2015, IEEE INT C INT ROBOT, P3346, DOI 10.1109/IROS.2015.7353843
Kolev S, 2015, IEEE-RAS INT C HUMAN, P1036, DOI 10.1109/HUMANOIDS.2015.7363481
Koo TKK, 2002, CLIN BIOMECH, V17, P390, DOI 10.1016/S0268-0033(02)00031-1
Koolen T, 2012, INT J ROBOT RES, V31, P1094, DOI 10.1177/0278364912452673
KOOPMAN B, 1995, J BIOMECH, V28, P1369, DOI 10.1016/0021-9290(94)00185-7
Korkel S, 2004, OPTIM METHOD SOFTW, V19, P327, DOI 10.1080/10556780410001683078
Kovar L, 2002, ACM T GRAPHIC, V21, P473
Kuindersma S, 2016, AUTON ROBOT, V40, P429, DOI 10.1007/s10514-015-9479-3
Kulic D, 2012, INT J ROBOT RES, V31, P330, DOI 10.1177/0278364911426178
Kulic D, 2009, IEEE T ROBOT, V25, P1158, DOI 10.1109/TRO.2009.2026508
Kuo AD, 2002, MOTOR CONTROL, V6, P129, DOI 10.1123/mcj.6.2.129
Kuo AD, 2005, J NEURAL ENG, V2, DOI 10.1088/1741-2560/2/3/S07
Lam AWK, 2016, HUM-COMPUT INTERACT, V31, P294, DOI 10.1080/07370024.2015.1093419
Lambrecht B. G., 2009, P IEEE INT C ROB AUT, P639
Latash ML, 2007, MOTOR CONTROL, V11, P276, DOI 10.1123/mcj.11.3.276
Lawson BE, 2013, IEEE T NEUR SYS REH, V21, P466, DOI 10.1109/TNSRE.2012.2225640
Lee MK, 2009, J BIOMECH, V42, P217, DOI 10.1016/j.jbiomech.2008.10.036
Lee SH, 2005, IEEE T ROBOT, V21, P657, DOI 10.1109/TRO.2004.842336
Leineweber DB, 2003, COMPUT CHEM ENG, V27, P157, DOI 10.1016/S0098-
1354(02)00158-8
Lengagne S, 2013, INT J ROBOT RES, V32, P1104, DOI 10.1177/0278364913478990
Levine S, 2013, P 30 INT C MACH LEAR, P1
Lin HC, 2015, IEEE INT CONF ROBOT, P2613, DOI 10.1109/ICRA.2015.7139551
Liu CK, 2005, ACM T GRAPHIC, V24, P1071, DOI 10.1145/1073204.1073314
Liu LB, 2012, ACM T GRAPHIC, V31, DOI 10.1145/2366145.2366173
Liu MQ, 2008, J BIOMECH, V41, P3243, DOI 10.1016/j.jbiomech.2008.07.031
Lo J, 1999, COMP ANIM CONF PROC, P220, DOI 10.1109/CA.1999.781215
Luo ZQ, 1996, MATH PROGRAMS EQUILI
[Anonymous], 2011, VISUAL3D BIOM RES SO
Maciejasz P, 2014, J NEUROENG REHABIL, V11, DOI 10.1186/1743-0003-11-3
Makssoud H., 2004, P IEEE INT C ROB AUT, V2, P1282
Mandery C, 2015, IEEE-RAS INT C HUMAN, P1020, DOI 10.1109/HUMANOIDS.2015.7363479
Marchal-Crespo L, 2009, J NEUROENG REHABIL, V6, DOI 10.1186/1743-0003-6-20
Marques H. G., 2010, P IEEE RAS INT C HUM, P391, DOI DOI
10.1109/ICHR.2010.5686344
Martinez-Vilialpando EC, 2009, J REHABIL RES DEV, V46, P361, DOI
10.1682/JRRD.2008.09.0131
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
McGuan S., 2001, 2001012086 SAE, DOI [10.4271/2001-01-2086, DOI 10.4271/2001-01-
2086]
Miossec S., 2006, P 2006 IEEE INT C RO, P299
Miura K., 2009, P IEEE RAS INT C HUM, P596, DOI DOI 10.1109/ICHR.2009.5379535
Mizuuchi I, 2005, IEEE-RAS INT C HUMAN, P339
Mizuuchi I, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2527, DOI 10.1109/IRDS.2002.1041649
Mizuuchi I, 2006, IEEE INT CONF ROBOT, P82, DOI 10.1109/ROBOT.2006.1641165
Mizuuchi I, 2007, IEEE-RAS INT C HUMAN, P294, DOI 10.1109/ICHR.2007.4813883
Moeslund TB, 2006, COMPUT VIS IMAGE UND, V104, P90, DOI
10.1016/j.cviu.2006.08.002
Mombaur K., 2010, AUTON ROBOT, V28, P369
Mombaur K., 2012, MODELING SIMULATION
Mombaur K, 2009, ROBOTICA, V27, P321, DOI 10.1017/S0263574708004724
Mombaur KD, 2005, ZAMM-Z ANGEW MATH ME, V85, P499, DOI 10.1002/zamm.200310190
Mombaur KD, 2001, IEEE INT CONF ROBOT, P4128, DOI 10.1109/ROBOT.2001.933263
Morimoto J, 2006, IEEE INT CONF ROBOT, P1579, DOI 10.1109/ROBOT.2006.1641932
Morita T, 1997, IEEE INT CONF ROBOT, P462, DOI 10.1109/ROBOT.1997.620080
Muico U, 2009, ACM T GRAPHIC, V28, DOI 10.1145/1531326.1531387
MURRAY WM, 1995, J BIOMECH, V28, P513, DOI 10.1016/0021-9290(94)00114-J
Nakamura Y., 1990, ADV ROBOTICS REDUNDA
Nakamura Y., 2003, P INT S AD MOT AN MA, pSaP
Nakano Y., 2012, P 2012 IEEE RAS INT, P1
Nakanishi Y, 2013, INT J ADV ROBOT SYST, V10, DOI 10.5772/55443
Nakaoka S, 2007, INT J ROBOT RES, V26, P829, DOI 10.1177/0278364907079430
Nelson G., 2012, J ROBOTICS SOC JAPAN, V30, P372
Neptune RR, 2009, J BIOMECH, V42, P1282, DOI 10.1016/j.jbiomech.2009.03.009
Ng AY, 2000, P 17 INT C MACH LEAR, P663
Niiyama R, 2010, P IEEE RAS INT C HUM, P498
Niiyama R, 2007, IEEE INT CONF ROBOT, P2546, DOI 10.1109/ROBOT.2007.363848
OATIS CA, 1993, PHYS THER, V73, P740, DOI 10.1093/ptj/73.11.740
Ofli F, 2014, J VIS COMMUN IMAGE R, V25, P24, DOI 10.1016/j.jvcir.2013.04.007
Ogawa K, 2011, P 2011 IEEE RSJ INT, P4838, DOI DOI 10.1109/IROS.2011.6095091
Ogawa Y., 2014, P IEEE RAS INT C HUM, P457
Oikonomidis I., 2011, P BRIT MACH VIS C, P1011
Okamoto T, 2014, IEEE T ROBOT, V30, P771, DOI 10.1109/TRO.2014.2300212
Or J, 2009, INT J SOC ROBOT, V1, P367, DOI 10.1007/s12369-009-0034-2
Ott C., 2008, P IEEE RAS INT C HUM, P399
PANDY MG, 1990, J BIOMECH, V23, P1185, DOI 10.1016/0021-9290(90)90376-E
Parietti F, 2015, IEEE INT CONF ROBOT, P5010, DOI 10.1109/ICRA.2015.7139896
Park C, 2016, PROCEEDINGS I3D 2016: 20TH ACM SIGGRAPH SYMPOSIUM ON INTERACTIVE
3D GRAPHICS AND GAMES, P39, DOI 10.1145/2856400.2856405
PARK FC, 1995, INT J ROBOT RES, V14, P609, DOI 10.1177/027836499501400606
Parmiggiani A., 2012, P IEEE RAS INT C HUM, P481
Peters J, 2008, NEURAL NETWORKS, V21, P682, DOI 10.1016/j.neunet.2008.02.003
Pfeifer S, 2015, IEEE-ASME T MECH, V20, P1384, DOI 10.1109/TMECH.2014.2337514
Pfeifer S, 2012, IEEE T BIO-MED ENG, V59, P2604, DOI 10.1109/TBME.2012.2207895
Pham Q., 2012, P IEEE RAS INT C HUM, P165
Pontryagin LS, 1962, MATH THEORY OPTIMAL
Poppe R, 2007, COMPUT VIS IMAGE UND, V108, P4, DOI 10.1016/j.cviu.2006.10.016
Pratt G, 1995, P IEEE RSJ INT C INT, V1, P399, DOI DOI 10.1109/IROS.1995.525827
Pratt G, 2013, IEEE ROBOT AUTOM MAG, V20, P10, DOI 10.1109/MRA.2013.2255424
Radkhah K, 2011, INT J HUM ROBOT, V8, P439, DOI 10.1142/S0219843611002587
Ratliff N, 2015, IEEE INT CONF ROBOT, P4202, DOI 10.1109/ICRA.2015.7139778
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Roetenberg D., 2009, XSENS MVN FULL 6DOF
Roether CL, 2009, J VISION, V9, DOI 10.1167/9.6.15
Roussel L, 1998, IEEE INT CONF ROBOT, P2036, DOI 10.1109/ROBOT.1998.680615
Ruiz A. L. C., 2015, P 8 ACM SIGGRAPH C M, P65
Safonova A, 2004, ACM T GRAPHIC, V23, P514, DOI 10.1145/1015706.1015754
Sakaguchi S, 2012, P IEEE RAS-EMBS INT, P1644, DOI 10.1109/BioRob.2012.6290727
Sartori M, 2016, IEEE T BIO-MED ENG, V63, P879, DOI 10.1109/TBME.2016.2538296
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
Schaal S, 2010, IEEE ROBOT AUTOM MAG, V17, P20, DOI 10.1109/MRA.2010.936957
Schaarschmidt M, 2012, HUM MOVEMENT SCI, V31, P907, DOI
10.1016/j.humov.2011.09.004
Schultz G, 2010, IEEE-ASME T MECH, V15, P783, DOI 10.1109/TMECH.2009.2035112
Sentis L, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P453, DOI 10.1109/IROS.2009.5356084
Shin JH, 2014, J NEUROENG REHABIL, V11, DOI 10.1186/1743-0003-11-32
Siciliano B., 1991, ROBOT FORCE CONTROL
Sok KW, 2007, ACM T GRAPHIC, V26, DOI 10.1145/1239451.1239558
Steinbach M. C., 1997, SC9712 ZIB
Stroeve S, 1996, BIOL CYBERN, V75, P73
Sugihara T, 2011, IEEE T ROBOT, V27, P984, DOI 10.1109/TRO.2011.2148230
Suleiman W, 2007, IEEE-RAS INT C HUMAN, P180, DOI 10.1109/ICHR.2007.4813866
TAGA G, 1991, BIOL CYBERN, V65, P147, DOI 10.1007/BF00198086
Tang R., 2015, P 33 ANN ACM C HUM F, P4123
Thelen DG, 2003, J BIOMECH, V36, P321, DOI 10.1016/S0021-9290(02)00432-3
Todorov E., 2009, P ADV NEUR INF PROC, P1856
Tresch MC, 2009, CURR OPIN NEUROBIOL, V19, P601, DOI 10.1016/j.conb.2009.09.002
Tsagarakis N. G., 2013, P IEEE INT C ROB AUT, P665
Tsianos GA, 2014, J NEURAL ENG, V11, DOI 10.1088/1741-2560/11/5/056006
Uemura M, 2007, IEEE INT CONF ROBOT, P1437, DOI 10.1109/ROBOT.2007.363186
Uzor S., 2013, P SIGCHI C HUM FACT, P1233
Venture G., 2009, P IEEE INT C ENG MED, P5246
Venture G., 2009, P IEEE RAS INT C ROB, P1226
Venture G, 2014, INT J SOC ROBOT, V6, P621, DOI 10.1007/s12369-014-0243-1
Venture G, 2009, INT J ROBOT RES, V28, P1322, DOI 10.1177/0278364909103786
Verrelst B, 2005, AUTON ROBOT, V18, P201, DOI 10.1007/s10514-005-0726-x
von Stryk O., 1994, FORTSCHRITT BERICHTE
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Wakimoto S, 2011, IEEE ASME INT C ADV, P457, DOI 10.1109/AIM.2011.6027056
Wang JM, 2012, ACM T GRAPHIC, V31, DOI 10.1145/2185520.2185521
Wang ZK, 2013, INT J ROBOT RES, V32, P841, DOI 10.1177/0278364913478447
Webber B.L., 1993, SIMULATING HUMANS CO, P68
Wieber P.-B., 2015, HDB ROBOTICS
Winter DA, 2009, BIOMECHANICS MOTOR C
WINTERS JM, 1987, BIOL CYBERN, V55, P403, DOI 10.1007/BF00318375
Wisse M, 2004, ROBOTICA, V22, P681, DOI 10.1017/S0263574704000475
Wolf S, 2008, IEEE INT CONF ROBOT, P1741, DOI 10.1109/ROBOT.2008.4543452
Wolpert DM, 2011, NAT REV NEUROSCI, V12, P739, DOI 10.1038/nrn3112
Wu G, 2005, J BIOMECH, V38, P981, DOI 10.1016/j.jbiomech.2004.05.042
Wu G, 2002, J BIOMECH, V35, P543, DOI 10.1016/S0021-9290(01)00222-6
Wu J, 2010, ROBOT CIM-INT MANUF, V26, P414, DOI 10.1016/j.rcim.2010.03.013
Yabuki T., 2015, P IEEE RSJ INT C INT, P4251
Yamane K, 2004, ACM T GRAPHIC, V23, P532, DOI 10.1145/1015706.1015756
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
Yamane K., 2011, P IEEE RAS INT C HUM, P269
Yamane K, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P2510, DOI 10.1109/IROS.2009.5354750
Ye JJ, 2005, J MATH ANAL APPL, V307, P350, DOI 10.1016/j.jmaa.2004.10.032
Yin KK, 2007, ACM T GRAPHIC, V26, DOI 10.1145/1239451.1239556
Zajac FE, 2003, GAIT POSTURE, V17, P1, DOI 10.1016/S0966-6362(02)00069-3
ZAJAC FE, 1989, CRIT REV BIOMED ENG, V17, P359
Zatsiorsky V.M., 1990, CONT PROBLEMS BIOMEC, P272
Zhang YJ, 2014, IEEE INT CONF ROBOT, P2086, DOI 10.1109/ICRA.2014.6907139
Zheng Y., 2013, P IEEE RAS INT C HUM, P34
Zordan V., 2002, P 2002 ACM SIGGRAPH, P89, DOI DOI 10.1145/545261.545276
NR 282
TC 1
Z9 1
U1 2
U2 16
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2016
VL 32
IS 4
BP 776
EP 795
DI 10.1109/TRO.2016.2587744
PG 20
WC Robotics
SC Robotics
GA DV2MO
UT WOS:000382754900002
DA 2018-01-22
ER

PT J
AU Bonnet, V
Fraisse, P
Crosnier, A
Gautier, M
Gonzalez, A
Venture, G
AF Bonnet, Vincent
Fraisse, Philippe
Crosnier, Andre
Gautier, Maxime
Gonzalez, Alejandro
Venture, Gentiane
TI Optimal Exciting Dance for Identifying Inertial Parameters of an
Anthropomorphic Structure
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Dynamics identification; exciting motion; human; humanoid robot
ID EXCITATION TRAJECTORIES; ROBOT EXCITATION; IDENTIFICATION; OPTIMIZATION;
DYNAMICS
AB Knowledge of the mass and inertial parameters of a humanoid robot or a human
being is crucial for the development of model-based control, as well as for
monitoring the rehabilitation process. These parameters are also important for
obtaining realistic simulations in the field of motion planning and human motor
control. For robots, they are often provided by computer-aided design data, while
averaged anthropometric table values are often used for human subjects. The
unit/subject-specific inertial parameters can be identified by using the external
wrench caused by the ground reaction. However, the identification accuracy
intrinsically depends on the excitation properties of the recorded motion. In this
paper, a new method for obtaining optimal excitation motions is proposed. This
method is based on the identification model of legged systems and on optimization
processes to generate excitation motions while handling mechanical constraints. A
pragmatic decomposition of this problem, the use of a new excitation criterion, and
a quadratic program to identify inertial parameters are proposed. The method has
been experimentally validated on an HOAP-3 humanoid robot and with one human
subject.
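As an illustrative aside (not part of this record): the identification step described above is linear in the inertial parameters, so it can be sketched as a bounded least-squares problem over a stacked regressor, with the conditioning of that regressor as a generic excitation measure. The regressor, wrench data, bounds, and criterion below are placeholders; the paper's own excitation criterion and QP formulation may differ.

import numpy as np
from scipy.optimize import lsq_linear

# Placeholder stacked regressor Y and measured ground-reaction wrench w such
# that w = Y(q, dq, ddq) @ phi for the base inertial parameters phi.
rng = np.random.default_rng(0)
n_samples, n_params = 600, 40
Y = rng.standard_normal((n_samples, n_params))
phi_true = rng.uniform(0.1, 2.0, n_params)
w = Y @ phi_true + 0.01 * rng.standard_normal(n_samples)

# A generic excitation measure: the condition number of the regressor
# (the paper proposes its own criterion; this is only a common baseline).
print("condition number of Y:", np.linalg.cond(Y))

# Bounded least squares (a simple quadratic program) keeping parameters in a
# physically plausible range, e.g. non-negative masses and inertias.
res = lsq_linear(Y, w, bounds=(0.0, np.inf))
print("max parameter error:", np.abs(res.x - phi_true).max())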
C1 [Bonnet, Vincent; Venture, Gentiane] Tokyo Univ Agr & Technol, Dept Mech Syst
Engn, Tokyo 1830057, Japan.
[Fraisse, Philippe; Crosnier, Andre] Univ Montpellier, LIRMM UMR CNRS 5506, F-
34090 Montpellier, France.
[Gautier, Maxime] Univ Nantes, F-44300 Nantes, France.
[Gonzalez, Alejandro] Univ Montpellier, EuroMov EA 2991, F-34090 Montpellier,
France.
RP Bonnet, V (reprint author), Tokyo Univ Agr & Technol, Dept Mech Syst Engn, Tokyo
1830057, Japan.
EM bonnet.vincent@gmail.com; fraisse@lirmm.fr; crosnier@lirmm.fr;
Maxime.Gautier@irccyn.ec-nantes.fr; gonzalezde@lirmm.fr;
venture@cc.tuat.ac.jp
RI Venture, Gentiane/E-7060-2013
OI Venture, Gentiane/0000-0001-7767-4765
FU Japanese Society for Promotion of Science [P13559, K102604768]
FX This paper was recommended for publication by Associate Editor R. D.
Gregg and Guest Editor D. Kulic upon evaluation of the reviewers'
comments. This work was supported by the Japanese Society for Promotion
of Science (P13559 and Grant in Aid K102604768) and the authors'
universities.
CR Ayusawa K., 2011, P 2011 IEEE INT C RO, P6282
Ayusawa K, 2014, INT J ROBOT RES, V33, P446, DOI 10.1177/0278364913495932
Baelemans J., 2013, P IEEE INT S SAF SEC, P1
Benoussaad M, 2013, MED BIOL ENG COMPUT, V51, P617, DOI 10.1007/s11517-013-1032-
y
Bonnet V, 2015, IEEE T NEUR SYS REH, V23, P628, DOI 10.1109/TNSRE.2015.2405087
DUBOWSKY S, 1993, IEEE T ROBOTIC AUTOM, V9, P531, DOI 10.1109/70.258046
Dumas R, 2007, J BIOMECH, V40, P543, DOI 10.1016/j.jbiomech.2006.02.013
Fujimoto Y, 1998, IEEE INT CONF ROBOT, P2030, DOI 10.1109/ROBOT.1998.680613
GAUTIER M, 1992, INT J ROBOT RES, V11, P362, DOI 10.1177/027836499201100408
GAUTIER M, 1991, J ROBOTIC SYST, V8, P485, DOI 10.1002/rob.4620080405
Gautier M., 1998, P IEEE INT C ROB AUT, P2264
Gautier M, 2013, IEEE INT C INT ROBOT, P5815, DOI 10.1109/IROS.2013.6697198
Jin JF, 2015, ROBOT CIM-INT MANUF, V31, P21, DOI 10.1016/j.rcim.2014.06.004
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Khalil W, 1997, ROBOTICA, V15, P153, DOI 10.1017/S0263574797000180
Khalil W., 2002, MODELING IDENTIFICAT
Lu TW, 1999, J BIOMECH, V32, P129, DOI 10.1016/S0021-9290(98)00158-4
Mayr J., 2014, P IEEE RAS INT C HUM, P99
Mistry M., 2009, P IEEE RAS INT C HUM, P492
Park KJ, 2006, ROBOTICA, V24, P625, DOI 10.1017/S0263574706002712
Popovic MB, 2005, INT J ROBOT RES, V24, P1013, DOI 10.1177/0278364905058363
Presse C., 1993, Proceedings IEEE International Conference on Robotics and
Automation (Cat. No.93CH3247-4), P907, DOI 10.1109/ROBOT.1993.292259
Rackl W, 2012, IEEE INT CONF ROBOT, P2042, DOI 10.1109/ICRA.2012.6225279
Robert T, 2013, J BIOMECH, V46, P2220, DOI 10.1016/j.jbiomech.2013.06.037
Rohmer E, 2013, IEEE INT C INT ROBOT, P1321, DOI 10.1109/IROS.2013.6696520
Swevers J, 1997, IEEE T ROBOTIC AUTOM, V13, P730, DOI 10.1109/70.631234
Venture G., 2009, P IEEE RSJ INT C INT, P1126
Venture G., 2008, P IEEE EMBS INT C EN, P4575
Xiang YJ, 2011, J BIOMECH, V44, P683, DOI 10.1016/j.jbiomech.2010.10.045
Yamane K., 2011, P IEEE RAS INT C HUM, P269
Plummer John C., 2007, INFORMS J COMPUT, V19, P328
NR 31
TC 2
Z9 2
U1 0
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2016
VL 32
IS 4
BP 823
EP 836
DI 10.1109/TRO.2016.2583062
PG 14
WC Robotics
SC Robotics
GA DV2MO
UT WOS:000382754900005
DA 2018-01-22
ER

PT J
AU Brandao, M
Hashimoto, K
Santos-Victor, J
Takanishi, A
AF Brandao, Martim
Hashimoto, Kenji
Santos-Victor, Jose
Takanishi, Atsuo
TI Footstep Planning for Slippery and Slanted Terrain Using Human-Inspired
Models
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Biologically inspired robots; footstep planning; human gait; humanoid
robots; motion planning; path planning
ID MECHANICAL WORK; LEVEL WALKING; SURFACES; GAIT; OPTIMIZATION;
LOCOMOTION; PATTERNS; INFANTS; ROBOT
AB Energy efficiency and robustness of locomotion to different terrain conditions
are important problems for humanoid robots deployed in the real world. In this
paper, we propose a footstep-planning algorithm for humanoids that is applicable to
flat, slanted, and slippery terrain, which uses simple principles and
representations gathered from human gait literature. The planner optimizes a
center-of-mass (COM) mechanical work model subject to motion feasibility and ground
friction constraints using a hybrid A* search and optimization approach. Footstep
placements and orientations are discrete states searched with an A* algorithm, while
other relevant parameters are computed through continuous optimization on state
transitions. These parameters are also inspired by human gait literature and
include footstep timing (double-support and swing time) and parameterized COM
motion using knee flexion angle keypoints. The planner relies on work, the required
coefficient of friction (RCOF), and feasibility models that we estimate in a
physics simulation. We show through simulation experiments that the proposed
planner leads to both low electrical energy consumption and human-like motion on a
variety of scenarios. Using the planner, the robot automatically chooses between
avoiding and (slowly) traversing slippery patches depending on their size and
friction, and it selects energy-optimal stairs and climbing angles on slopes. The
obtained motion is also consistent with observations found in human gait
literature, such as human-like changes in RCOF, step length and double-support time
on slippery terrain, and human-like curved walking on steep slopes. Finally, we
compare COM work minimization with other choices of the objective function.
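A hypothetical sketch (not drawn from the record) of the discrete half of such a planner: an A* search over footstep placements whose edge cost comes from a work model and whose successors are pruned by a friction constraint. The terrain maps, step set, cost model, and RCOF threshold are placeholders, and the continuous optimization of step timing and knee-flexion keypoints is omitted.

import heapq
import math

FRICTION = {}   # terrain friction coefficient per grid cell (placeholder data)
SLOPE = {}      # terrain slope in radians per grid cell (placeholder data)
STEPS = [(1, 0), (1, 1), (1, -1), (0, 1), (0, -1)]   # discrete footstep offsets

def step_cost(a, b):
    # Stand-in for a COM mechanical-work model: grows with step length
    # and with the slope of the landing cell.
    return math.dist(a, b) * (1.0 + 2.0 * SLOPE.get(b, 0.0))

def feasible(cell, rcof=0.3):
    # Crude friction constraint: reject cells whose friction coefficient is
    # below the required coefficient of friction (RCOF) of the step.
    return FRICTION.get(cell, 1.0) >= rcof

def plan(start, goal):
    # A* over footstep placements; Euclidean distance to the goal is an
    # admissible heuristic because every step costs at least its length.
    frontier = [(math.dist(start, goal), 0.0, start, [start])]
    visited = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return g, path
        if cell in visited:
            continue
        visited.add(cell)
        for dx, dy in STEPS:
            nxt = (cell[0] + dx, cell[1] + dy)
            if feasible(nxt):
                ng = g + step_cost(cell, nxt)
                heapq.heappush(frontier, (ng + math.dist(nxt, goal), ng, nxt, path + [nxt]))
    return None

print(plan((0, 0), (4, 2)))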
C1 [Brandao, Martim] Waseda Univ, Grad Sch Adv Sci & Engn, Tokyo 1620044, Japan.
[Hashimoto, Kenji] Waseda Univ, Waseda Inst Adv Study, Tokyo 1620044, Japan.
[Hashimoto, Kenji] Waseda Univ, Humanoid Robot Inst, Tokyo 1620044, Japan.
[Santos-Victor, Jose] Univ Lisbon, Inst Super Tecn, Inst Syst & Robot, P-1049001
Lisbon, Portugal.
[Takanishi, Atsuo] Waseda Univ, Modern Mech Engn, Tokyo 1698555, Japan.
[Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Tokyo 1698555, Japan.
RP Brandao, M (reprint author), Waseda Univ, Grad Sch Adv Sci & Engn, Tokyo
1620044, Japan.
EM mbrandao@fuji.waseda.jp; hashimoto@aoni.waseda.jp;
jasv@isr.tecnico.ulisboa.pt; takanisi@waseda.jp
RI Hashimoto, Kenji/M-6442-2017; Santos-VIctor, Jose/K-2093-2012
OI Hashimoto, Kenji/0000-0003-2300-6766; Santos-VIctor,
Jose/0000-0002-9036-1728; Brandao, Martim/0000-0002-2003-0675
FU Japan Society for the Promotion of Science [15J06497, 25220005]; ImPACT
TRC Program of Council for Science, Technology and Innovation, Cabinet
Office, Government of Japan; FCT [UID/EEA/50009/2013]; Project EU
Poeticon++
FX This paper was recommended for publication by Editor A. Kheddar and
Guest Editors K. Mombaur and D. Kulic upon evaluation of the reviewers'
comments. This work was supported by Japan Society for the Promotion of
Science Grants-in-Aid for Scientific Research under Grant 15J06497 and
Grant 25220005, and by the ImPACT TRC Program of Council for Science,
Technology and Innovation, Cabinet Office, Government of Japan. This
work was also supported by Project FCT [UID/EEA/50009/2013] and Project
EU Poeticon++.
CR Adolph KE, 2010, J EXP PSYCHOL HUMAN, V36, P797, DOI 10.1037/a0017450
Alexander RM, 2003, PRINCIPLES ANIMAL LO
Brandao M., 2014, P 14 IEEE RAS INT C, P303
Brandao M, 2015, IEEE-RAS INT C HUMAN, P1, DOI 10.1109/HUMANOIDS.2015.7363514
Buckley JG, 2005, GAIT POSTURE, V21, P65, DOI 10.1016/j.gaitpost.2003.12.001
Cappellini G, 2010, J NEUROPHYSIOL, V103, P746, DOI 10.1152/jn.00499.2009
Cham R, 2002, GAIT POSTURE, V15, P159, DOI 10.1016/S0966-6362(01)00150-3
CHAMBERS AJ, 2003, OCCUPATIONAL ERGONOM, V3, P225
Chestnutt J, 2005, IEEE INT CONF ROBOT, P629
Collins SH, 2005, IEEE INT CONF ROBOT, P1983
Dai H., 2014, P IEEE RAS INT C HUM, P295
Damas B, 2013, NEURAL COMPUT, V25, P3044, DOI 10.1162/NECO_a_00510
Deits R., 2014, P 14 IEEE RAS INT C, P279
E C Corporation, 1977, DC MOT SPEED CONTR S
Feng S., 2013, P IEEE RAS INT C HUM, P21
Fong DTP, 2005, MED SCI SPORT EXER, V37, P447, DOI
10.1249/01.MSS.0000155390.41572.DE
Freese M, 2010, LECT NOTES ARTIF INT, V6472, P51, DOI 10.1007/978-3-642-17319-
6_8
Garimort J., 2011, P IEEE INT C ROB AUT, P3982
GIBSON EJ, 1987, J EXP PSYCHOL HUMAN, V13, P533, DOI 10.1037/0096-1523.13.4.533
Hashimoto K, 2015, MECH MACH SCI, V29, P417, DOI 10.1007/978-3-319-14705-5_14
Hauser K, 2008, SPRINGER TRAC ADV RO, V47, P507
Hauser K, 2008, INT J ROBOT RES, V27, P1325, DOI 10.1177/0278364908098447
Heiden TL, 2006, GAIT POSTURE, V24, P237, DOI 10.1016/j.gaitpost.2005.09.004
Hornung A., 2012, P 12 IEEE RAS INT C, P674
Huang WW, 2013, IEEE INT CONF ROBOT, P3124, DOI 10.1109/ICRA.2013.6631011
Johnson S. G., 2014, NLOPT NONLINEAR OPTI
JONES DR, 1993, J OPTIMIZ THEORY APP, V79, P157, DOI 10.1007/BF00941892
Kajita S., 2003, P IEEE INT C ROB AUT, V2, P1620
Kajita S., 2004, P 2004 IEEE RSJ INT, V4, P3546
Kim J., 2013, P 13 IEEE RAS INT C, P300
KRAFT D, 1994, ACM T MATH SOFTWARE, V20, P262, DOI 10.1145/192115.192124
Kuo AD, 2001, J BIOMECH ENG-T ASME, V123, P264, DOI 10.1115/1.1372322
Likhachev M., 2010, SEARCH BASED PLANNIN
Likhachev M., 2003, P 2003 C ADV NEUR IN, P767
Llewellyn MGA, 1992, P 5 INT C ENV ERG MA, P156
Matthis J. S., 2013, P ROY SOC LOND B BIO, V280, P2013
Menant JC, 2009, GAIT POSTURE, V29, P392, DOI 10.1016/j.gaitpost.2008.10.057
Menz HB, 2004, ARCH PHYS MED REHAB, V85, P245, DOI 10.1016/j.apmr.2003.06.015
Menz HB, 2003, GAIT POSTURE, V18, P35, DOI 10.1016/S0966-6362(02)00159-5
MINETTI AE, 1995, J APPL PHYSIOL, V79, P1698
Minetti AE, 1997, J THEOR BIOL, V186, P467, DOI 10.1006/jtbi.1997.0407
Mombaur K, 2010, AUTON ROBOT, V28, P369, DOI 10.1007/s10514-009-9170-7
Nikolic M., 2014, MECH MACHINE SCI, V22, P189
Park JH, 2001, IEEE INT CONF ROBOT, P4134, DOI 10.1109/ROBOT.2001.933264
Perrin N, 2012, IEEE T ROBOT, V28, P427, DOI 10.1109/TRO.2011.2172152
Perry J, 1992, GAIT ANAL NORMAL PAT
Proffitt DR, 1995, PSYCHON B REV, V2, P409, DOI 10.3758/BF03210980
Rusu R. B., 2011, IEEE INT C ROB AUT I
Sasaki K, 2009, J EXP BIOL, V212, P738, DOI 10.1242/jeb.023267
Umberger BR, 2007, J EXP BIOL, V210, P3255, DOI 10.1242/jeb.000950
Wang J. M., 2010, ACM T GRAPHIC, V29
WINTER DA, 1979, J APPL PHYSIOL, V46, P79
Winter DA, 1991, BIOMECHANICS MOTOR C
ZARRUGH MY, 1974, EUR J APPL PHYSIOL O, V33, P293, DOI 10.1007/BF00430237
NR 54
TC 2
Z9 2
U1 1
U2 10
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2016
VL 32
IS 4
BP 868
EP 879
DI 10.1109/TRO.2016.2581219
PG 12
WC Robotics
SC Robotics
GA DV2MO
UT WOS:000382754900008
DA 2018-01-22
ER

PT J
AU Saputra, AA
Botzheim, J
Sulistijono, IA
Kubota, N
AF Saputra, Azhar Aulia
Botzheim, Janos
Sulistijono, Indra Adji
Kubota, Naoyuki
TI Biologically Inspired Control System for 3-D Locomotion of a Humanoid
Biped Robot
SO IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS
LA English
DT Article
DE Biped robot locomotion; multiobjective evolutionary algorithm; neural
oscillator; recurrent neural network (RNN)
ID CENTRAL PATTERN GENERATOR; RECURRENT NEURAL-NETWORK; NSGA-II; WALKING;
ENVIRONMENT; MECHANISMS; NEURONS; MOTION; CPG
AB This paper proposes a control system for the 3-D locomotion of a humanoid biped
robot based on a biological approach. The muscular system of the human body and the
neural oscillator that generates locomotion signals are adapted in this paper. We
extend the neuro-locomotion system to model a multiple-neuron system, in which
motoric neurons represent the muscular system and sensoric neurons represent the
sensory system inside the human body. The output signals from coupled neurons,
which represent the joint angle level, are controlled by gain neurons that
represent the energy burst driving the motor at each joint. The direction and
length of a step in robot locomotion can be adjusted by command neurons. In order
to form the locomotion pattern, we apply multiobjective evolutionary computation to
optimize the synapse weights between the motoric neurons. We use a recurrent neural
network (RNN) for the stabilization system required to support locomotion. The RNN
generates a dynamic synapse weight value between the sensoric and motoric neurons.
The effectiveness of our system is demonstrated in an Open Dynamics Engine computer
simulation and on a real robot with 12 degrees of freedom (DoFs) in the legs and
four DoFs in the hands.
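As an illustrative aside (not part of this record): the rhythmic signal generation the abstract refers to is commonly built from coupled Matsuoka-type neural oscillators (the record cites MATSUOKA 1985 and 1987). The sketch below integrates one mutually inhibiting extensor-flexor pair with generic parameter values; the paper's full network of motoric, sensoric, gain, and command neurons and its evolutionary weight optimization are not reproduced.

import numpy as np

def matsuoka_step(u, v, dt=0.001, tau=0.05, tau_a=0.6, beta=2.5, w=2.0, s=1.0):
    # One Euler step of a two-neuron Matsuoka oscillator: u is the membrane
    # state, v the adaptation state, s a tonic input, w the mutual inhibition.
    y = np.maximum(u, 0.0)
    du = (-u - beta * v - w * y[::-1] + s) / tau
    dv = (-v + y) / tau_a
    return u + dt * du, v + dt * dv

u, v = np.array([0.1, 0.0]), np.zeros(2)
output = []
for _ in range(5000):
    u, v = matsuoka_step(u, v)
    # Extensor minus flexor firing rate, usable as a rhythmic joint command.
    output.append(float(np.maximum(u, 0.0)[0] - np.maximum(u, 0.0)[1]))
print("output range:", min(output), max(output))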
C1 [Saputra, Azhar Aulia; Botzheim, Janos; Kubota, Naoyuki] Tokyo Metropolitan
Univ, Grad Sch Syst Design, Tokyo 1910065, Japan.
[Sulistijono, Indra Adji] Politeknik Elekt Negeri Surabaya, Grad Sch Engn
Technol, Surabaya 60111, Indonesia.
RP Saputra, AA (reprint author), Tokyo Metropolitan Univ, Grad Sch Syst Design,
Tokyo 1910065, Japan.
EM azhar@tmu.ac.jp; botzheim@tmu.ac.jp; indra@pens.ac.id; kubota@tmu.ac.jp
OI Sulistijono, Indra Adji/0000-0002-4230-901X
FU Ministry of Education, Culture, Sports, Science and Technology Regional
Innovation Strategy Support Program: Greater Tokyo Smart Quality of Life
Technology Development Region
FX This work was supported by the Ministry of Education, Culture, Sports,
Science and Technology Regional Innovation Strategy Support Program:
Greater Tokyo Smart Quality of Life Technology Development Region.
CR Baydin A G, 2012, PALADYN J BEHAV ROBO, V3, P45
Chernova S., 2004, INTELLIGENT ROBOTS S, V3, P2562
Deb K, 2002, IEEE T EVOLUT COMPUT, V6, P182, DOI 10.1109/4235.996017
Tran DT, 2014, ROBOT AUTON SYST, V62, P1497, DOI 10.1016/j.robot.2014.05.011
Duysens J, 1998, GAIT POSTURE, V7, P131, DOI 10.1016/S0966-6362(97)00042-8
Fukuda T, 1997, IEEE INT CONF ROBOT, P217, DOI 10.1109/ROBOT.1997.620041
Fukuda T, 1997, 1997 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, VOLS 1-4,
P1710, DOI 10.1109/ICNN.1997.614153
Hong YD, 2014, IEEE T IND ELECTRON, V61, P2346, DOI 10.1109/TIE.2013.2267691
Ijspeert AJ, 2006, ADAPTIVE MOTION OF ANIMALS AND MACHINES, P177, DOI 10.1007/4-
431-31381-8_16
Ijspeert AJ, 2005, NEUROINFORMATICS, V3, P171, DOI 10.1385/NI:3:3:171
Ijspeert AJ, 2008, NEURAL NETWORKS, V21, P642, DOI 10.1016/j.neunet.2008.03.014
Inada H., 2003, INT C INT ROB SYST, V3, P2179
Ishiguro A, 2003, ADAPT BEHAV, V11, P7, DOI 10.1177/10597123030111001
Jensen MT, 2003, IEEE T EVOLUT COMPUT, V7, P503, DOI 10.1109/TEVC.2003.817234
Jeong S, 2014, NEUROCOMPUTING, V129, P67, DOI 10.1016/j.neucom.2013.03.050
Kim JY, 2006, ADV ROBOTICS, V20, P707, DOI 10.1163/156855306777361622
Lakany H.M, 1999, P IEE C MOT AN TRACK, p5/1
Li H, 2009, IEEE T EVOLUT COMPUT, V13, P284, DOI 10.1109/TEVC.2008.925798
Lin C.-C., IEEE SYST J IN PRESS
Liu CJ, 2013, IEEE T SYST MAN CY-S, V43, P1206, DOI 10.1109/TSMC.2012.2235426
Liu CJ, 2011, IEEE T SYST MAN CY B, V41, P867, DOI 10.1109/TSMCB.2010.2097589
Matos V, 2014, ROBOT AUTON SYST, V62, P1669, DOI 10.1016/j.robot.2014.08.010
MATSUOKA K, 1985, BIOL CYBERN, V52, P367, DOI 10.1007/BF00449593
MATSUOKA K, 1987, BIOL CYBERN, V56, P345, DOI 10.1007/BF00319514
Miyakoshi S., 1998, P IEEE RSJ INT C INT, V1, P84
Moravej Z, 2015, ELECTR POW SYST RES, V119, P228, DOI 10.1016/j.epsr.2014.09.010
Nassour J, 2014, BIOL CYBERN, V108, P291, DOI 10.1007/s00422-014-0592-8
Nassour J, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P2616, DOI 10.1109/IROS.2009.5354797
Park CS, 2014, IEEE-ASME T MECH, V19, P1374, DOI 10.1109/TMECH.2013.2281193
Park C.-S., 2010, P IEEE RSJ INT C INT, P160
Reil T, 2002, IEEE T EVOLUT COMPUT, V6, P159, DOI 10.1109/4235.996015
Ren GJ, 2015, INFORM SCIENCES, V294, P666, DOI 10.1016/j.ins.2014.05.001
Rowat PF, 1997, J COMPUT NEUROSCI, V4, P103, DOI 10.1023/A:1008869411135
Roy A, 2008, IEEE T SYST MAN CY A, V38, P1434, DOI 10.1109/TSMCA.2008.2003484
Sakagami Y., 2002, P IEEE RSJ INT C INT, V3, p2478 , DOI DOI
10.1109/IRDS.2002.1041641
SAPUTRA AA, 2014, P IEEE S ROB INT INF, P16
Saputra A. A., 2014, P IND S ROB SOCC COM, P17
Secco EL, 2002, IEEE T SYST MAN CY A, V32, P437, DOI 10.1109/TSMCA.2002.803101
TAGA G, 1991, BIOL CYBERN, V65, P147, DOI 10.1007/BF00198086
THOMPSON RJ, 1970, COMMUN ACM, V13, P739, DOI 10.1145/362814.362823
Wagoner A, 2015, PROCEEDINGS OF THE 2015 6TH INTERNATIONAL CONFERENCE ON
AUTOMATION, ROBOTICS AND APPLICATIONS (ICARA), P411, DOI 10.1109/ICARA.2015.7081183
Yang Z., 2004, P IEEE INT C ROB BIO, P239
Yorita A, 2011, IEEE T AUTON MENT DE, V3, P64, DOI 10.1109/TAMD.2011.2105868
Yoshida Y, 2014, IEEE T SYST MAN CY-S, V44, P692, DOI 10.1109/TSMC.2013.2272612
[Anonymous], 2015, IPHONE 5
[Anonymous], 2015, ARDUINO DUE
[Anonymous], 2015, ROBOTIS DYNAMIXEL AX
[Anonymous], 2015, ROBOTIS COMPREHENSIV
NR 48
TC 6
Z9 6
U1 5
U2 20
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2168-2216
J9 IEEE T SYST MAN CY-S
JI IEEE Trans. Syst. Man Cybern. -Syst.
PD JUL
PY 2016
VL 46
IS 7
SI SI
BP 898
EP 911
DI 10.1109/TSMC.2015.2497250
PG 14
WC Automation & Control Systems; Computer Science, Cybernetics
SC Automation & Control Systems; Computer Science
GA DR2TR
UT WOS:000379757300004
DA 2018-01-22
ER

PT J
AU Yamamoto, K
AF Yamamoto, Ko
TI Control strategy switching for humanoid robots based on maximal output
admissible set
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Balance control; Biped locomotion; Constrained-system; Humanoid robot
ID WALKING PATTERN GENERATION; BIPED LOCOMOTION; GAIT; SYSTEMS;
CONSTRAINTS; STABILITY; FEEDBACK; MODEL
AB Human-like bipedal walking is a goal of humanoid robotics. It is especially
important to provide a robust falling prevention capability by imitating the human
ability to switch between control strategies in response to disturbances, e.g.,
standing balancing and stepping motion. However, the motion control of a humanoid
robot is challenging because the contact forces are constrained. This paper
proposes a novel framework for control strategy switching based on the maximal
output admissible (MOA) set, which is a set of initial states that satisfy the
constraints. This makes it possible to determine whether the robot might fall down
due to a constraint violation. The MOA set is extended to a trajectory tracking
controller with a time-variant reference and constraint. In this extension, the
motion of the vertical center of gravity is also considered, which has often been
neglected in previous studies. Utilizing the MOA set, an example of falling
prevention control is shown in which the standing balance control and trajectory
tracking control are switched to a stepping and hopping motion. Moreover, a method is presented
for applying the MOA set framework to a position-controlled humanoid robot. The
validity of the MOA set framework is verified based on simulations and experimental
results. (C) 2016 Elsevier B.V. All rights reserved.
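A minimal sketch (not taken from the record): for a time-invariant, stable closed loop, membership in a maximal output admissible set can be checked by propagating the autonomous dynamics and verifying the output constraint over a sufficiently long horizon, in the spirit of the Gilbert 1991 reference cited below. The matrices, constraint bound, and horizon are placeholders; the paper's extension to time-variant references and constraints is not shown.

import numpy as np

A_cl = np.array([[0.90, 0.05],
                 [-0.10, 0.85]])   # assumed stable closed-loop state matrix
C = np.array([[1.0, 0.2]])          # constrained output (e.g., a ZMP-like quantity)
Y_MAX = 0.1                         # admissible output bound (placeholder)

def in_moa(x0, horizon=200):
    # Finite-horizon surrogate test for the maximal output admissible set:
    # the free response from x0 must keep |C x_k| <= Y_MAX at every step.
    x = np.asarray(x0, dtype=float)
    for _ in range(horizon):
        if abs(float(C @ x)) > Y_MAX:
            return False
        x = A_cl @ x
    return True

print(in_moa([0.05, 0.0]), in_moa([0.5, 0.0]))   # expected: True False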
C1 [Yamamoto, Ko] Univ Tokyo, Bunkyo Ku, 7-3-1 Hongo, Tokyo 1138656, Japan.
RP Yamamoto, K (reprint author), Univ Tokyo, Bunkyo Ku, 7-3-1 Hongo, Tokyo 1138656,
Japan.
EM k.yamamoto@nml.t.u-tokyo.ac.jp
FU JSPS KAKENHI [25820086, 16H05874]
FX This work was supported by JSPS KAKENHI Grant Number 25820086, 16H05874.
CR Abdallah M, 2005, IEEE INT CONF ROBOT, P1996
Alcaraz-Jimenez JJ, 2013, INT J ROBOT RES, V32, P1074, DOI
10.1177/0278364913487566
Atkeson CG, 2007, IEEE-RAS INT C HUMAN, P57, DOI 10.1109/ICHR.2007.4813849
Feng S., 2013, P IEEE RAS INT C HUM, P21
GILBERT EG, 1991, IEEE T AUTOMAT CONTR, V36, P1008, DOI 10.1109/9.83532
Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
Henrion D, 2014, IEEE T AUTOMAT CONTR, V59, P297, DOI 10.1109/TAC.2013.2283095
Herdt A, 2010, P IEEE RSJ INT C INT, P190
Herdt A, 2010, ADV ROBOTICS, V24, P719, DOI 10.1163/016918610X493552
Hirata K., 1998, Proceedings of the 1998 IEEE International Conference on
Control Applications (Cat. No.98CH36104), P148, DOI 10.1109/CCA.1998.728314
Hirata K, 1997, IEEE DECIS CONTR P, P1477, DOI 10.1109/CDC.1997.657674
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Kagami S., 2000, P INT WORKSH ALG FDN
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S, 2001, IEEE INT CONF ROBOT, P2299, DOI 10.1109/ROBOT.2001.932965
Kajita S, 2001, IEEE INT CONF ROBOT, P3376, DOI 10.1109/ROBOT.2001.933139
Kajita S., 2003, P IEEE RSJ INT C INT, V2, P1644
Kaminaga H, 2010, IEEE INT CONF ROBOT, P4204, DOI 10.1109/ROBOT.2010.5509789
KATOH R, 1984, AUTOMATICA, V20, P405, DOI 10.1016/0005-1098(84)90099-2
Kogiso K, 2009, ROBOT AUTON SYST, V57, P289, DOI 10.1016/j.robot.2008.10.015
Koolen T, 2012, INT J ROBOT RES, V31, P1094, DOI 10.1177/0278364912452673
Kovar L., 2002, ACM T GRAPHIC, V21, P1
Majumdar A, 2013, IEEE INT CONF ROBOT, P4054, DOI 10.1109/ICRA.2013.6631149
Mayne DQ, 2000, AUTOMATICA, V36, P789, DOI 10.1016/S0005-1098(99)00214-9
Morimoto J, 2008, IEEE T ROBOT, V24, P185, DOI 10.1109/TRO.2008.915457
Muico U., 2009, ACM T GRAPH, V28
Nagasaka K., 2013, WORKSH IEEE IROS 201
Nishiwaki K, 2012, INT J ROBOT RES, V31, P1251, DOI 10.1177/0278364912455720
Ott C., 2010, P IEEE RAS INT C HUM, P167
Piovan G., 2014, INT J ROBOT RES
Pratt J., 2006, P IEEE RAS INT C HUM, P200, DOI DOI 10.1109/ICHR.2006.321385
Stephens B., 2009, P IEEE RAS INT C HUM, P379
Stephens B. J., 2010, P 2010 IEEE RAS INT, P52
Sugihara T, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2575
Sugihara T., 2007, ROBOTICS AUTONOMOUS, V56, P82
SUGIHARA T, 2007, P IEEE C INT ROB SYS, P444
Sugihara T., 2009, P 2009 IEEE INT C RO, P1966
Sugihara T, 2012, IEEE INT C INT ROBOT, P1892, DOI 10.1109/IROS.2012.6385699
Terada K, 2007, IEEE-RAS INT C HUMAN, P222, DOI 10.1109/ICHR.2007.4813872
Urata J., 2011, P IEEE RAS INT C HUM, P13
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Wieber P. B., 2006, P IEEE RAS INT C HUM, P137, DOI DOI 10.1109/ICHR.2006.321375
Wieber PB, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P1103, DOI
10.1109/IROS.2008.4651022
Yamamoto K., 2014, P IEEE RAS INT C HUM, P487
YAMAMOTO K, 2009, P 9 IEEE RAS INT C H, P549
Yamamoto K, 2014, IEEE INT C INT ROBOT, P2523, DOI 10.1109/IROS.2014.6942906
Yamamoto K, 2013, IEEE INT C INT ROBOT, P3643, DOI 10.1109/IROS.2013.6696876
Yamamoto K, 2010, IEEE INT CONF ROBOT, P3292, DOI 10.1109/ROBOT.2010.5509824
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
NR 49
TC 3
Z9 3
U1 2
U2 15
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JUL
PY 2016
VL 81
BP 17
EP 32
DI 10.1016/j.robot.2016.03.010
PG 16
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA DO4EX
UT WOS:000377735700002
DA 2018-01-22
ER

PT J
AU Kishi, T
Shimomura, S
Futaki, H
Yanagino, H
Yahara, M
Cosentino, S
Nozawa, T
Hashimoto, K
Takanishi, A
AF Kishi, T.
Shimomura, S.
Futaki, H.
Yanagino, H.
Yahara, M.
Cosentino, S.
Nozawa, T.
Hashimoto, K.
Takanishi, A.
TI Development of a Humorous Humanoid Robot Capable of Quick-and-Wide Arm
Motion
SO IEEE ROBOTICS AND AUTOMATION LETTERS
LA English
DT Article
DE Human-robot interaction; humanoid robots
AB This letter describes the development of a humanoid arm with quick-and-wide
motion capability for making humans laugh. Laughter is attracting research
attention because it enhances health by treating or preventing mental diseases.
However, laughter has not been used effectively in healthcare because the mechanism
of laughter is complicated and is yet to be fully understood. The development of a
robot capable of making humans laugh will clarify the mechanism how humans
experience humor from stimuli. Nonverbal funny expressions have the potential to
make humans laugh across cultural and linguistic differences. In particular, we
focused on the exaggerated arm motion widely used in slapsticks and silent comedy
films. In order to develop a humanoid robot that can perform this type of movement,
the required specification was calculated from slapstick skits performed by human
comedians. To meet the required specifications, new arms for the humanoid robot
were developed with a novel mechanism that includes lightweight joints driven by a
flexible shaft and joints with high output power driven by a twin-motor mechanism.
The results of experimental evaluation show that the quick-and-wide motion
performed by the developed hardware is effective at making humans laugh.
C1 [Kishi, T.] Waseda Univ, Grad Sch Sci & Engn, Tokyo 1620044, Japan.
[Kishi, T.] Japan Soc Promot Sci, Tokyo 1020083, Japan.
[Shimomura, S.; Futaki, H.; Yanagino, H.; Yahara, M.; Cosentino, S.] Waseda
Univ, Sch Sci & Engn, Tokyo 1620044, Japan.
[Nozawa, T.] Mejiro Univ, Fac Human Studies, Tokyo 1618539, Japan.
[Hashimoto, K.] Waseda Inst Adv Study, Tokyo, Japan.
[Hashimoto, K.; Takanishi, A.] Waseda Univ, HRI, Tokyo 1620044, Japan.
[Takanishi, A.] Waseda Univ, Dept Modern Mech Engn, Tokyo 1628480, Japan.
RP Kishi, T (reprint author), Waseda Univ, Grad Sch Sci & Engn, Tokyo 1620044,
Japan.; Kishi, T (reprint author), Japan Soc Promot Sci, Tokyo 1020083, Japan.
EM contact@takanishi.mech.waseda.ac.jp
FU Research Institute for Science and Engineering, Waseda University;
Future Robotics Organization, Waseda University; humanoid project at the
Humanoid Robotics Institute, Waseda University; JSPS KAKENHI [25220005,
26540137]; JSPS [25.915]; Mitsubishi Foundation; SolidWorks Japan K.K.;
DYDEN Corporation
FX This work was supported in part by the Research Institute for Science
and Engineering, Waseda University; in part by the Future Robotics
Organization, Waseda University, and in part by the humanoid project at
the Humanoid Robotics Institute, Waseda University, in part by the JSPS
KAKENHI (Grant 25220005 and Grant 26540137); JSPS Grant-in-Aid for JSPS
Fellows (Grant 25.915); in part by the Mitsubishi Foundation; SolidWorks
Japan K.K.; and in part by the DYDEN Corporation.
CR Atkinson R., 1993, MR BEANS DIARY
CHAPLIN Charlie, 2012, MY AUTOBIOGRAPHY
Endo Nobutsuna, 2011, J ROBOTICS MECHATRON, V23, P969
Gelkopf M., 2011, EVIDENCE BASED COMPL, V2011
Griffin HJ, 2015, IEEE T AFFECT COMPUT, V6, P165, DOI 10.1109/TAFFC.2015.2390627
Hayashi K, 2008, INT J HUM ROBOT, V5, P67, DOI 10.1142/S0219843608001315
Ito Y, 2014, IEEE INT CONF ROBOT, P3433, DOI 10.1109/ICRA.2014.6907353
Kaneko K, 2011, IEEE INT C INT ROBOT, P4400, DOI 10.1109/IROS.2011.6048074
Kishi T, 2014, IEEE INT CONF ROBOT, P1965, DOI 10.1109/ICRA.2014.6907119
LASKOWSKI K, 2008, MACHINE LEARNING MUL, V5237, P149
Lenarcic J, 2006, ROBOTICA, V24, P105, DOI 10.1017/S0263574705001906
Metta G., 2008, P 8 WORKSH PERF METR, P50, DOI DOI 10.1145/1774674.1774683
Morris D., 1994, BODYTALK WORLD GUIDE
Okada M, 2001, IEEE INT CONF ROBOT, P348, DOI 10.1109/ROBOT.2001.932576
Sakai N., 2006, P 1 IEEE RAS EMBS IN, P982
[Anonymous], 1875, ILLUSTRATIONS UNIVER, P194
Suls J.M., 1972, PSYCHOL HUMOR, P81
Umetani T., 2014, J ROBOTICS MECHATRON, V26, P662
Wiest Gerald, 2009, 2009 2nd Conference on Human System Interactions (HSI 2009),
P367, DOI 10.1109/HSI.2009.5091007
[Anonymous], 2007, PHYSL JOINTS, V1
NR 20
TC 1
Z9 1
U1 0
U2 2
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2377-3766
J9 IEEE ROBOT AUTOM LET
JI IEEE Robot. Autom. Lett.
PD JUL
PY 2016
VL 1
IS 2
BP 1081
EP 1088
DI 10.1109/LRA.2016.2530871
PG 8
WC Robotics
SC Robotics
GA FK7ZO
UT WOS:000413726900064
DA 2018-01-22
ER

PT J
AU Jovic, J
Escande, A
Ayusawa, K
Yoshida, E
Kheddar, A
Venture, G
AF Jovic, Jovana
Escande, Adrien
Ayusawa, Ko
Yoshida, Eiichi
Kheddar, Abderrahmane
Venture, Gentiane
TI Humanoid and Human Inertia Parameter Identification Using Hierarchical
Optimization
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
ID DYNAMICS; MODELS; MASS
AB We propose a method for estimation of humanoid and human links' inertial
parameters. Our approach formulates the problem as a hierarchical quadratic program
by exploiting the linear properties of rigid body dynamics with respect to the
inertia parameters. In order to assess our algorithm, we conducted experiments with
a humanoid robot and a human subject. We compared ground reaction forces and
moments estimated from force measurements with those computed using identified
inertia parameters and movement information. Our method is able to accurately
reconstruct ground reaction forces and force moments. Moreover, our method is able
to correctly estimate the masses of the robot's links and to accurately detect
additional masses placed on the human subject during the experiments.
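An illustrative aside (not part of this record): the linearity mentioned above means the measured wrench can be written as a regressor times the inertial parameters, and priorities can then be resolved lexicographically. The sketch below solves equality-only lexicographic least squares by null-space projection, which is only a crude stand-in for the paper's hierarchical QP (which also handles inequality constraints); the regressor, data, and prior values are placeholders.

import numpy as np

def lexicographic_lsq(tasks):
    # Solve min ||A1 x - b1||, then min ||A2 x - b2|| inside the remaining
    # free directions, and so on, by projecting into successive null spaces.
    n = tasks[0][0].shape[1]
    x, Z = np.zeros(n), np.eye(n)
    for A, b in tasks:
        AZ = A @ Z
        z, *_ = np.linalg.lstsq(AZ, b - A @ x, rcond=None)
        x = x + Z @ z
        _, s, Vt = np.linalg.svd(AZ)
        rank = int((s > 1e-10 * s[0]).sum())
        Z = Z @ Vt[rank:].T
        if Z.shape[1] == 0:
            break
    return x

rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 10))        # one-sample wrench regressor: w = Y @ phi
phi_true = rng.uniform(0.1, 2.0, 10)
w = Y @ phi_true
phi_prior = np.full(10, 1.0)            # e.g., anthropometric-table guesses
# Priority 1: reproduce the measured wrench; priority 2: stay near the prior.
phi = lexicographic_lsq([(Y, w), (np.eye(10), phi_prior)])
print("wrench residual:", float(np.linalg.norm(Y @ phi - w)))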
C1 [Jovic, Jovana; Escande, Adrien; Ayusawa, Ko; Yoshida, Eiichi; Kheddar,
Abderrahmane] CNRS AIST Joint Robot Lab, CRT UMI3218, Tsukuba, Ibaraki 3058568,
Japan.
[Kheddar, Abderrahmane] CNRS UM2 LIRMM, Interact Digital Human Grp, UMR 5506, F-
34095 Montpellier, France.
[Venture, Gentiane] Tokyo Univ Agr & Technol, Dept Mech Syst Engn, Tokyo
1830057, Japan.
RP Jovic, J (reprint author), CNRS AIST Joint Robot Lab, CRT UMI3218, Tsukuba,
Ibaraki 3058568, Japan.
EM jovic.jovan.jovana@gmail.com; adrien_escande@yahoo.fr;
k.ayusawa@aist.go.jp; e.yoshida@aist.go.jp; kheddar@gmail.com;
venture@cc.tuat.ac.jp
RI Yoshida, Eiichi/M-3756-2016; Ayusawa, Ko/M-4812-2016; Venture,
Gentiane/E-7060-2013
OI Yoshida, Eiichi/0000-0002-3077-6964; Ayusawa, Ko/0000-0001-8188-4204;
Venture, Gentiane/0000-0001-7767-4765
FU Japan Society for the Promotion of Science (JSPS) [25-03796, 25-03797];
New Energy and Industrial Technology Development Organization (NEDO);
International R&D and Demonstration Project on Robotics Field (USA); R&D
on Disaster Response Robot simulator based on the Choreonoid framework
FX This research is partially supported by the Japan Society for the
Promotion of Science (JSPS) Grant-in-Aid for JSPS Fellows (25-03796 and
25-03797) and the New Energy and Industrial Technology Development
Organization (NEDO) and the International R&D and Demonstration Project
on Robotics Field (USA) and the R&D on Disaster Response Robot simulator
based on the Choreonoid framework.
CR ACKLAND TR, 1988, INT J SPORT BIOMECH, V4, P146, DOI 10.1123/ijsb.4.2.146
Audren H, 2014, IEEE INT C INT ROBOT, P4030, DOI 10.1109/IROS.2014.6943129
Ayusawa K., 2011, P 2011 IEEE INT C RO, P6282
Ayusawa K., 2008, IEEE RAS INT C HUM R
Ayusawa K, 2014, INT J ROBOT RES, V33, P446, DOI 10.1177/0278364913495932
Chandler R.F., 1975, AMRLTR70137 WRIGHT P
Cheng CK, 2000, CLIN BIOMECH, V15, P559, DOI 10.1016/S0268-0033(00)00016-4
Clauser C. E., 1969, AMRLTL6970 WADC WRIG
De Lasa M. I., 2010, P ACM SIGGRAPH
Dempster W. T., 1955, 55159 WADC AER MED R
Diaz-Rodriguez M, 2010, MECH MACH THEORY, V45, P1337, DOI
10.1016/j.mechmachtheory.2010.04.007
Dumas R, 2007, J BIOMECH, V40, P1651, DOI 10.1016/j.jbiomech.2006.07.016
Dumas R, 2007, J BIOMECH, V40, P543, DOI 10.1016/j.jbiomech.2006.02.013
Durkin J.L., 2003, ROUTLEDGE HDB BIOMEC
English C, 2010, INT J STROKE, V5, P395, DOI 10.1111/j.1747-4949.2010.00467.x
Escande A, 2014, INT J ROBOT RES, V33, P1006, DOI 10.1177/0278364914521306
GAUTIER M, 1992, INT J ROBOT RES, V11, P362, DOI 10.1177/027836499201100408
Gautier M., 2013, IEEE INT C ADV INT M
HUANG HK, 1983, J BIOMECH, V16, P821, DOI 10.1016/0021-9290(83)90006-4
Iwasaki T., 2012, IEEE RAS INT C HUM R
Jamisola Jr R. S., 2009, INT C MECH TECHN CEB
Jovic J., 2015, P IEEE RSJ INT C INT, P2762
KANEKO K, 2004, IEEE INT C ROB AUT N
Kanoun P., 2011, IEEE T ROBOT, V27, P785
KHOSLA PK, 1987, ESTIMATION ROBOT DYN
Kinsheel A., 2010, 11 INT C CONTR AUT R
Kozlowski K., 1998, MODELLING IDENTIFICA
MARTIN PE, 1989, J BIOMECH, V22, P367, DOI 10.1016/0021-9290(89)90051-1
MAYEDA H, 1990, IEEE T ROBOTIC AUTOM, V6, P312, DOI 10.1109/70.56663
McConville J. T., 1990, AFAMRLTR80119 WADC W
McConville J. T., 1976, P 6 C INT ERG ASS, P379
MUNGIOLE M, 1990, J BIOMECH, V23, P1039, DOI 10.1016/0021-9290(90)90319-X
Rao G, 2006, J BIOMECH, V39, P1531, DOI 10.1016/j.jbiomech.2005.04.014
Robertson DGE, 2014, RES METHODS BIOMECHA
Rossi M, 2013, J SPORT SCI MED, V12, P761
Rynkiewicz M., 2013, TRENDS SPORT SCI, V1, P47
Skogstad S. A., 2013, P INT C NEW INT MUS, P196
STOUDT HW, 1981, HUM FACTORS, V23, P29
Swevers J, 1997, IEEE T ROBOTIC AUTOM, V13, P730, DOI 10.1109/70.631234
VAUGHAN CL, 1982, J BIOMECH ENG-T ASME, V104, P38, DOI 10.1115/1.3138301
Venture G., 2009, IFAC INT C SYST ID S
Venture G., 2009, IEEE EMBC INT C ENG
Venture G., 2009, IEEE RAS INT C ROB A
Winter DA, 1991, BIOMECHANICS MOTOR C
Yamane K., 2004, SPRINGER TRACTS ADV
Young J. W., 1983, FAAM8316 CIV AER I
ZATSIORSKY V. M, 1985, BIOMECHANICS B, P233
Zatsiorsky V., 1983, BIOMECHANICS PERFORM, P71
Zatsiorsky V., 1983, BIOMECHANICS B, P1152
NR 49
TC 6
Z9 6
U1 1
U2 16
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD JUN
PY 2016
VL 32
IS 3
BP 726
EP 735
DI 10.1109/TRO.2016.2558190
PG 10
WC Robotics
SC Robotics
GA DP5IG
UT WOS:000378528900019
DA 2018-01-22
ER

PT J
AU Cisneros, R
Yokoi, K
Yoshida, E
AF Cisneros, Rafael
Yokoi, Kazuhito
Yoshida, Eiichi
TI Impulsive Pedipulation of a Spherical Object with 3D Goal Position by a
Humanoid Robot: A 3D Targeted Kicking Motion Generator
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Humanoid; impulsive pedipulation; kicking motion; 3D goal position
ID SOCCER-BALL; MANIPULATION
AB This paper describes an algorithm that enables a humanoid robot to perform an
impulsive pedipulation task on a spherical object in the environment; that is, by
using its feet to exert an impulsive force capable of driving the object to a 3D
goal position while achieving certain motion characteristics. This is done by
planning a suitable motion for the legs of the humanoid that is capable of exerting
the required impact conditions on the spherical object while maintaining the
dynamic stability of the robot. As a case study of this algorithm's implementation,
we consider the free kick in soccer. Finally, we provide simulation and
experimental results intended to show the validity of the algorithm.
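A hypothetical sketch (not drawn from the record): the 3D goal condition above ultimately reduces to giving the ball a suitable post-impact velocity. A point-mass, drag-free ballistic reduction is shown below; the mass, gravity, and flight-time values are placeholders, and the paper's impact model, restitution handling, and leg-motion planning are not reproduced.

import math

def kick_velocity(goal, flight_time, ball_mass=0.45, g=9.81):
    # Post-impact velocity that carries a point-mass ball from the origin to
    # `goal` = (x, y, z) in metres after `flight_time` seconds, plus the
    # impulse needed if the ball starts at rest (drag and spin neglected).
    dx, dy, dz = goal
    v = (dx / flight_time,
         dy / flight_time,
         (dz + 0.5 * g * flight_time ** 2) / flight_time)
    impulse = tuple(ball_mass * vi for vi in v)
    return v, impulse

v, impulse = kick_velocity((6.0, 1.0, 0.5), flight_time=0.8)
print("launch velocity [m/s]:", v)
print("required impulse [N*s]:", impulse)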
C1 [Cisneros, Rafael] Natl Inst Adv Ind Sci & Technol, Humanoid Res Grp, Tsukuba
Cent 2,1-1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
[Yokoi, Kazuhito] Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst,
Dept Informat Technol & Human Factors, Tsukuba Cent 2,1-1-1 Umezono, Tsukuba,
Ibaraki 3058568, Japan.
[Yoshida, Eiichi] Natl Inst Adv Ind Sci & Technol, CNRS AIST JRL Joint Robot
Lab, CRT UMI3218, Tsukuba Cent 2,1-1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
RP Cisneros, R (reprint author), Natl Inst Adv Ind Sci & Technol, Humanoid Res Grp,
Tsukuba Cent 2,1-1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
EM rafael.cisneros@aist.go.jp; kazuhito.yokoi@aist.go.jp;
e.yoshida@aist.go.jp
RI Cisneros Limon, Rafael/M-5363-2016; Yokoi, Kazuhito/K-2046-2012;
Yoshida, Eiichi/M-3756-2016
OI Cisneros Limon, Rafael/0000-0002-0850-070X; Yokoi,
Kazuhito/0000-0003-3942-2027; Yoshida, Eiichi/0000-0002-3077-6964
CR Barfield WR, 1998, CLIN SPORT MED, V17, P711, DOI 10.1016/S0278-5919(05)70113-7
Botsch M., 2010, POLYGON MESH PROCESS
Brody H., 2005, PHYS TEACH, V43, P503
Choi JY, 2005, IEEE INT CONF ROBOT, P2834
Cisneros R., 2012, IEEE INT C ROB BIOM, P871
Cisneros R., 2013, IEEE RAS INT C HUM R, P154
FONSECA CM, 1993, PROCEEDINGS OF THE FIFTH INTERNATIONAL CONFERENCE ON GENETIC
ALGORITHMS, P416
Gembicki F. W., 1974, THESIS CASE W RESERV
Harper D., 2010, ONLINE ETYMOLOGY DIC
Huang WH, 1998, IEEE INT CONF ROBOT, P1077, DOI 10.1109/ROBOT.1998.677233
Kajita S., 2003, P IEEE RSJ INT C INT, V2, P1644
Kajita Shuuji, 2005, INTRO HUMANOID ROBOT
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
KHATIB O, 1995, INT J ROBOT RES, V14, P19, DOI 10.1177/027836499501400103
Lengagne S, 2013, INT J ROBOT RES, V32, P1104, DOI 10.1177/0278364913478990
Lengagne S, 2010, IEEE INT C INT ROBOT, P698, DOI 10.1109/IROS.2010.5649233
Lynch KM, 1996, INT J ROBOT RES, V15, P533, DOI 10.1177/027836499601500602
Lynch K. M., 1992, IEEE INT C ROB AUT, V3, P2269
Mason M., 1993, P IEEE RSJ IROS, V1, P152, DOI DOI 10.1109/IR0S.1993.583093
Miossec S, 2006, 2006 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS,
VOLS 1-3, P299, DOI 10.1109/ROBIO.2006.340170
Muller J., 2010, ROBOCUP 2010, P109
Nakaoka S., 2007, IEEE RSJ INT C INT R, P3641
Nakaoka S, 2010, IEEE INT C INT ROBOT, P1675, DOI 10.1109/IROS.2010.5649038
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
Press W. H, 1992, NUMERICAL RECIPES C
Salomon D., 1999, COMPUTER GRAPHICS GE
SCHEMPF H, 1995, IEEE INT CONF ROBOT, P1314, DOI 10.1109/ROBOT.1995.525462
Stronge W. J, 2000, IMPACT MECH
Sugihara T, 2009, IEEE T ROBOT, V25, P658, DOI 10.1109/TRO.2008.2012336
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
WALKER ID, 1994, IEEE T ROBOTIC AUTOM, V10, P670, DOI 10.1109/70.326571
WANG Y, 1992, J APPL MECH-T ASME, V59, P635, DOI 10.1115/1.2893771
Wenk F., 2013, ROBOCUP 2013, P25
Westervelt E.R., 2007, FEEDBACK CONTROL DYN
White C., 2011, PROJECTILE DYNAMICS
Yi S.-J., 2013, IEEE RAS INT C HUM R
Zhu C, 1996, IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL CONFERENCE
ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING WITH DYNAMIC
WORLDS, VOLS 1-3, P911, DOI 10.1109/IROS.1996.571073
NR 37
TC 0
Z9 0
U1 0
U2 3
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2016
VL 13
IS 2
AR 1650003
DI 10.1142/S0219843616500031
PG 43
WC Robotics
SC Robotics
GA DN8CD
UT WOS:000377305900001
DA 2018-01-22
ER

PT J
AU Tripathi, GN
Wagatsuma, H
AF Tripathi, Gyanendra Nath
Wagatsuma, Hiroaki
TI PCA-Based Algorithms to Find Synergies for Humanoid Robot Motion
Behavior
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Robot synergy for motion behavior; degree of freedom minimization; PCA
for synergy; via-point method; humanoid robot
ID MUSCLE SYNERGIES; MOVEMENT; MODEL
AB Applying principal component analysis (PCA) to find synergy signals for a specific
robot motion is a standard method. However, a plain PCA implementation yields synergies
on a purely quantitative basis. The algorithms proposed in this paper enhance the
qualitative measure of PCA in order to locate well-coordinated synergy signals. Two main
control strategies of the central nervous system (CNS) are taken into account: first,
the CNS strategy of generating separate synergies for individual limbs, and second, the
generation of complex movement trajectories using via-points. The proposed algorithms
find the synergies without loss of generality of implementation. The humanoid robot NAO
is used as the robotic platform for testing. Synergies for a group of motors are
calculated by applying the algorithms to the robot's motor position sensor data for
three motion patterns: (1) knee-bend sitting-standing, (2) sitting-standing on a chair,
and (3) walking. The improvement is statistically assessed by computing the error
between the original and reconstructed signals for the proposed algorithms and applying
a Z-test to the error signals, and by calculating the goodness of fit between the
original and reconstructed signals.
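As a rough illustration of the quantitative PCA step described above, the following
minimal Python sketch projects motor position data onto a few principal components and
computes the reconstruction error; it is not the authors' algorithm, and the data file
name and synergy count are placeholders.

    # Minimal sketch: PCA-based synergy extraction and reconstruction error (illustrative only).
    import numpy as np

    def pca_synergies(joint_data, n_synergies):
        # joint_data: (samples, joints) matrix of motor position readings.
        mean = joint_data.mean(axis=0)
        centered = joint_data - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        synergies = vt[:n_synergies]              # (n_synergies, joints) principal directions
        scores = centered @ synergies.T           # time-varying synergy activations
        reconstruction = scores @ synergies + mean
        return synergies, reconstruction

    # Hypothetical usage: reconstruction error of the kind compared by the paper's Z-test.
    # data = np.loadtxt("nao_knee_bend_positions.csv", delimiter=",")
    # syn, rec = pca_synergies(data, n_synergies=3)
    # rmse = np.sqrt(np.mean((data - rec) ** 2))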
C1 [Tripathi, Gyanendra Nath; Wagatsuma, Hiroaki] Kyushu Inst Technol, Grad Sch
Life Sci & Syst Engn, Wakamatsu Ku, 2-4 Hibikino, Kitakyushu, Fukuoka 8080196,
Japan.
[Wagatsuma, Hiroaki] RIKEN BSI, 2-1 Hirosawa, Wako, Saitama 3510198, Japan.
RP Tripathi, GN (reprint author), Kyushu Inst Technol, Grad Sch Life Sci & Syst
Engn, Wakamatsu Ku, 2-4 Hibikino, Kitakyushu, Fukuoka 8080196, Japan.
EM tripathi-gyanendra-nath@edu.brain.kyutech.ac.jp;
waga@brain.kyutech.ac.jp
FU Kyushu Institute of Technology; JSPS [26240032]
FX This work was supported in part by Kyushu Institute of Technology under
DBSE Brain-IS Research Project and MSSC promotion for enhancement of the
KIT international presence and partly supported by JSPS 26240032. The
authors would like to thank Dr. Shirai Yoshihito who kindly offered us
his expertise to obtain necessary devices for this experimental
measurement. It is also our opportunity to acknowledge the lab members,
specially Yuya Katsuki and Kenta Shoji for their help to support the
robot, while performing sit-stand experiment.
CR Bernstein N, 1967, COORDINATION REGULAT
Bizzi E, 2008, BRAIN RES REV, V57, P125, DOI 10.1016/j.brainresrev.2007.08.004
BIZZI E, 1992, BEHAV BRAIN SCI, V15, P603, DOI 10.1017/S0140525X00072538
Boylls C. C., 1975, THEORY CEREBELLAR FU, P76
Crocher V, 2012, IEEE T NEUR SYS REH, V20, P247, DOI 10.1109/TNSRE.2012.2190522
Diedrichsen J, 2010, TRENDS COGN SCI, V14, P31, DOI 10.1016/j.tics.2009.11.004
FLASH T, 1985, J NEUROSCI, V5, P1688
Gallistel CR, 2013, ORG ACTION NEW SYNTH
Gioioso G., 2011, IEEE INT C ROB AUT 2
Goodman D., 1985, DIFFERING PERSPECTIV
Ilg W., 2004, INT J HUM ROBOT, V1, P613
Ison M, 2015, IEEE T ROBOT, V31, P259, DOI 10.1109/TRO.2015.2395731
Kaminski TR, 2007, GAIT POSTURE, V26, P256, DOI 10.1016/j.gaitpost.2006.09.006
Katragadda L. K., 1997, LANGUAGE FRAMEWORK R
Kawato M, 1999, CURR OPIN NEUROBIOL, V9, P718, DOI 10.1016/S0959-4388(99)00028-8
Kim S., 2014, INT S EXP ROB ISER M
Kuniyoshi Y., 2005, INT J HUM ROBOT, V4, P415
Kuppuswamy N, 2014, FRONT COMPUT NEUROSC, V8, DOI 10.3389/fncom.2014.00063
LEE WA, 1984, J MOTOR BEHAV, V16, P135
Lim B, 2005, IEEE INT CONF ROBOT, P4630
Mussa-Ivaldi A., 1992, BIOL CYBERN, V67, P491
Ormoneit D, 2005, IMAGE VISION COMPUT, V23, P1264, DOI
10.1016/j.imavis.2005.09.004
Overduin SA, 2008, J NEUROSCI, V28, P880, DOI 10.1523/JNEUROSCI.2869-07.2008
Purves D., 2001, NEUROSCIENCE
Raunhardt D, 2009, VISUAL COMPUT, V25, P509, DOI 10.1007/s00371-009-0336-2
Romero J, 2013, IEEE T ROBOT, V29, P1342, DOI 10.1109/TRO.2013.2272249
ROSENBAUM DA, 2009, HUMAN MOTOR CONTROL
Sentis L., 2005, INT J HUM ROBOT, V2, P505, DOI DOI 10.1142/S0219843605000594
Sung C, 2013, INT J ADV ROBOT SYST, V10, DOI 10.5772/56747
Tilmanne J, 2010, LECT NOTES COMPUT SC, V6459, P363
Torres-Oviedo G, 2006, J NEUROPHYSIOL, V96, P1530, DOI 10.1152/jn.00810.2005
Tresch MC, 2006, J NEUROPHYSIOL, V95, P2199, DOI 10.1152/jn.00222.2005
TURVEY MT, 1991, MAKING THEM MOVE, P157
UNO Y, 1989, BIOL CYBERN, V61, P89
Wada Y., 1994, ADV NEURAL INFORM PR, P727
Wu F., 2014, P ROB SCI SYST
NR 36
TC 0
Z9 0
U1 2
U2 6
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2016
VL 13
IS 2
AR 1550037
DI 10.1142/S0219843615500371
PG 21
WC Robotics
SC Robotics
GA DN8CD
UT WOS:000377305900006
DA 2018-01-22
ER

PT J
AU Ayusawa, K
Yoshida, E
Imamura, Y
Tanaka, T
AF Ayusawa, Ko
Yoshida, Eiichi
Imamura, Yumeko
Tanaka, Takayuki
TI New evaluation framework for human-assistive devices based on humanoid
robotics
SO ADVANCED ROBOTICS
LA English
DT Article
DE assistive device evaluation; identification; Humanoid robot; joint
torque estimation
ID MOTION-CAPTURE DATA; INERTIAL PARAMETERS; MANIPULATOR; MODELS;
IDENTIFICATION; LOADS
AB This paper presents a new application of a humanoid robot as an evaluator of
human-assistive devices. A reliable and objective evaluation framework for assistive
devices is needed to establish industrial standards so that such devices can be used in
a wide range of applications. The framework combines a recent humanoid robot that
closely resembles the human body, techniques for retargeting human motion to the
humanoid, and identification techniques for the robot's mechanical properties. We also
present two approaches for estimating supporting torques from sensor data, each suited
to different situations. Using a general formulation of wire-driven multi-body systems,
the supporting torque of passive assistive devices is also formulated. We evaluate the
passive assistive wear 'Smart Suit Lite' (SSL) as an example device and use HRP-4 as the
humanoid platform.
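For orientation, the identification of a robot's mechanical properties mentioned above
is commonly posed as a linear least-squares problem in the inertial parameters; the
sketch below shows only that generic form, with the regressor matrix (which depends on
the specific robot model) assumed to be given.

    # Minimal sketch: least-squares identification of base inertial parameters (generic form only).
    import numpy as np

    def identify_parameters(Y_stack, tau_stack):
        # Y_stack: (N*dof, n_params) regressor stacked over N samples; tau_stack: (N*dof,) torques.
        phi, *_ = np.linalg.lstsq(Y_stack, tau_stack, rcond=None)
        return phi

    # A supporting torque could then (hypothetically) be estimated as the torque not explained
    # by the identified rigid-body model: tau_support = tau_measured - Y(q, dq, ddq) @ phi.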
C1 [Ayusawa, Ko; Yoshida, Eiichi; Imamura, Yumeko] Natl Inst Adv Ind Sci & Technol
IS AIST, Intelligent Syst Res Inst, CNRS AIST JRL Joint Robot Lab, RL UMI3218,
Tsukuba, Ibaraki, Japan.
[Tanaka, Takayuki] Hokkaido Univ, Grad Sch Informat Technol, Sapporo, Hokkaido,
Japan.
RP Ayusawa, K (reprint author), Natl Inst Adv Ind Sci & Technol IS AIST,
Intelligent Syst Res Inst, CNRS AIST JRL Joint Robot Lab, RL UMI3218, Tsukuba,
Ibaraki, Japan.
EM k.ayusawa@aist.go.jp
RI Imamura, Yumeko/M-3896-2016; Yoshida, Eiichi/M-3756-2016; Ayusawa,
Ko/M-4812-2016
OI Imamura, Yumeko/0000-0003-3296-4384; Yoshida,
Eiichi/0000-0002-3077-6964; Ayusawa, Ko/0000-0001-8188-4204
FU Project to Promote the Development and Introduction of Robotic Devices
for Nursing Care - METI/AMED; JSPS KAKENHI [25820082]
FX This research was supported by Project to Promote the Development and
Introduction of Robotic Devices for Nursing Care funded by METI/AMED;
JSPS KAKENHI [grant number 25820082].
CR Albu-Schaffer A, 2007, IND ROBOT, V34, P376, DOI 10.1108/01439910710774386
ARMSTRONGHELOUVRY B, 1994, AUTOMATICA, V30, P1083, DOI 10.1016/0005-
1098(94)90209-7
ATKESON CG, 1986, INT J ROBOT RES, V5, P101, DOI 10.1177/027836498600500306
Ayusawa K, 2014, INT J ROBOT RES, V33, P446, DOI 10.1177/0278364913495932
Ayusawa K, 2014, MECH MACH THEORY, V74, P274, DOI
10.1016/j.mechmachtheory.2013.12.015
GAUTIER M, 1992, INT J ROBOT RES, V11, P362, DOI 10.1177/027836499201100408
Imamura Y., 2011, J ROBOTICS MECHATRON, V23, P58
Jamisola RS, 2005, ADV ROBOTICS, V19, P613, DOI 10.1163/156855305323383820
Khalil W., 2002, MODELING IDENTIFICAT
MAYEDA H, 1990, IEEE T ROBOTIC AUTOM, V6, P312, DOI 10.1109/70.56663
Nakamura Y, 2005, IEEE T ROBOT, V21, P58, DOI 10.1109/TRO.2004.833798
Nakaoka S, 2007, INT J ROBOT RES, V26, P829, DOI 10.1177/0278364907079430
Santaguida PL, 2005, CLIN BIOMECH, V20, P906, DOI
10.1016/j.clinbiomech.2005.06.001
Yoshida K, 2000, INT J ROBOT RES, V19, P498, DOI 10.1177/02783640022066996
NR 14
TC 2
Z9 2
U1 0
U2 7
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD APR 17
PY 2016
VL 30
IS 8
BP 519
EP 534
DI 10.1080/01691864.2016.1145596
PG 16
WC Robotics
SC Robotics
GA DI1DQ
UT WOS:000373236600002
DA 2018-01-22
ER

PT J
AU Ma, G
Gao, JY
Yu, ZG
Chen, XC
Huang, Q
Liu, YH
AF Ma, Gan
Gao, Junyao
Yu, Zhangguo
Chen, Xuechao
Huang, Qiang
Liu, Yunhui
TI Development of a Socially Interactive System with Whole-Body Movements
for BHR-4
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Human-robot interaction; Whole-body movement; Social robot; Humanoid
robot
ID ROBOTS; MOTION; COMMUNICATION; EMOTION; CAPTURE; DESIGN; PEOPLE
AB Humans have long communicated with one another through voice, facial expressions,
and body movements. A humanoid robot that interacts in a similarly natural, human-like
way is more likely to be accepted by people; to date, however, most existing humanoid
robots have had difficulty interacting with humans in such a way. This study addresses
this issue and develops a socially interactive system for enhancing the natural
communication ability of a humanoid robot. The system, implemented on the android robot
BHR-4, features hearing, voice conversation, and facial and body emotional expression
capabilities. A full-body social motion planner is then presented, whose objective is to
control the whole-body motion of the robot in a way similar to human motion. Finally,
experiments on the robot's interactions with humans are conducted in an indoor
environment. The results show that combining verbal behavior with both facial
expressions and body movements is rated more favorably than verbal behavior alone,
verbal behavior combined only with facial expressions, or verbal behavior combined only
with body movements.
C1 [Ma, Gan; Gao, Junyao; Yu, Zhangguo; Chen, Xuechao; Huang, Qiang; Liu, Yunhui]
Beijing Inst Technol, Sch Mechatron Engn, Intelligent Robot Inst, Beijing 100081,
Peoples R China.
[Ma, Gan] Waseda Univ, Humanoid Robot Inst, Tokyo, Japan.
[Liu, Yunhui] Chinese Univ Hong Kong, Dept Mech & Automat Engn, Hong Kong, Hong
Kong, Peoples R China.
RP Gao, JY (reprint author), Beijing Inst Technol, Sch Mechatron Engn, Intelligent
Robot Inst, Beijing 100081, Peoples R China.
EM gaojunyao@bit.edu.cn
RI Liu, Yun-Hui/B-7441-2008
OI YU, Zhangguo/0000-0003-0041-8100
FU National Natural Science Foundation of China [61320106012, 61375103,
61533004, 61273348, 61175077, 61321002]; 863 Program of China
[2014AA041602, 2015AA042305, 2015AA043202]; Key Technologies Research
and Development Program [2015BAF1-3B01, 2015BAK35B01]; Beijing Natural
Science Foundation [4154084]; "111" Project [B08043]
FX This work was supported in part by the National Natural Science
Foundation of China under grants 61320106012, 61375103, 61533004,
61273348, 61175077, and 61321002, in part by the 863 Program of China
under Grant 2014AA041602, Grant 2015AA042305, and Grant 2015AA043202, in
part by the Key Technologies Research and Development Program under
Grant 2015BAF1-3B01 and Grant 2015BAK35B01, in part by the Beijing
Natural Science Foundation under Grant 4154084, and in part by the "111"
Project under Grant B08043.
CR Ahn H.S., 2014, ADV HUMAN COMPUTER I, V2014, P1, DOI DOI 10.1155/2014/630808
Banham KM, 1968, SOC COMPETENCE INVEN
Bartneck C, 2004, P DESIGN EMOTION, P32
Becker-Asano C., 2011, J ARTIF INTELL SOFT, V1, P215
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Breemen AV, 2004, WORKSH SHAP HUM ROB
Burgard W., 1998, P NAT C ART INT, P11
Cosentino S, 2013, 2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
BIOMIMETICS (ROBIO), P1396, DOI 10.1109/ROBIO.2013.6739661
de Ruyter B, 2005, INTERACT COMPUT, V17, P522, DOI 10.1016/j.intcom.2005.03.003
Destephe M, 2013, 2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS
(ROBIO), P1276, DOI 10.1109/ROBIO.2013.6739640
Ekman P., 1978, FACIAL ACTION CODING
Ekman P., 1997, WHAT FACE REVEALS BA
Fukuda T, 2001, ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P14, DOI
10.1109/ROMAN.2001.981870
Gao J, 2011, 18 CISM IFTOMM S ROB, P433
Ge SS, 2011, MACH VISION APPL, V24, P103
Gockley R, 2006, INTERACTIONS MOODY R
Gough Harrison, 1968, CHAPIN SOCIAL INSIGH
Guerra-Filho G, 2012, IMAGE VISION COMPUT, V30, P251, DOI
10.1016/j.imavis.2011.12.002
Hara F, 2001, ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P504, DOI
10.1109/ROMAN.2001.981954
Hashimoto T, 2011, INT J ADV ROBOT SYST, V8, P51
He HS, 2011, INT J SOC ROBOT, V3, P457, DOI 10.1007/s12369-011-0105-z
He HS, 2014, AUTON ROBOT, V36, P225, DOI 10.1007/s10514-013-9346-z
Heerink M, 2010, INT J SOC ROBOT, V2, P361, DOI 10.1007/s12369-010-0068-5
Huang QA, 2010, ROBOTICA, V28, P737, DOI 10.1017/S0263574709990439
Kanda T, 2004, HUM-COMPUT INTERACT, V19, P61, DOI 10.1207/s15327051hci1901&2_4
Kanda T., 2009, P 4 ACM IEEE INT C H, P173
Kanda T, 2010, IEEE T ROBOT, V26, P897, DOI 10.1109/TRO.2010.2062550
Kaneko K, 2011, IEEE INT C INT ROBOT, P4392, DOI 10.1109/IROS.2011.6048061
Kozima H., 2005, IEEE INT WORKSH ROB, P341
Kuehne H, 2013, HIGH PERFORMANCE COMPUTING IN SCIENCE AND ENGINEERING '12:
TRANSACTIONS OF THE HIGH PERFORMANCE COMPUTING CENTER, STUTTGART (HLRS) 2012, P571,
DOI 10.1007/978-3-642-33374-3_41
Lin CC, 2009, 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS
(ROBIO 2009), VOLS 1-4, P492, DOI 10.1109/ROBIO.2009.5420696
Ma G, 2014, P ROMANSY 2014 20 CI, P501
Ma YL, 2006, BEHAV RES METHODS, V38, P134, DOI 10.3758/BF03192758
Mandery C., 2015, IEEE INT C ROB AUT, P329
Mark L, 2009, HALL NONVERBAL COMMU
MEHRABIAN A, 1968, PSYCHOL TODAY, V2, P53
Moos FA, 1927, SOCIAL INTELLIGENCE
Muller M, 2007, U BONN COMPUTER GRAP
Nishio S, 2007, GEMINOID TELEOPERATE, P343
Oh J.-H., 2006, P IEEE RSJ INT C INT, P1428, DOI DOI 10.1109/IROS.2006.281935
Sakamoto D, 2005, INT J HUM-COMPUT ST, V62, P247, DOI
10.1016/j.ijhcs.2004.11.001
Sakamoto D., 2007, P ACM IEEE INT C HUM, P193, DOI DOI 10.1145/1228716.1228743
Shiomi M., 2006, P 1 ACM SIGCHI SIGAR, P305, DOI DOI 10.1145/1121241.1121293
Tanaka F, 2007, P NATL ACAD SCI USA, V104, P17954, DOI 10.1073/pnas.0707769104
Tasaki T, 2014, ADV ROBOTICS, V28, P39, DOI 10.1080/01691864.2013.854457
Thrun S, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P1999, DOI 10.1109/ROBOT.1999.770401
Vlachos E, 2015, PROC 9 KES INT C SOR, P109
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Wada K, 2004, P IEEE, V92, P1780, DOI 10.1109/JPROC.2004.835378
Yu ZG, 2014, ADV ROBOTICS, V28, P379, DOI 10.1080/01691864.2013.867290
Zecca M., 2009, IEEE INT S ROB HUM I, P381, DOI DOI 10.1109/ROMAN.2009.5326184
NR 51
TC 0
Z9 0
U1 0
U2 5
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD APR
PY 2016
VL 8
IS 2
BP 183
EP 192
DI 10.1007/s12369-015-0330-y
PG 10
WC Robotics
SC Robotics
GA DJ7KO
UT WOS:000374390500002
DA 2018-01-22
ER

PT J
AU Sabelli, AM
Kanda, T
AF Sabelli, Alessandra Maria
Kanda, Takayuki
TI Robovie as a Mascot: A Qualitative Study for Long-Term Presence of
Robots in a Shopping Mall
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Communication robots; Social robots; Robots in public place; Qualitative
study
AB This paper reports a qualitative study of a social robot in a local shopping mall in
Japan, exploring how visitors interacted with, understood, and accepted it. In the mall
where we conducted the study, Robovie, a humanoid robot, had been tested for three
years. Against this background of long-term exposure to a social robot, we conducted
short interviews and observations with visitors to the mall. We analyzed the resulting
qualitative data with a grounded-theory approach and identified four common trends: (1)
association of the robot with its location; (2) assignment of future roles to the robot;
(3) perception of the robot as a form of entertainment for children, i.e., as a mascot;
and (4) perception of its autonomy independently of how the robot actually works. One
might expect people to automatically see the robot as a utility, but instead they tended
to consider it a suitable mascot.
C1 [Sabelli, Alessandra Maria] Univ Chicago, Dept Anthropol, 1126 East 59th St,
Chicago, IL USA.
[Kanda, Takayuki] Adv Telecommun Res Inst Int IRC, 2-2-2 Hikaridai, Keihanna Sci
City, Kyoto, Japan.
RP Kanda, T (reprint author), Adv Telecommun Res Inst Int IRC, 2-2-2 Hikaridai,
Keihanna Sci City, Kyoto, Japan.
EM aless@uchicago.edu; kanda@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
FU JST CREST
FX We thank Dr. Koizumi, Mr. Hato, and Ms. Taniguchi for their help. This
research was supported by JST CREST.
CR Bartneck C, 2009, INT J SOC ROBOT, V1, P71, DOI 10.1007/s12369-008-0001-3
Burgard W., 1998, NAT C ART INT, P11
Ferri G, 2011, IEEE INT C ROB AUT I, P65
Forlizzi J, 2007, ACM IEEE INT C HUM R, P129
Forlizzi J, 2006, ACM SIGCHI SIGART HU, P258
Gibson B., 2013, REDISCOVERING GROUND
Glas DF, 2012, J HUM-ROBOT INTERACT, V1, P5, DOI 10.5898/JHRI.1.2.Glas
Glaser B.G., 1967, DISCOVERY GROUNDED T
Gockley R., 2005, IEEE RSJ INT C INT R, P1338, DOI DOI 10.1109/IROS.2005.1545303
Gross H. M., 2008, IEEE INT C SYST MAN, P3471
Heerink M, 2008, ACM IEEE INT C HUM R, P113
Heerink M, 2010, INT J SOC ROBOT, V2, P361, DOI 10.1007/s12369-010-0068-5
Huber A, 2014, J HUM-ROBOT INTERACT, V3, P100, DOI 10.5898/JHRI.3.2.Huber
Iwamura Y, 2011, ACMIEEE INT CONF HUM, P449, DOI 10.1145/1957656.1957816
Kahn PH, 2012, DEV PSYCHOL, V48, P303, DOI 10.1037/a0027033
Kamide H, 2012, P 7 ANN ACM IEEE INT, P49
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Lee M.K., 2012, P CHI 2012, V2012, P695
Mazzolai B, 2008, 5 INT C UB ROB AMB I
Morales Y, 2014, J HUM ROBOT INTERACT, V3, P51
Mutlu B, 2008, ACM IEEE INT C HUM R, P287
Nomura T, 2006, INTERACT STUD, V7, P437, DOI 10.1075/is.7.3.14nom
Pineau J, 2003, ROBOT AUTON SYST, V42, P271, DOI 10.1016/S0921-8890(02)00381-0
Sabelli AM, 2011, ACMIEEE INT CONF HUM, P37, DOI 10.1145/1957656.1957669
Shiomi M, 2007, IEEE INTELL SYST, V22, P25, DOI 10.1109/MIS.2007.37
Siegwart R, 2003, ROBOT AUTON SYST, V42, P203, DOI 10.1016/S0921-8890(02)00376-7
Steinfeld A, 2006, ACM SIGCHI SIGART C, P33, DOI DOI 10.1145/1121241.1121249
Sung J, 2010, INT J SOC ROBOT, V2, P417, DOI 10.1007/s12369-010-0065-8
Thrun S, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P1999, DOI 10.1109/ROBOT.1999.770401
Tolhurst E, 2012, FORUM QUAL SOZIALFOR, V13
Weiss A, 2010, ACMIEEE INT CONF HUM, P23, DOI 10.1109/HRI.2010.5453273
Yanco HA, 2015, J FIELD ROBOT, V32, P420, DOI 10.1002/rob.21568
Zheng KH, 2011, ACMIEEE INT CONF HUM, P379
NR 33
TC 2
Z9 2
U1 5
U2 11
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD APR
PY 2016
VL 8
IS 2
BP 211
EP 221
DI 10.1007/s12369-015-0332-9
PG 11
WC Robotics
SC Robotics
GA DJ7KO
UT WOS:000374390500004
DA 2018-01-22
ER

PT J
AU Takano, W
Nakamura, Y
AF Takano, Wataru
Nakamura, Yoshihiko
TI Incremental statistical learning for integrating motion primitives and
language in humanoid robots
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Motion primitive; Humanoid robot; Natural language
ID IMITATION; SYMBOLS; MODEL
AB We describe a novel approach that allows humanoid robots to incrementally
integrate motion primitives and language expressions, when there are underlying
natural language and motion language modules. The natural language module
represents sentence structure using word bigrams. The motion language module
extracts the relations between motion primitives and the relevant words. Both the
natural language module and the motion language module are expressed as
probabilistic models and, therefore, they can be integrated so that the robots can
both interpret observed motion in the form of sentences and generate the motion
corresponding to a sentence command. Incremental learning is needed for a robot
that develops these linguistic skills autonomously. The algorithm is derived from an
optimization of the natural language and motion language modules, under constraints on
their probabilistic variables, such that the association between the motion primitive
and the sentence in each incrementally added training pair is strengthened. A test based
on interpreting observed motion in the form of sentences demonstrates the validity of
the incremental statistical learning algorithm.
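The natural language module above represents sentence structure with word bigrams; the
following minimal sketch (an assumed stand-in, not the paper's probabilistic
formulation) shows how such a bigram model can be updated incrementally from newly added
training sentences.

    # Minimal sketch: incrementally updated word-bigram model (illustrative only).
    from collections import defaultdict

    class BigramModel:
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def update(self, sentence):
            # Incrementally add one training sentence (a list of words).
            for prev, word in zip(["<s>"] + sentence, sentence + ["</s>"]):
                self.counts[prev][word] += 1

        def prob(self, word, prev):
            # P(word | prev) with simple add-one smoothing over observed successors.
            total = sum(self.counts[prev].values())
            vocab = len(self.counts[prev]) + 1
            return (self.counts[prev][word] + 1) / (total + vocab)

    # model = BigramModel(); model.update(["the", "robot", "waves"])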
C1 [Takano, Wataru; Nakamura, Yoshihiko] Univ Tokyo, Bunkyo Ku, Hongo, Tokyo, Japan.
RP Takano, W (reprint author), Univ Tokyo, Bunkyo Ku, Hongo, Tokyo, Japan.
EM takano@ynl.t.u-tokyo.ac.jp; nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU Japan Society for Promotion of Science [26700021]; Strategic Information
and Communications R&D Promotion Program of Ministry of Internal Affairs
and Communications [142103011]
FX This research was partially supported by a Grant-in-Aid for Young
Scientists (A) (26700021) from the Japan Society for the Promotion of
Science, and the Strategic Information and Communications R&D Promotion
Program (142103011) of the Ministry of Internal Affairs and
Communications.
CR Argall BD, 2009, ROBOT AUTON SYST, V57, P469, DOI 10.1016/j.robot.2008.10.024
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Deacon T. W., 1997, SYMBOLIC SPECIES COE
Gallese V, 1998, TRENDS COGN SCI, V2, P493, DOI 10.1016/S1364-6613(98)01262-5
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Kadone H., 2005, P 2005 IEEE RSJ INT, P2900
Ogata T., 2007, P IEEE RSJ INT C INT
Ogata T, 2007, IEEE INT CONF ROBOT, P2156, DOI 10.1109/ROBOT.2007.363640
Peirce C. S., 1931, COLLECTED PAPERS CS
Rabiner L., 1993, PRENTICE HALL SIGNAL
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Saussure Ferdinand, 1966, COURSE GEN LINGUISTI
Sugita Y, 2005, ADAPT BEHAV, V13, P33, DOI 10.1177/105971230501300102
Takano W, 2015, INT J ROBOT RES, V34, P1314, DOI 10.1177/0278364915587923
Takano W, 2015, ADV ROBOTICS, V29, P115, DOI 10.1080/01691864.2014.985611
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Taylor G. W., 2006, ADV NEURAL INFORM PR, V2006, P1345
NR 19
TC 0
Z9 0
U1 0
U2 4
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD APR
PY 2016
VL 40
IS 4
BP 657
EP 667
DI 10.1007/s10514-015-9486-4
PG 11
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA DF3JY
UT WOS:000371241300005
DA 2018-01-22
ER

PT J
AU Kobayashi, T
Sekiyama, K
Aoyama, T
Hasegawa, Y
Fukuda, T
AF Kobayashi, Taisuke
Sekiyama, Kosuke
Aoyama, Tadayoshi
Hasegawa, Yasuhisa
Fukuda, Toshio
TI Selection of two arm-swing strategies for bipedal walking to enhance
both stability and efficiency
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robots; falling risk; arm swing; bipedal walking; selection of
locomotion
ID LOCOMOTION
AB Walking ability, which involves mainly stability and efficiency, is one of the
most important issues in humanoid robots. Effective use of a robot's arms is
expected to improve its walking ability under its body constraints. Although
several types of arm-swing strategies have been proposed, they are difficult to
execute simultaneously. We propose two-stage integration of these strategies to
enhance both stability and efficiency. A selection algorithm for locomotion (SAL)
selects the appropriate strategy according to the demands of the situation. In the
first stage, two strategies are evaluated. One of them, Ro-SAL, exploits and
compensates the moment of the swing leg through hip rotation and arm swing. The other
strategy, Su-SAL, supports center-of-gravity trajectory tracking based on predictive
control. Ro-SAL is effective in the stable state and in states with internal model
error, whereas Su-SAL is effective in states with external forces and environmental
complexity. In the second stage of the proposed
method (AS-SAL), the robot recognizes the current state and selects the optimal
combination of the two arm-swing strategies. As a result, a humanoid robot can
exhibit more efficient, stable bipedal walking without falling.
C1 [Kobayashi, Taisuke; Sekiyama, Kosuke; Hasegawa, Yasuhisa] Nagoya Univ, Dept
Micronano Syst Engn, 1 Furo Cho, Nagoya, Aichi 4648603, Japan.
[Aoyama, Tadayoshi] Hiroshima Univ, Dept Syst Cybernet, Hiroshima, Japan.
[Fukuda, Toshio] Meijo Univ, Fac Sci & Engn, Nagoya, Aichi, Japan.
[Fukuda, Toshio] Nagoya Univ, Inst Adv Res, Nagoya, Aichi 4648601, Japan.
[Fukuda, Toshio] Beijing Inst Technol, Intelligent Robot Inst, Beijing, Peoples
R China.
RP Kobayashi, T (reprint author), Nagoya Univ, Dept Micronano Syst Engn, 1 Furo
Cho, Nagoya, Aichi 4648603, Japan.
EM kobayashi@robo.mein.nagoya-u.ac.jp
CR Aoustin Y, 2014, MULTIBODY SYST DYN, V32, P55, DOI 10.1007/s11044-013-9378-3
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Fukuda T., 2012, MULTILOCOMOTION ROBO
Kajita S, 2003, ADV ROBOTICS, V17, P131, DOI 10.1163/156855303321165097
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
Kobayashi T, 2015, IEEE T ROBOT, V31, P750, DOI 10.1109/TRO.2015.2426451
Kolter JZ, 2011, INT J ROBOT RES, V30, P150, DOI 10.1177/0278364910390537
Pratt JE, 2006, LECT NOTES CONTR INF, V340, P299
Tlalolini D, 2011, IEEE-ASME T MECH, V16, P310, DOI 10.1109/TMECH.2010.2042458
Ugurlu B, 2012, INT J HUM ROBOT, V9, DOI 10.1142/S0219843612500338
Zhang S, 2013, INT J ADV ROB SYST, V10
NR 11
TC 1
Z9 1
U1 3
U2 11
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD MAR 18
PY 2016
VL 30
IS 6
BP 386
EP 401
DI 10.1080/01691864.2015.1135079
PG 16
WC Robotics
SC Robotics
GA DH5PA
UT WOS:000372841900003
DA 2018-01-22
ER

PT J
AU Sugimoto, N
Tangkaratt, V
Wensveen, T
Zhao, TT
Sugiyama, M
Morimoto, J
AF Sugimoto, Norikazu
Tangkaratt, Voot
Wensveen, Thijs
Zhao, Tingting
Sugiyama, Masashi
Morimoto, Jun
TI Trial and Error Using Previous Experiences as Simulation Models in
Humanoid Motor Learning
SO IEEE ROBOTICS & AUTOMATION MAGAZINE
LA English
DT Article
ID PARAMETER-BASED EXPLORATION; POLICY GRADIENT; SAMPLE REUSE; ROBOTICS
C1 [Sugimoto, Norikazu] Natl Inst Informat & Commun Technol, Osaka, Japan.
[Tangkaratt, Voot; Sugiyama, Masashi] Univ Tokyo, Tokyo 1138654, Japan.
[Wensveen, Thijs] Delft Univ Technol, NL-2600 AA Delft, Netherlands.
[Zhao, Tingting] Tianjin Univ Sci & Technol, Tianjin, Peoples R China.
[Morimoto, Jun] ATR Computat Neurosci Labs, Kyoto, Japan.
RP Sugimoto, N (reprint author), Natl Inst Informat & Commun Technol, Osaka, Japan.
EM xsugi@nict.go.jp; voot@ms.k.u-tokyo.ac.jp; thijswensveen@gmail.com;
tingting@tust.edu.cn; sugi@k.u-tokyo.ac.jp; xmorimo@atr.jp
FU MEXT KAKENHI Grant [23120004]; JSPS KAKENHI Grant [26730141]; NSFC
[61502339]
FX This work was supported by MEXT KAKENHI Grant 23120004, MIC-SCOPE,
"Development of BMI Technologies for Clinical Application'' carried out
under SRPBS by AMED, and NEDO. Part of this study was supported by JSPS
KAKENHI Grant 26730141. This work was also supported by NSFC 61502339.
CR Atkeson C. G., 2002, P NEUR INF PROC SYST, P1643
Atkeson C. G., 1997, P INT C MACH LEARN, V14, P12
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Deisenroth M. P., 2011, P 28 INT C MACH LEAR, P465
Deisenroth MP, 2015, IEEE T PATTERN ANAL, V37, P408, DOI 10.1109/TPAMI.2013.218
Fishman G. S., 1996, MONTE CARLO CONCEPTS
Greensmith E, 2004, J MACH LEARN RES, V5, P1471
Hachiya H, 2011, NEURAL COMPUT, V23, P2798, DOI 10.1162/NECO_a_00199
Kakade S, 2002, ADV NEUR IN, V14, P1531
Kupcsik A., 2013, P NAT C ART INT
Matsubara T, 2006, ROBOT AUTON SYST, V54, P911, DOI 10.1016/j.robot.2006.05.012
Moody J, 1989, NEURAL COMPUT, V1, P281, DOI 10.1162/neco.1989.1.2.281
Morimoto J, 2009, AUTON ROBOT, V27, P131, DOI 10.1007/s10514-009-9133-z
Pan SJ, 2010, IEEE T KNOWL DATA EN, V22, P1345, DOI 10.1109/TKDE.2009.191
Peters J., 2007, P 24 INT C MACH LEAR, P745
Peters J., 2006, P IEEE RSJ INT C INT, P2219
Peters J, 2008, NEUROCOMPUTING, V71, P1180, DOI 10.1016/j.neucom.2007.11.026
Peters J, 2010, PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL
INTELLIGENCE (AAAI-10), P1607
Rasmussen CE, 2005, ADAPT COMPUT MACH LE, P1
Schaal S, 1998, NEURAL COMPUT, V10, P2047, DOI 10.1162/089976698300016963
Schaal S., 2009, TECH REP
Schaal S, 2010, IEEE ROBOT AUTOM MAG, V17, P20, DOI 10.1109/MRA.2010.936957
Sehnke F, 2010, NEURAL NETWORKS, V23, P551, DOI 10.1016/j.neunet.2009.12.004
Sugimoto N., 2011, P IEEE RAS INT C HUM, P255
Sugimoto N., 2013, P IEEE RAS INT C HUM, P429
Sugimoto N., 2014, P IEEE RAS INT C HUM, P554
Sugimoto N, 2012, NEURAL NETWORKS, V29-30, P8, DOI 10.1016/j.neunet.2012.01.002
Sugimoto N, 2012, NEURAL COMPUT, V24, P577, DOI 10.1162/NECO_a_00246
Sutton RS, 1998, REINFORCEMENT LEARNI
Sutton R. S., 1984, THESIS
Tangkaratt V, 2014, NEURAL NETWORKS, V57, P128, DOI 10.1016/j.neunet.2014.06.006
Weaver L., 2001, P 17 C UNC ART INT, P538
WILLIAMS RJ, 1992, MACH LEARN, V8, P229, DOI 10.1007/BF00992696
WILLIAMS RJ, 1988, NUCCS883
Zhao TT, 2013, NEURAL COMPUT, V25, P1512, DOI 10.1162/NECO_a_00452
Zhao TT, 2012, NEURAL NETWORKS, V26, P118, DOI 10.1016/j.neunet.2011.09.005
NR 36
TC 1
Z9 1
U1 1
U2 1
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1070-9932
EI 1558-223X
J9 IEEE ROBOT AUTOM MAG
JI IEEE Robot. Autom. Mag.
PD MAR
PY 2016
VL 23
IS 1
BP 96
EP 105
DI 10.1109/MRA.2015.2511681
PG 10
WC Automation & Control Systems; Robotics
SC Automation & Control Systems; Robotics
GA DX3TK
UT WOS:000384296500015
DA 2018-01-22
ER

PT J
AU Moro, FL
Gienger, M
Goswami, A
Khatib, O
Yoshida, E
AF Moro, Federico L.
Gienger, Michael
Goswami, Ambarish
Khatib, Oussama
Yoshida, Eiichi
TI Guest Editorial of the Special Issue on Whole-Body Control for Robots in
the Real World
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Editorial Material
DE Whole-body control; floating-base robot dynamics; real world scenarios
AB Research in whole-body control (WBC) aims to provide robots with the capabilities
necessary to move and perform in real-world scenarios. Until recent years, hardware
limitations relegated WBC to almost purely theoretical research. Recently, a growing
number of experimental platforms, in particular torque-controlled humanoids, have become
available. This new opportunity has triggered the deployment of the field's theoretical
results on real robots, backed by a number of new research projects and initiatives
addressing issues in this domain, including the DARPA Robotics Challenge (DRC). The goal
of this special issue is to give a clear picture of the state of the art in WBC and to
help identify the steps that still need to be taken to move humanoid robots out of
research laboratories and into real-world applications.
C1 [Moro, Federico L.] CNR, ITIA, Inst Ind Technol & Automat, Via Bassini 15, I-
20133 Milan, Italy.
[Moro, Federico L.] Ist Italiano Tecnol, Dept Adv Robot, Via Morego 30, I-16163
Genoa, Italy.
[Gienger, Michael] EU, Honda Res Inst, Carl Legien Str 30, D-63073 Offenbach,
Germany.
[Goswami, Ambarish] Honda Res Inst US, 375 Ravendale Dr,Suite B, Mountain View,
CA 94043 USA.
[Khatib, Oussama] Stanford Univ, Dept Comp Sci, 353 Serra Mall, Stanford, CA
94305 USA.
[Yoshida, Eiichi] Natl Inst Adv Ind Sci & Technol, CNRS AIST JRL Joint Robot
Lab, Tsukuba Cent 2,1-1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
RP Moro, FL (reprint author), CNR, ITIA, Inst Ind Technol & Automat, Via Bassini
15, I-20133 Milan, Italy.; Moro, FL (reprint author), Ist Italiano Tecnol, Dept Adv
Robot, Via Morego 30, I-16163 Genoa, Italy.
EM federico.moro@itia.cnr.it
OI MORO, FEDERICO LORENZO/0000-0003-1841-7130
CR Fok CL, 2016, INT J HUM ROBOT, V13, DOI 10.1142/S0219843615500401
Hopkins MA, 2016, INT J HUM ROBOT, V13, DOI 10.1142/S0219843615500346
Koolen T, 2016, INT J HUM ROBOT, V13, DOI 10.1142/S0219843616500079
Moro FL, 2016, INT J HUM ROBOT, V13, DOI 10.1142/S0219843615500462
Prete A. D., 2016, INT J HUM ROBOT, V13, DOI DOI 10.1142/S0219843615500449
Shiguematsu Y. M., 2016, INT J HUM ROBOT, V13, DOI 10.1142/S021984361650002X
Wang YQ, 2016, INT J HUM ROBOT, V13, DOI 10.1142/S0219843615500474
Wensing PM, 2016, INT J HUM ROBOT, V13, DOI 10.1142/S0219843615500395
Yi SJ, 2016, INT J HUM ROBOT, V13, DOI 10.1142/S0219843616500110
NR 9
TC 0
Z9 0
U1 0
U2 2
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2016
VL 13
IS 1
SI SI
AR 1602001
DI 10.1142/S0219843616020011
PG 4
WC Robotics
SC Robotics
GA DJ2CR
UT WOS:000374011800007
OA gold
DA 2018-01-22
ER

PT J
AU Shiguematsu, YM
Kryczka, P
Hashimoto, K
Lim, HO
Takanishi, A
AF Shiguematsu, Yukitoshi Minami
Kryczka, Przemyslaw
Hashimoto, Kenji
Lim, Hun-Ok
Takanishi, Atsuo
TI Heel-Contact Toe-Off Walking Pattern Generator Based on the Linear
Inverted Pendulum
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Heel-contact toe-off motion; LIP model; gait pattern generation;
WABIAN-2R
ID BIPED ROBOT
AB We propose a novel heel-contact toe-off walking pattern generator for a biped
humanoid robot. It is divided into two stages. In the simple-model stage, a heel-contact
toe-off walking model based on the Linear Inverted Pendulum (LIP) and the so-called
functional rockers of the foot (heel, ankle, and forefoot rockers) takes step lengths as
inputs and calculates the step positions and timings and the Center of Mass (CoM)
trajectory. In the multibody-dynamics stage, the final pattern to be implemented on the
humanoid robot is obtained from the output of the first stage. The final pattern
comprises the Zero Moment Point (ZMP) reference, the joint angle references, and the
end-effector references. The generated patterns were implemented on our robotic
platform, WABIAN-2R, for evaluation.
C1 [Shiguematsu, Yukitoshi Minami] Waseda Univ, Grad Sch Sci & Engn, Shinjuku Ku,
41-304,17 Kikui Cho, Tokyo 1620044, Japan.
[Kryczka, Przemyslaw] Inst Italiano Technol, Dept Adv Robot, Via Morego 30, I-
16163 Genoa, Italy.
[Hashimoto, Kenji] Waseda Univ, Waseda Inst Adv Study, Tokyo, Japan.
[Lim, Hun-Ok] Kanagawa Univ, Fac Engn, Tokyo, Japan.
[Lim, Hun-Ok; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Tokyo, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Tokyo, Japan.
RP Shiguematsu, YM (reprint author), Waseda Univ, Grad Sch Sci & Engn, Shinjuku Ku,
41-304,17 Kikui Cho, Tokyo 1620044, Japan.
EM contact@takanishi.mech.waseda.ac.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766
FU JSPS KAKENHI [24360099]; Japanese MEXT; JSPS Institutional Program for
Young Researcher Overseas Visits; SolidWorks Japan K. K.; DYDEN
Corporation; Cybernet Systems Co., Ltd.
FX This study was conducted as part of the Research Institute for Science
and Engineering, Waseda University, and as part of the humanoid project
at the Humanoid Robotics Institute, Waseda University. It was supported
in part by JSPS KAKENHI (Grant Number: 24360099), Grants for Excellent
Graduate Schools from Japanese MEXT and JSPS Institutional Program for
Young Researcher Overseas Visits. It was also partially supported by
SolidWorks Japan K. K., DYDEN Corporation and Cybernet Systems Co., Ltd.
whom we thank for their financial and technical support.
CR Arbulu M, 2007, IEEE-RAS INT C HUMAN, P563, DOI 10.1109/ICHR.2007.4813927
Buschmann T, 2007, IEEE-RAS INT C HUMAN, P1, DOI 10.1109/ICHR.2007.4813841
Goddard R. E., 1992, IEEE T SYST MAN CYBE, V22
Harada K., 2004, 4 IEEE RAS INT C HUM, V2, P640
Huang Q, 2001, IEEE T ROBOTIC AUTOM, V17, P280, DOI 10.1109/70.938385
KAJITA S, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-3, P1405, DOI 10.1109/ROBOT.1991.131811
Kajita S., 2001, IEEE INT C ROB AUT I, V3
Kryczka P., 2013, IEEE INT C ROB BIOMI
Li Z., 2010, IEEE INT C ROB BIOMI
Minami Y., 2014, 5 IEEE RAS EMBS INT, P221
Miura K, 2011, IEEE INT C INT ROBOT, P4428, DOI 10.1109/IROS.2011.6048511
Morisawa M., 2006, 6 IEEE RAS INT C HUM, P581
Nishiwaki K, 2012, IEEE INT C INT ROBOT, P3432, DOI 10.1109/IROS.2012.6386056
OGURA Y, 2006, IEEE RSJ INT C INT R, P3976
Park JH, 2011, J MECH SCI TECHNOL, V25, P3231, DOI 10.1007/s12206-011-0918-6
Perry J, 1992, GAIT ANAL NORMAL PAT
Takanishi A., 1990, INT ROB SYST 90 NEW, V1, P323
Ugurlu B, 2011, IEEE INT C MECH ICM, P833
NR 18
TC 0
Z9 0
U1 1
U2 6
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2016
VL 13
IS 1
SI SI
AR 1650002
DI 10.1142/S021984361650002X
PG 25
WC Robotics
SC Robotics
GA DJ2CR
UT WOS:000374011800008
OA gold
DA 2018-01-22
ER

PT J
AU Vaillant, J
Kheddar, A
Audren, H
Keith, F
Brossette, S
Escande, A
Bouyarmane, K
Kaneko, K
Morisawa, M
Gergondet, P
Yoshida, E
Kajita, S
Kanehiro, F
AF Vaillant, Joris
Kheddar, Abderrahmane
Audren, Herve
Keith, Francois
Brossette, Stanislas
Escande, Adrien
Bouyarmane, Karim
Kaneko, Kenji
Morisawa, Mitsuharu
Gergondet, Pierre
Yoshida, Eiichi
Kajita, Shuuji
Kanehiro, Fumio
TI Multi-contact vertical ladder climbing with an HRP-2 humanoid
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Humanoid robots; Multi-contact motion planning and control; Field
humanoid robots; Disaster humanoid robots
ID LEGGED ROBOTS; IMPLEMENTATION; OPTIMIZATION; EQUILIBRIUM; GENERATION;
FORCES
AB We describe the research and integration methods we developed to make the HRP-2
humanoid robot climb vertical industrial-norm ladders. We use our multi-contact planner
and a multi-objective closed-loop controller formulated as a quadratic program (QP).
First, a set of contacts for climbing the ladder is planned off-line, either
automatically or by the user. These contacts are provided as input to a finite state
machine, which builds supplementary tasks, accounting for geometric uncertainties and
specific grasping procedures, to be added to the QP controller. The QP controller in
turn provides instantaneous desired states, in terms of joint accelerations and contact
forces, to be tracked by the embedded low-level motor controllers. Our trials revealed
that hardware changes are necessary and that parts of the software must be made more
robust. Nevertheless, we confirmed that HRP-2 has the kinematic and power capabilities
to climb real industrial ladders, such as those found in nuclear power plants,
large-scale manufacturing facilities (e.g., aircraft and shipbuilding), and construction
sites.
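The QP controller mentioned above weighs several motion tasks against each other; the
following sketch shows only the unconstrained weighted least-squares core of such a
formulation (a real whole-body QP adds contact-force variables, friction cones, and
joint limits as hard constraints and therefore needs a proper QP solver).

    # Minimal sketch: weighted multi-task least squares, the core of a QP whole-body controller.
    import numpy as np

    def solve_tasks(jacobians, task_accels, weights, n_dof, reg=1e-6):
        # Minimize sum_i w_i * ||J_i qdd - a_i||^2 + reg * ||qdd||^2 over joint accelerations qdd.
        rows = [np.sqrt(w) * J for J, w in zip(jacobians, weights)]
        rhs = [np.sqrt(w) * a for a, w in zip(task_accels, weights)]
        A = np.vstack(rows + [np.sqrt(reg) * np.eye(n_dof)])
        b = np.concatenate(rhs + [np.zeros(n_dof)])
        qdd, *_ = np.linalg.lstsq(A, b, rcond=None)
        return qdd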
C1 [Vaillant, Joris; Kheddar, Abderrahmane; Audren, Herve; Keith, Francois;
Brossette, Stanislas; Escande, Adrien; Bouyarmane, Karim; Kaneko, Kenji; Morisawa,
Mitsuharu; Gergondet, Pierre; Yoshida, Eiichi; Kajita, Shuuji; Kanehiro, Fumio] CNRS
AIST Joint Robot Lab JRL, UMI3218, RL, Tsukuba, Ibaraki, Japan.
[Vaillant, Joris; Kheddar, Abderrahmane; Audren, Herve; Keith, Francois;
Brossette, Stanislas; Bouyarmane, Karim] CNRS UM2 LIRMM Interact Digital Human Grp,
UMR5506, Montpellier, France.
RP Kheddar, A (reprint author), CNRS AIST Joint Robot Lab JRL, UMI3218, RL,
Tsukuba, Ibaraki, Japan.; Kheddar, A (reprint author), CNRS UM2 LIRMM Interact
Digital Human Grp, UMR5506, Montpellier, France.
EM kheddar@gmail.com
RI Kajita, Shuuji/M-5010-2016; Kanehiro, Fumio/L-8660-2016; Yoshida,
Eiichi/M-3756-2016; KANEKO, Kenji/M-5360-2016; Morisawa,
Mitsuharu/M-3327-2016
OI Kajita, Shuuji/0000-0001-8188-2209; Kanehiro, Fumio/0000-0002-0277-3467;
Yoshida, Eiichi/0000-0002-3077-6964; KANEKO, Kenji/0000-0002-1888-8787;
Morisawa, Mitsuharu/0000-0003-0056-4335; Bouyarmane,
Karim/0000-0003-4284-0561
FU IS-AIST; JSPS Kakenhi [25280096]; EU FP7 KoroiBot Project
FX This work is supported partly by internal Grants from IS-AIST, the JSPS
Kakenhi B No. 25280096, and the EU FP7 KoroiBot Project www.koroibot.eu.
Part of this work was published in Vaillant et al. (2014).
CR Abe Y, 2007, SYMPOSIUM ON COMPUTER ANIMATION 2007: ACM SIGGRAPH/ EUROGRAPHICS
SYMPOSIUM PROCEEDINGS, P249
Audren H., 2014, IEEE RSJ INT C INT R
Bevly D.M., 2000, IEEE INT C ROB AUT S, P4010
Bouyarmane K., 2009, IEEE INT C ROB AUT, P1165
Bouyarmane K., 2012, IEEE RAS INT C HUM R
Bouyarmane K., 2011, IEEE RSJ INT C INT R
Bouyarmane K., 2010, IEEE RAS INT C HUM R, P8, DOI [10.1109/ICHR.2010.5686317,
DOI 10.1109/ICHR.2010.5686317]
Bouyarmane K, 2012, ADV ROBOTICS, V26, P1099, DOI 10.1080/01691864.2012.686345
Bretl T, 2006, INT J ROBOT RES, V25, P317, DOI 10.1177/0278364906063979
Bretl T, 2008, IEEE T ROBOT, V24, P794, DOI 10.1109/TRO.2008.2001360
Brossette S., 2014, IEEE RSJ INT C INT R
Brossette S, 2013, PROCEEDINGS OF THE 2013 6TH IEEE CONFERENCE ON ROBOTICS,
AUTOMATION AND MECHATRONICS (RAM), P19, DOI 10.1109/RAM.2013.6758553
Collette C, 2007, IEEE-RAS INT C HUMAN, P81, DOI 10.1109/ICHR.2007.4813852
da Silva M, 2008, COMPUT GRAPH FORUM, V27, P371, DOI 10.1111/j.1467-
8659.2008.01134.x
deLasa M., 2010, ACM T GRAPHICS SIGGR, V29, P1, DOI
[10.1016/j.neunet.2013.04.005, DOI 10.1145/1833351.1781157]
Eilering A., 2014, IEEE INT C ROB AUT
Escande A, 2014, INT J ROBOT RES, V33, P1006, DOI 10.1177/0278364914521306
Escande A, 2014, IEEE T ROBOT, V30, P666, DOI 10.1109/TRO.2013.2296332
Escande A, 2013, ROBOT AUTON SYST, V61, P428, DOI 10.1016/j.robot.2013.01.008
Escande A, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P435, DOI 10.1109/IROS.2009.5354371
Fujii S, 2008, IEEE INT CONF ROBOT, P3052
Gill P.E.E., 1986, 861 STANDF U
Hauser K, 2008, INT J ROBOT RES, V27, P1325, DOI 10.1177/0278364908098447
Herzog A., 2014, IEEE RSJ INT C INT R
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Ibanez A., 2014, IEEE INT C ROB AUT, P202
Iida H., 1989, J ROBOTICS MECHATRON, V1, P311
Kanehiro F., 2010, IEEE RSJ INT C INT R, P4069
Kanoun O, 2011, IEEE T ROBOT, V27, P785, DOI 10.1109/TRO.2011.2142450
Kuindersma S., 2014, IEEE INT C ROB AUT
Lee SH, 2012, AUTON ROBOT, V33, P399, DOI 10.1007/s10514-012-9294-z
Lengagne S, 2013, INT J ROBOT RES, V32, P1104, DOI 10.1177/0278364913478990
LIEGEOIS A, 1977, IEEE T SYST MAN CYB, V7, P868
Liu MX, 2012, IEEE T ROBOT, V28, P1309, DOI 10.1109/TRO.2012.2208829
Luo JR, 2014, IEEE INT CONF ROBOT, P2792, DOI 10.1109/ICRA.2014.6907259
Mansard N, 2009, IEEE T ROBOT, V25, P670, DOI 10.1109/TRO.2009.2020345
Mordatch I., 2010, ACM T GRAPHIC, V29, P3
Mordatch I, 2012, ACM T GRAPHIC, V31, DOI 10.1145/2185520.2185539
Murray RM, 1994, MATH INTRO ROBOTIC M
Nakai H., 2002, IEEE RSJ INT C INT R, P2025
NAKAMURA Y, 1987, INT J ROBOT RES, V6, P3, DOI 10.1177/027836498700600201
Noda S., 2014, IEEE INT C ROB AUT, P1775
Ott C., 2011, IEEE RAS INT C HUM R, P26
Posa M, 2014, INT J ROBOT RES, V33, P69, DOI 10.1177/0278364913506757
Righetti L., 2012, IEEE RAS INT C HUM R
Righetti L, 2013, INT J ROBOT RES, V32, P280, DOI 10.1177/0278364912469821
Saab L., 2012, IEEE T ROBOTICS, V29, P346
Salini J, 2011, IEEE INT CONF ROBOT, P1283, DOI 10.1109/ICRA.2011.5980202
Salini J, 2010, ADVANCES IN ROBOT KINEMATICS: MOTION IN MAN AND MACHINE, P177,
DOI 10.1007/978-90-481-9262-5_19
Schittkowski K., 1986, TECHNICAL REPORT
Sentis L, 2013, AUTON ROBOT, V35, P301, DOI 10.1007/s10514-013-9358-8
Sentis L, 2010, IEEE T ROBOT, V26, P483, DOI 10.1109/TRO.2010.2043757
Siciliano B, 1991, P IEEE INT C ADV ROB, V2, P1211, DOI DOI
10.1109/ICAR.1991.240390
Vaillant J., 2014, IEEE RAS INT C HUM R, P671
Wachter A, 2006, MATH PROGRAM, V106, P25, DOI 10.1007/s10107-004-0559-y
Wensing PM, 2013, IEEE INT CONF ROBOT, P3103, DOI 10.1109/ICRA.2013.6631008
Wieber P.B., 2002, IARP INT WORKSH HUM
Yoneda H, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P3579, DOI 10.1109/IROS.2008.4651212
Zhang Y, 2013, ADV MECH ENG, V2013, P1, DOI DOI 10.3864/J.ISSN.0578-
1752.2013.10.014
Zucker M, 2013, INT J ROBOT RES, V32, P1164, DOI 10.1177/0278364913488805
NR 60
TC 8
Z9 8
U1 0
U2 14
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD MAR
PY 2016
VL 40
IS 3
SI SI
BP 561
EP 580
DI 10.1007/s10514-016-9546-4
PG 20
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA DF3KA
UT WOS:000371241500009
DA 2018-01-22
ER

PT J
AU Omer, A
Ghorbani, R
Hashimoto, K
Lim, HO
Takanishi, A
AF Omer, Aiman
Ghorbani, Reza
Hashimoto, Kenji
Lim, Hun-Ok
Takanishi, Atsuo
TI A Novel Design for Adjustable Stiffness Artificial Tendon for the Ankle
Joint of a Bipedal Robot: Modeling & Simulation
SO MACHINES
LA English
DT Article
DE bipedal walking; humanoid design; adjustable stiffness actuator; passive
dynamic walking
ID PASSIVE-DYNAMIC WALKING
AB Bipedal humanoid robots are expected to play a major role in the future. Bipedal
locomotion requires high energy because of the high torque that must be provided by the
leg joints. The WABIAN-2R, for example, uses harmonic gears in its joints to increase
the torque; however, such a mechanism increases the weight of the legs and therefore the
energy consumption. We therefore introduce the idea of a mechanism with adjustable
stiffness connected to a leg joint. The proposed mechanism can provide both passive and
active motion and is attached to the ankle pitch joint as an artificial tendon. Its
dynamic performance is evaluated analytically using computer simulations.
C1 [Omer, Aiman] Waseda Univ, Grad Sch Creat Sci & Engn, Tokyo 1620044, Japan.
[Ghorbani, Reza] Univ Hawaii Manoa, Dept Mech Engn, Honolulu, HI 96816 USA.
[Hashimoto, Kenji] Waseda Univ, Waseda Inst Adv Study, Tokyo 1620044, Japan.
[Lim, Hun-Ok] Univ Kanagawa, Dept Mech Engn, Yokohama, Kanagawa 2218686, Japan.
[Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Dept Mech Engn, Tokyo
1698555, Japan.
RP Omer, A (reprint author), Waseda Univ, Grad Sch Creat Sci & Engn, Tokyo 1620044,
Japan.
EM aimano@aoni.waseda.jp; rezag@hawaii.edu;
k-hashimoto@takanishi.mech.waseda.ac.jp; holim@kanagawa-u.ac.jp;
takanisi@waseda.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766
CR Biewener AA, 2004, J APPL PHYSIOL, V97, P2266, DOI
10.1152/japplphysiol.00003.2004
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Cho BK, 2011, ADV ROBOTICS, V25, P1209, DOI 10.1163/016918611X574687
Coleman MJ, 1998, PHYS REV LETT, V80, P3658, DOI 10.1103/PhysRevLett.80.3658
Collins SH, 2001, INT J ROBOT RES, V20, P607, DOI 10.1177/02783640122067561
Dickinson MH, 2000, SCIENCE, V288, P100, DOI 10.1126/science.288.5463.100
Fukuda T., 2012, MULTILOCOMOTION ROBO
Garcia MS, 1999, THESIS
Ishikawa M, 2005, J APPL PHYSIOL, V99, P603, DOI 10.1152/japplphysiol.00189.2005
Kajita S., 2008, HDB ROBOTICS, P361
Kajita S., 2014, INTRO HUMANOID ROBOT
Koganezawa K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2461, DOI 10.1109/IRDS.2002.1041638
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
Omer A. M. M., 2009, P IEEE ASME INT C AU, P1600
Omer A., 2012, P 2 IFTOMM AS C MECH
Omer A. M. M., 2005, P HUM 2005 C TSUK JA
Omer A. M. M., 2009, P 12 INT C CLIMB WAL, P653
Omer A. M. M., 2008, P 8 INT C HUM ROB DA, P541
ROSENBAUM DA, 2009, HUMAN MOTOR CONTROL
Rouse EJ, 2014, INT J ROBOT RES, V33, P1611, DOI 10.1177/0278364914545673
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
Sugimoto N, 2012, NEURAL NETWORKS, V29-30, P8, DOI 10.1016/j.neunet.2012.01.002
Vanderborght B, 2010, SPRINGER TRAC ADV RO, V63, P1, DOI 10.1007/978-3-642-
13417-3
Verrelst B, 2005, ROBOTICA, V23, P149, DOI 10.1017/S0263574704000669
Winter DA, 2009, BIOMECHANICS MOTOR C
Wisse M, 2006, ADAPTIVE MOTION OF ANIMALS AND MACHINES, P143, DOI 10.1007/4-431-
31381-8_13
Wisse M., 2004, THESIS
NR 27
TC 1
Z9 1
U1 4
U2 4
PU MDPI AG
PI BASEL
PA ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND
SN 2075-1702
J9 MACHINES
JI Machines
PD MAR
PY 2016
VL 4
IS 1
AR 1
DI 10.3390/machines4010001
PG 22
WC Engineering, Mechanical
SC Engineering
GA EV1RH
UT WOS:000401523400001
OA gold
DA 2018-01-22
ER

PT J
AU Otani, T
Hashimoto, K
Isomichi, T
Sakaguchi, M
Kawakami, Y
Lim, HO
Takanishi, A
AF Otani, Takuya
Hashimoto, Kenji
Isomichi, Takaya
Sakaguchi, Masanori
Kawakami, Yasuo
Lim, Hun-Ok
Takanishi, Atsuo
TI Joint Mechanism That Mimics Elastic Characteristics in Human Running
SO MACHINES
LA English
DT Article
DE humanoid; running; joint stiffness; leaf spring
ID SPRING-MASS MODEL; STIFFNESS; ROBOT; RESONANCE; WALKING; ENERGY; ANKLE;
KNEE
AB Analysis of human running has revealed that the motion of the human leg can be
modeled by a compression spring, because the joints of the leg behave like torsion
springs during the stance phase. In this paper, we describe the development of a joint
mechanism that mimics the elastic characteristics of the stance-leg joints. The knee is
equipped with a mechanism comprising two laminated leaf springs made of carbon
fiber-reinforced plastic for adjusting the joint stiffness, and a worm gear for active
movement. With this mechanism, the joint stiffness mimics that of a human knee and can
be adjusted by varying the effective length of one of the laminated leaf springs. The
proposed equation for calculating the joint stiffness accounts for the offset between
the fixation point of the leaf spring and the rotational center of the joint. We
evaluated the performance of the laminated leaf spring and the validity of the proposed
stiffness equation, and we were able to make a bipedal robot run on one leg, using
pelvic oscillation to store energy through the resonance associated with leg elasticity.
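For intuition about how shortening the effective leaf-spring length raises the joint
stiffness, the sketch below uses a generic cantilever-beam approximation; it is not the
equation proposed in the paper, which additionally accounts for the offset between the
spring fixation point and the joint's rotational center.

    # Minimal sketch: generic cantilever leaf-spring approximation of joint stiffness
    # (not the paper's proposed equation).
    def leaf_spring_joint_stiffness(E, b, t, L, r):
        # E: Young's modulus [Pa]; b, t: blade width and thickness [m];
        # L: effective free length [m]; r: moment arm from spring tip to joint axis [m].
        I = b * t**3 / 12.0              # second moment of area of the blade cross-section
        k_linear = 3.0 * E * I / L**3    # end-load stiffness of a cantilever beam
        return k_linear * r**2           # small-angle torsional stiffness about the joint [Nm/rad]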
C1 [Otani, Takuya; Isomichi, Takaya] Waseda Univ, Grad Sch Sci & Engn, Shinjuku Ku,
41-304,17 Kikui Cho, Tokyo 1620044, Japan.
[Hashimoto, Kenji] Waseda Univ, Waseda Inst Adv Study, Shinjuku Ku, 41-304,17
Kikui Cho, Tokyo 1620044, Japan.
[Hashimoto, Kenji; Lim, Hun-Ok; Takanishi, Atsuo] Waseda Univ, Humanoid Robot
Inst HRI, Shinjuku Ku, 2-2 Wakamatsu Cho, Tokyo 1628480, Japan.
[Sakaguchi, Masanori] Univ Calgary, Fac Kinesiol, 2500 Univ Dr NW, Calgary, AB
T2N 1N4, Canada.
[Sakaguchi, Masanori; Kawakami, Yasuo] Waseda Univ, Fac Sport Sci, 2-579-15
Mikajima, Tokorozawa, Tokyo 3591192, Japan.
[Lim, Hun-Ok] Kanagawa Univ, Fac Engn, Kanagawa Ku, 3-27-1 Rokkakubashi,
Yokohama, Kanagawa 2218686, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Shinjuku Ku, 2-2
Wakamatsu Cho, Tokyo 1628480, Japan.
RP Otani, T (reprint author), Waseda Univ, Grad Sch Sci & Engn, Shinjuku Ku, 41-
304,17 Kikui Cho, Tokyo 1620044, Japan.
EM takaya_isomichi@akane.waseda.jp;
k-hashimoto@takanishi.mech.waseda.ac.jp; takanisi@waseda.jp;
msakaguc@ucalgary.ca; ykawa@waseda.jp; holim@kanagawa-u.ac.jp;
t-otani@takanishi.mech.waseda.ac.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766
FU Research Institute for Science and Engineering, Waseda University;
Institute of Advanced Active Aging Research, Waseda University; Humanoid
Robotics Institute, Waseda University; MEXT/JSPS KAKENHI [25220005,
25709019]; Mizuho Foundation for the Promotion of Sciences; SolidWorks
Japan K.K; DYDEN Corporation; Cybernet Systems Co., Ltd.; TohoTenax Co.,
Ltd.
FX This study was conducted with the support of the Research Institute for
Science and Engineering, Waseda University, the Institute of Advanced
Active Aging Research, Waseda University, and as part of the humanoid
project at the Humanoid Robotics Institute, Waseda University. It was
also supported in part by the MEXT/JSPS KAKENHI Grant Nos. 25220005 and
25709019, the Mizuho Foundation for the Promotion of Sciences,
SolidWorks Japan K.K., DYDEN Corporation, Cybernet Systems Co., Ltd.,
and TohoTenax Co., Ltd. We thank all of them for the financial and
technical support provided.
CR Ae M, 1992, BIOMECHANISMS, V11, P23
BLICKHAN R, 1989, J BIOMECH, V22, P1217, DOI 10.1016/0021-9290(89)90224-8
CHAPMAN AE, 1983, J BIOMECH, V16, P69, DOI 10.1016/0021-9290(83)90047-7
Cho B. K., 2009, P IEEE RAS INT C HUM, P385
Dalleau G, 1998, EUR J APPL PHYSIOL O, V77, P257, DOI 10.1007/s004210050330
Endo T., 2008, RES PHYS ED, V53, P477
Ferber R, 2003, CLIN BIOMECH, V18, P350, DOI 10.1016/S0268-0033(03)00025-1
Grimmer M., 2015, THESIS
Grizzle JW, 2009, P AMER CONTR CONF, P2030, DOI 10.1109/ACC.2009.5160550
Gunther M, 2002, J BIOMECH, V35, P1459, DOI 10.1016/S0021-9290(02)00183-5
Hashimoto K, 2014, SCI WORLD J, DOI 10.1155/2014/259570
Hashimoto K, 2013, ADV ROBOTICS, V27, P541, DOI 10.1080/01691864.2013.777015
HINRICHS RN, 1987, INT J SPORT BIOMECH, V3, P242, DOI 10.1123/ijsb.3.3.242
Hitt J., 2010, J MED DEVICES, P4
Kouchi M., 2005, HUMAN DIMENSION DATA
Kuitunen S, 2002, MED SCI SPORT EXER, V34, P166, DOI 10.1097/00005768-200201000-
00025
MCMAHON TA, 1990, J BIOMECH, V23, P65, DOI 10.1016/0021-9290(90)90042-2
Merritt H. E., 1982, P I MECH ENG, V129, P127
Morita T, 1996, IEEE INT CONF ROBOT, P2902, DOI 10.1109/ROBOT.1996.509153
Moro FL, 2014, AUTON ROBOT, V36, P331, DOI 10.1007/s10514-013-9357-9
Nagasaki T.;, 2004, P IEEE RSJ INT C INT, P136
Niiyama R, 2012, ADV ROBOTICS, V26, P383, DOI 10.1163/156855311X614635
Novacheck TF, 1998, GAIT POSTURE, V7, P77, DOI 10.1016/S0966-6362(97)00038-6
Ogura Y, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-12, P3976, DOI 10.1109/IROS.2006.281834
Otani T., 2015, P IEEE RSJ INT C INT
Otani T., 2015, FRONT ROBOT
Renjewski D, 2009, 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
BIOMIMETICS (ROBIO 2009), VOLS 1-4, P1894, DOI 10.1109/ROBIO.2009.5420535
Schache AG, 2011, MED SCI SPORT EXER, V43, P1260, DOI
10.1249/MSS.0b013e3182084929
Tajima R., 2009, P IEEE INT C ROB AUT, P1571
Takenaka T., 2011, JRSJ, V29, P93
Tamada T., 2014, P IEEE INT C HUM ROB, P140
Tsagarakis N. G., 2009, P IEEE INT C ROB AUT, P4356
Ugurlu B, 2012, IEEE INT CONF ROBOT, P1436, DOI 10.1109/ICRA.2012.6224909
Vu H. Q., 2015, IEEE ASME T MECHATRO
Waycaster G., 2011, J MED DEVICES, P5
World Medical Association, 1964, DECL HELS
Yuan K, 2011, P 18 IFAC WORLD C, V44, P2865
NR 37
TC 0
Z9 0
U1 1
U2 1
PU MDPI AG
PI BASEL
PA ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND
SN 2075-1702
J9 MACHINES
JI Machines
PD MAR
PY 2016
VL 4
IS 1
AR 5
DI 10.3390/machines4010005
PG 15
WC Engineering, Mechanical
SC Engineering
GA EV1RH
UT WOS:000401523400005
OA gold
DA 2018-01-22
ER

PT J
AU Izawa, K
Hyon, SH
AF Izawa, Kensuke
Hyon, Sang-Ho
TI Prototyping Force-Controlled 3-DOF Hydraulic Arms for Humanoid Robots
SO JOURNAL OF ROBOTICS AND MECHATRONICS
LA English
DT Article
DE hydraulic robot; dual arm; force/torque control; singular perturbation
AB This paper reports on a hydraulic dual arm robot developed as a rapid prototype
for our hydraulic humanoid robot. The lightweight arms (4 kg each) have three
joints driven by hydraulic linear servo actuators that can achieve higher torque
and speed than human arms. A double four-bar linkage provides a wide range of
motion (210 degrees) to the shoulder joint. Each joint has torque controllability
that is fully utilized for compliant whole-body motion control tasks. Based on
singular perturbation analysis, we discuss how damping on the joints is actively
modulated by hydraulic force feedback control, which is then utilized in our
passivity-based task-space force control scheme. The effectiveness of the proposed
system is evaluated experimentally through zero-force-tracking gravity compensation with
a 10 kg payload and through object manipulation tasks.
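The task-space force control mentioned above has, in its simplest form, the structure
tau = g(q) + J^T f_des; the sketch below shows only that generic structure (the paper's
damping modulation through hydraulic force feedback is not modeled).

    # Minimal sketch: task-space force control with gravity compensation (generic structure only).
    import numpy as np

    def task_space_force_torques(jacobian, f_des, gravity_torques):
        # jacobian: (6, n) end-effector Jacobian; f_des: (6,) desired wrench; gravity_torques: g(q).
        return gravity_torques + jacobian.T @ f_des

    # Zero-force tracking corresponds to f_des = 0: the arm compensates only its own gravity
    # and yields compliantly to external forces.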
C1 [Izawa, Kensuke; Hyon, Sang-Ho] Ritsumeikan Univ, Coll Sci & Engn, 1-1-1
Nojihigashi, Kusatsu, Shiga 5258577, Japan.
RP Izawa, K (reprint author), Ritsumeikan Univ, Coll Sci & Engn, 1-1-1 Nojihigashi,
Kusatsu, Shiga 5258577, Japan.
EM gen@fc.ritsumei.ac.jp
FU [26280098]
FX This study was supported by the Grants-in-aid for Scientific Research -
Basic Research (B) 26280098. We would also like to express our gratitude
to Ibaraki Industries Co., Ltd. and Toyo Instruments Co., Ltd. for their
cooperation.
CR Albu-Schaffer A, 2007, INT J ROBOT RES, V26, P23, DOI 10.1177/0278364907073776
Arimoto S, 2005, ADV ROBOTICS, V19, P401, DOI 10.1163/1568553053662555
Baba T., 2000, PHYS ED STUDIES, V45, P186
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Fuchs M., 2009, IEEE INT C ROB AUT I, P4131, DOI DOI 10.1109/ROBOT.2009.5152464
Hyon S., 2008, IEEE RAS ICHR, P493
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Hyon SH, 2013, IEEE INT C INT ROBOT, P4655, DOI 10.1109/IROS.2013.6697026
Hyon SH, 2009, IEEE-ASME T MECH, V14, P677, DOI 10.1109/TMECH.2009.2033117
[Anonymous], 2003, DLR LIGHT WEIGHT ROO
Ito Y, 2014, IEEE INT CONF ROBOT, P3433, DOI 10.1109/ICRA.2014.6907353
Khalil H. K., 1996, NONLINEAR SYSTEMS
Maeda M., 2010, RESERCH Q ATHLETICS, V80, P13
Merrit H. E., 1967, HYDRAULIC CONTROL SY
National Institute of Technology and Evaluation of Japan, DAT BOOK HUM PHYS CH
Ott C, 2006, IEEE-RAS INT C HUMAN, P276, DOI 10.1109/ICHR.2006.321397
Rethink robotics, 2014, HARDW SPEC
Suewaka D., 2013, RSJ 2013
Takahashi H., 2014, TOSHIBA REV, V69
[Anonymous], 2012, NEW GEN ROB MOT SDA
NR 20
TC 1
Z9 1
U1 0
U2 3
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 0915-3942
EI 1883-8049
J9 J ROBOT MECHATRON
JI J. Robot. Mechatron.
PD FEB
PY 2016
VL 28
IS 1
BP 95
EP 103
DI 10.20965/jrm.2016.p0095
PG 9
WC Robotics
SC Robotics
GA EJ8DF
UT WOS:000393454200011
DA 2018-01-22
ER

PT J
AU Yamazaki, K
Oya, R
Nagahama, K
Okada, K
Inaba, M
AF Yamazaki, Kimitoshi
Oya, Ryosuke
Nagahama, Kotaro
Okada, Kei
Inaba, Masayuki
TI Bottom Dressing by a Dual-arm Robot Using a Clothing State Estimation
Based on Dynamic Shape Changes
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Dynamic State Estimation; Clothing Manipulation; Autonomous Robot
AB This paper describes an autonomous robot's method of dressing a subject in
clothing. Our target task is to dress a person in the sitting pose. We especially
focus on the action whereby a robot automatically pulls a pair of trousers up the
subject's legs, an action frequently needed in dressing assistance. To avoid
injuring the subject's legs, the robot should be able to recognize the state of the
manipulated clothing. Therefore, while handling the clothing, the robot is supplied
with both visual and tactile sensory information. A dressing failure is detected by
the visual sensing of the behaviour of optical flows extracted from the clothing's
movements. The effectiveness of the proposed approach is implemented and validated
in a life-sized humanoid robot.
C1 [Yamazaki, Kimitoshi] Shinshu Univ, Nagano, Japan.
[Oya, Ryosuke; Nagahama, Kotaro; Okada, Kei; Inaba, Masayuki] Univ Tokyo, Tokyo,
Japan.
RP Yamazaki, K (reprint author), Shinshu Univ, Nagano, Japan.
EM kyamazaki@shinshu-u.ac.jp
FU JST PRESTO programme; JSPS KAKENHI [26700024]
FX This work was partly supported by the JST PRESTO programme and JSPS
KAKENHI Grant Number 26700024.
CR Boykov Y, 2001, IEEE T PATTERN ANAL, V23, P1222, DOI 10.1109/34.969114
Comaniciu D., 2000, P IEEE C COMP VIS PA, V2, P142, DOI DOI
10.1109/CVPR.2000.854761
Dudgeon BJ, 2008, ARCH PHYS MED REHAB, V89, P1256, DOI
10.1016/j.apmr.2007.11.038
Farneback G, 2003, LECT NOTES COMPUT SC, V2749, P363
FISCHLER MA, 1981, COMMUN ACM, V24, P381, DOI 10.1145/358669.358692
Hamajima K., 2000, P INT C ROB SYST, P1237
Kita Y., 2004, P INT C PATT REC, V4, P3889
Kita Y, 2010, IEEE INT C INT ROBOT, P2710, DOI 10.1109/IROS.2010.5651222
Kohli P., 2005, P IEEE INT C COMP VI, V2, P922
Kanade T., 1981, P INT JOINT C ART IN, P674
Maitin-Shepard J, 2010, IEEE INT CONF ROBOT, P2308, DOI
10.1109/ROBOT.2010.5509439
Matsubara T, 2013, ADV ROBOTICS, V27, P513, DOI 10.1080/01691864.2013.777012
Miller S., 2011, P INT C ROB AUT MAY, P4861
Okada K, 2004, IEEE INT CONF ROBOT, P3207, DOI 10.1109/ROBOT.2004.1308748
Ono E, 1990, P 1990 JAP US S FLEX, P1363
Osawa F., 2007, J ADV COMPUTATIONAL, V11, P457
Shi J., 1994, P IEEE C COMP VIS PA, P593, DOI DOI 10.1109/CVPR.1994.323794
Wahl E, 2003, FOURTH INTERNATIONAL CONFERENCE ON 3-D DIGITAL IMAGING AND
MODELING, PROCEEDINGS, P474, DOI 10.1109/IM.2003.1240284
Willimon B, 2011, IEEE INT CONF ROBOT, P1862
NR 19
TC 1
Z9 1
U1 1
U2 13
PU SAGE PUBLICATIONS INC
PI THOUSAND OAKS
PA 2455 TELLER RD, THOUSAND OAKS, CA 91320 USA
SN 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD JAN 18
PY 2016
VL 13
AR 5
DI 10.5772/61930
PG 13
WC Robotics
SC Robotics
GA DB6NA
UT WOS:000368630500001
OA gold
DA 2018-01-22
ER
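
Illustrative sketch (not part of the record above): the Yamazaki et al. abstract describes detecting a dressing failure from the behaviour of optical flows extracted from the clothing's movements. Below is a minimal Python sketch of that general idea, assuming OpenCV and NumPy; the Farneback parameters, the clothing mask, the expected pull direction, and the progress threshold are illustrative assumptions, not values taken from the paper.

    # Minimal sketch: flag a possible dressing failure when the dense optical
    # flow over the clothing region stops following the expected pulling
    # direction. All thresholds below are illustrative placeholders.
    import cv2
    import numpy as np

    def mean_flow(prev_gray, next_gray, mask=None):
        # Dense Farneback optical flow between two consecutive grayscale frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        vectors = flow[mask > 0] if mask is not None else flow.reshape(-1, 2)
        return vectors.mean(axis=0)

    def dressing_failure(prev_gray, next_gray, cloth_mask,
                         expected_dir=np.array([0.0, -1.0]),  # image "up"
                         min_progress=0.2):                   # pixels per frame
        # The trousers should move roughly along expected_dir while being
        # pulled up; if the projected motion drops below min_progress,
        # report a possible failure.
        motion = mean_flow(prev_gray, next_gray, cloth_mask)
        return float(np.dot(motion, expected_dir)) < min_progress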

PT J
AU Wu, MH
Ogawa, S
Konno, A
AF Wu, Meng-Hung
Ogawa, Shuhei
Konno, Atsushi
TI Symmetry position/force hybrid control for cooperative object
transportation using multiple humanoid robots
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robots; force control; symmetric control; cooperative movement
AB A symmetry position/force hybrid control framework for cooperative object
transportation tasks with multiple humanoid robots is proposed in this paper. In a
leader-follower type cooperation, follower robots plan their biped gaits based on
the forces generated at their hands after a leader robot moves. Therefore, if the
leader robot moves fast (rapidly pulls or pushes the carried object), some of the
follower humanoid robots may lose their balance and fall down. The symmetry type
cooperation discussed in this paper solves this problem because it enables all
humanoid robots to move synchronously. The proposed framework is verified by
dynamic simulations.
C1 [Wu, Meng-Hung; Ogawa, Shuhei; Konno, Atsushi] Hokkaido Univ, Grad Sch Informat
Sci & Technol, Div Syst Sci & Informat, Kita Ku, Kita 14,Nishi 9, Sapporo, Hokkaido
0600814, Japan.
RP Konno, A (reprint author), Hokkaido Univ, Grad Sch Informat Sci & Technol, Div
Syst Sci & Informat, Kita Ku, Kita 14,Nishi 9, Sapporo, Hokkaido 0600814, Japan.
EM konno@ssi.ist.hokudai.ac.jp
FU JSPS [26540130]
FX This work was supported by the JSPS Grant-in-Aid for Challenging
Exploratory Research [No. 26540130].
CR Agravante DJ, 2014, IEEE INT C ROB AUT, P607
Babazadeh A, 2004, 2004 IEEE C ROB AUT, V1, P312
Caccavale F, 2008, SPRINGER HDB ROBOTIC, P701
Erhart S, 2013, IEEE INT C INT ROBOT, P315, DOI 10.1109/IROS.2013.6696370
Hayati S, 1986, IEEE T ROBOTIC AUTOM, P82
Heck D, 2013, 2013 EUROPEAN CONTROL CONFERENCE (ECC), P2299
Hirata Y, 2004, PROCEEDINGS OF THE 2004 INTERNATIONAL CONFERENCE ON INTELLIGENT
MECHATRONICS AND AUTOMATION, P362
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Khatib O, 1996, INT S ROB RES LOND E, P333
KUME Y, 2002, IROS2002, P2758
LI ZJ, 2014, IEEE INT C INF AUT H, P665
Munawar K, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P1213, DOI 10.1109/ROBOT.1999.772527
Nakamura Y, 1987, IEEE T ROBOTIC AUTOM, P991
Nakano E., 1974, P 4 INT S IND ROB, P251
Pagilla P, 1994, AM CONTR C, P195
Pratt G, 2013, IEEE ROBOT AUTOM MAG, V20, P10, DOI 10.1109/MRA.2013.2255424
Prattichizzo D., 2008, SPRINGER HDB ROBOTIC, P671
Rastegari R, 2006, IEEE INT C CONTR APP, P2872
Rosales C, 2013, IEEE T ROBOT, V29, P746, DOI 10.1109/TRO.2013.2244785
SCHNEIDER SA, 1992, IEEE T ROBOTIC AUTOM, V8, P383, DOI 10.1109/70.143355
Szewczyk J, 2002, J ROBOTIC SYST, V19, P283, DOI 10.1002/rob.10041
Toni M, 2013, IEEE INT S IND EL, P1
Uchiyama M., 1993, Advanced Robotics, V7, P361
Vukobratovic M, 2001, IEEE RAS INT C HUM R, P237
WILLIAMS D, 1993, PROCEEDINGS : IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-3, P1025, DOI 10.1109/ROBOT.1993.292110
Wu M, 2011, P 2011 IEEE SICE INT, P779
Wu MH, 2014, IEEE INT CONF ROBOT, P3446, DOI 10.1109/ICRA.2014.6907355
Yokoyama K, 2003, IEEE INT CONF ROBOT, P2985
Yoshikawa T, 1991, 5 INT C ADV ROB CINC, V1, P579
NR 31
TC 2
Z9 2
U1 0
U2 8
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD JAN 17
PY 2016
VL 30
IS 2
BP 131
EP 149
DI 10.1080/01691864.2015.1096212
PG 19
WC Robotics
SC Robotics
GA DB7MC
UT WOS:000368698500005
DA 2018-01-22
ER

PT J
AU Yogeeswaran, K
Zlotowski, J
Livingstone, M
Bartneck, C
Sumioka, H
Ishiguro, H
AF Yogeeswaran, Kumar
Zlotowski, Jakub
Livingstone, Megan
Bartneck, Christoph
Sumioka, Hidenobu
Ishiguro, Hiroshi
TI The Interactive Effects of Robot Anthropomorphism and Robot Ability on
Perceived Threat and Support for Robotics Research
SO JOURNAL OF HUMAN-ROBOT INTERACTION
LA English
DT Article
DE human-robot interaction; anthropomorphism; ability; threat; uncanny
valley
ID UNCANNY VALLEY; PERCEPTION; DISTINCTIVENESS
AB The present research examines how a robot's physical anthropomorphism interacts
with perceived ability of robots to impact the level of realistic and identity
threat that people perceive from robots and how it affects their support for
robotics research. Experimental data revealed that participants perceived robots to
be significantly more threatening to humans after watching a video of an android
that could allegedly outperform humans on various physical and mental tasks
relative to a humanoid robot that could do the same. However, when participants
were not provided with information about a new generation of robots' ability
relative to humans, then no significant differences were found in perceived threat
following exposure to either the android or humanoid robots. Similarly,
participants also expressed less support for robotics research after seeing an
android relative to a humanoid robot outperform humans. However, when provided with
no information about robots' ability relative to humans, then participants showed
marginally decreased support for robotics research following exposure to the
humanoid relative to the android robot. Taken together, these findings suggest that
very humanlike robots can not only be perceived as a realistic threat to human
jobs, safety, and resources, but can also be seen as a threat to human identity and
uniqueness, especially if such robots also outperform humans. We also demonstrate
the potential downside of such robots to the public's willingness to support and
fund robotics research.
C1 [Yogeeswaran, Kumar; Livingstone, Megan] Univ Canterbury, Dept Psychol,
Christchurch, New Zealand.
[Zlotowski, Jakub; Bartneck, Christoph] Univ Canterbury, Human Interface Technol
Lab, Christchurch, New Zealand.
[Sumioka, Hidenobu; Ishiguro, Hiroshi] Adv Telecommun Res Inst Int, Hiroshi
Ishiguro Lab, Kyoto, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn Sci, Dept Syst Innovat, Suita,
Osaka, Japan.
RP Yogeeswaran, K (reprint author), Univ Canterbury, Dept Psychol, Christchurch,
New Zealand.
EM kumar.yogeeswaran@canterbury.ac.nz; jakub.zlotowski@pg.canterbury.ac.nz;
mgl44@uclive.ac.nz; christoph.bartneck@canterbury.ac.nz; sumioka@atr.jp;
ishiguro@sys.es.osaka-u.ac.jp
OI Zlotowski, Jakub/0000-0002-5020-4365; Bartneck,
Christoph/0000-0003-4566-4815
CR Aggarwal P, 2007, J CONSUM RES, V34, P468, DOI 10.1086/518544
Asendorpf JB, 2013, EUR J PERSONALITY, V27, P108, DOI 10.1002/per.1919
Bartneck C., 2010, PALADYN, P1
Bartneck C, 2007, P 16 IEEE INT S ROB, P368, DOI [DOI
10.1109/R0MAN.2007.4415111, DOI 10.1109/ROMAN.2007.4415111]
Bartneck C., 2007, P ACM IEEE INT C HUM, P81
Bartneck C, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P553, DOI 10.1109/ROMAN.2008.4600724
Bethel C. L., 2009, P 4 ACM IEEE INT C H, P291
Butz DA, 2011, J EXP SOC PSYCHOL, V47, P22, DOI 10.1016/j.jesp.2010.07.014
Duffy BR, 2003, ROBOT AUTON SYST, V42, P177, DOI 10.1016/S0921-8890(02)00374-3
Fasola J, 2012, P IEEE, V100, P2512, DOI 10.1109/JPROC.2012.2200539
Feil-Seifer D, 2011, ACMIEEE INT CONF HUM, P323, DOI 10.1145/1957656.1957785
Giullian N., 2010, P IEEE INT C SYST MA, P2595
GOETZ J., 2003, IEEE INT WORKSH ROB, V12, P55
Gray K, 2012, COGNITION, V125, P125, DOI 10.1016/j.cognition.2012.06.007
Hancock PA, 2011, HUM FACTORS, V53, P517, DOI 10.1177/0018720811417254
Haring KS, 2014, LECT NOTES ARTIF INT, V8755, P166, DOI 10.1007/978-3-319-11973-
1_17
Hewstone M, 2002, ANNU REV PSYCHOL, V53, P575, DOI
10.1146/annurev.psych.53.100901.135109
Jetten J, 1998, J PERS SOC PSYCHOL, V74, P1481, DOI 10.1037/0022-3514.74.6.1481
Jetten J, 1997, EUR J SOC PSYCHOL, V27, P635, DOI 10.1002/(SICI)1099-
0992(199711/12)27:6<635::AID-EJSP835>3.0.CO;2-#
Kanda T., 2005, P IEEE RSJ INT C INT, P62
Katsyri J, 2015, FRONT PSYCHOL, V6, DOI 10.3389/fpsyg.2015.00390
Kiesler S, 2004, HUM-COMPUT INTERACT, V19, P1, DOI 10.1207/s15327051hci1901&2_1
King RD, 2004, NATURE, V427, P247, DOI 10.1038/nature02236
Lee H. R., 2014, P 2014 ACM IEEE INT, P17, DOI DOI 10.1145/2559636.2559676
LeVine R. A., 1972, ETHNOCENTRISM THEORI
Lewis T., 2015, LIVE SCI
Lohse M., 2011, P 20 IEEE INT S ROB, P485
MacDorman KF, 2006, INTERACT STUD, V7, P297, DOI 10.1075/is.7.3.03mac
MacDorman KF, 2016, COGNITION, V146, P190, DOI 10.1016/j.cognition.2015.09.019
MacDorman KF, 2013, COMPUT HUM BEHAV, V29, P1671, DOI 10.1016/j.chb.2013.01.051
MacDorman K. F., 2008, AI SOC, V23, P485, DOI [10.1007/s00146-008-0181-2, DOI
10.1007/S00146-008-0181-2]
Maddux WW, 2008, PERS SOC PSYCHOL B, V34, P74, DOI 10.1177/0146167207309195
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Resnick B., 2016, VOX
Riek BM, 2006, PERS SOC PSYCHOL REV, V10, P336, DOI 10.1207/s15327957pspr1004_4
Riek L. D., 2009, P 3 INT C AFF COMP I, P1, DOI [10.1109/ACII.2009.5349423, DOI
10.1109/ACII.2009.5349423]
Ripley W, 2014, CNN
Rosenthal-von der Puetten AM, 2015, INT J SOC ROBOT, V7, P799, DOI
10.1007/s12369-015-0321-z
Rosenthal-von der Putten AM, 2014, COMPUT HUM BEHAV, V36, P422, DOI
10.1016/j.chb.2014.03.066
Sauppe A., 2015, P 33 ANN ACM C HUM F, P3613
Saygin AP, 2012, SOC COGN AFFECT NEUR, V7, P413, DOI 10.1093/scan/nsr025
Sherif M., 1961, INTERGROUP CONFLICT
Spears R, 2002, SELF MOTIVATION EMER, P141
Stephan WG, 1999, J APPL SOC PSYCHOL, V29, P2221, DOI 10.1111/j.1559-
1816.1999.tb00107.x
Stewart J., 2011, BBC NEWS
Syrdal D. S., 2008, P AAAI FALL S TECHN, VFS-08-02, P116
Tajfel H., 1986, PSYCHOL INTERGROUP R, P7, DOI DOI 10.1089/CPB.2008.0150
Wade E., 2011, P IEEE RSJ INT C INT, P2403
Wang L., 2010, P 5 ACM IEEE INT C H, P359, DOI DOI 10.1145/1734454.1734578
Wang SS, 2015, REV GEN PSYCHOL, V19, P393, DOI 10.1037/gpr0000056
Waugh R., 2015, STEPHEN HAWKING WARN
Yogeeswaran K, 2014, J PERS SOC PSYCHOL, V106, P772, DOI 10.1037/a0035830
Yogeeswaran K, 2012, EUR J SOC PSYCHOL, V42, P691, DOI 10.1002/ejsp.1894
Zlotowski J., 2013, P HRI2013 WORKSH DES, P7
Zlotowski J, 2015, INT J SOC ROBOT, V7, P347, DOI 10.1007/s12369-014-0267-6
Zlotowski JA, 2015, FRONT PSYCHOL, V6, DOI 10.3389/fpsyg.2015.00883
NR 56
TC 2
Z9 2
U1 0
U2 0
PU JOURNAL HUMAN-ROBOT INTERACTION
PI STANFORD
PA JOURNAL HUMAN-ROBOT INTERACTION, STANFORD, CA 00000 USA
SN 2163-0364
J9 J HUM-ROBOT INTERACT
JI J. Hum.-Robot Interact.
PY 2016
VL 5
IS 2
BP 29
EP 47
DI 10.5898/JHRI.5.2.Yogeeswaran
PG 19
WC Robotics
SC Robotics
GA FA3BG
UT WOS:000405315400002
OA gold
DA 2018-01-22
ER

PT J
AU Matsuda, G
Hiraki, K
Ishiguro, H
AF Matsuda, Goh
Hiraki, Kazuo
Ishiguro, Hiroshi
TI EEG-Based Mu Rhythm Suppression to Measure the Effects of Appearance and
Motion on Perceived Human Likeness of a Robot
SO JOURNAL OF HUMAN-ROBOT INTERACTION
LA English
DT Article
DE humanoid robot; android; EEG; mirror neuron; mu suppression
ID MIRROR-NEURON SYSTEM; LIGHT BIOLOGICAL MOTION; HUMAN PREMOTOR CORTEX;
ACTION RECOGNITION; PERCEPTION; FMRI; BRAIN; IMITATION; RESPONSES;
EMPATHY
AB We performed two electroencephalogram (EEG) experiments to examine how humanoid
robot appearance and motion affect human subjects' perception of the robots. Mu
rhythm suppression of EEG, which is considered to reflect mirror neuron system
(MNS) activation, was regarded as a neurological indicator of perceived human
likeness. Video clips depicting upper-body actions performed by three agents
(human, android, and mechanical robot) were presented in experiment 1. In
experiment 2, point-light motion (PLM) stimuli generated from experiment 1 stimuli
were presented. The results of experiment 1 revealed that only the human and
android actions evoked significant mu suppression. No PLM stimulus elicited
significant mu suppression in experiment 2. These findings suggest that an overall
human-like form is necessary to activate the MNS.
C1 [Matsuda, Goh] Kyoto Prefectural Univ Med, Kyoto, Japan.
[Hiraki, Kazuo] Univ Tokyo, CREST JST, Tokyo, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Adv Telecommun Res, Osaka, Japan.
RP Hiraki, K (reprint author), Univ Tokyo, Grad Sch Arts & Sci, Tokyo, Japan.
EM khiraki@idea.c.u-tokyo.ac.jp
FU Japan Society for the Promotion of Science KAKENHI [20220002, 25220004]
FX This work was supported by Japan Society for the Promotion of Science
KAKENHI 20220002 and 25220004.
CR Arita A, 2005, COGNITION, V95, pB49, DOI 10.1016/j.cognition.2004.08.001
Buccino G, 2004, BRAIN LANG, V89, P370, DOI 10.1016/S0093-934X(03)00356-0
Buccino G, 2001, EUR J NEUROSCI, V13, P400, DOI 10.1046/j.1460-9568.2001.01385.x
Chaminade T, 2010, PLOS ONE, V5, DOI 10.1371/journal.pone.0011577
Cochin S, 1999, EUR J NEUROSCI, V11, P1839, DOI 10.1046/j.1460-9568.1999.00598.x
Desy MC, 2013, NEUROSCI RES, V77, P58, DOI 10.1016/j.neures.2013.08.003
Gazzola V, 2007, NEUROIMAGE, V35, P1674, DOI 10.1016/j.neuroimage.2007.02.003
Gobbini MI, 2011, J COGNITIVE NEUROSCI, V23, P1911, DOI 10.1162/jocn.2010.21574
Hamzei F, 2003, NEUROIMAGE, V19, P637, DOI 10.1016/S1053-8119(03)00087-9
Hirai M, 2003, NEUROSCI LETT, V344, P41, DOI 10.1016/S0304-3940(03)00413-0
Hirai M, 2007, BRAIN RES, V1165, P105, DOI 10.1016/j.brainres.2007.05.078
Iacoboni M, 2009, ANNU REV PSYCHOL, V60, P653, DOI
10.1146/annurev.psych.60.110707.163604
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Ishiguro H, 2006, CONNECT SCI, V18, P319, DOI 10.1080/09540090600873953
Jackson PL, 2006, NEUROPSYCHOLOGIA, V44, P752, DOI
10.1016/j.neuropsychologia.2005.07.015
Jarvelainen J, 2001, NEUROREPORT, V12, P3493
JOHANSSON G, 1973, PERCEPT PSYCHOPHYS, V14, P201, DOI 10.3758/BF03212378
Kaneko K., 2009, P 9 IEEE RAS INT C H, DOI [10.1109/ICHR.2009.5379537, DOI
10.1109/ICHR.2009.5379537]
Krach S, 2008, PLOS ONE, V3, DOI 10.1371/journal.pone.0002597
Matsuda G., 2012, COGNITIVE STUDIES B, V19, P434
Minato T, 2006, ADV ROBOTICS, V20, P1147, DOI 10.1163/156855306778522505
Miura N, 2010, SOC NEUROSCI-UK, V5, P40, DOI 10.1080/17470910903083256
Muthukumaraswamy SD, 2004, COGNITIVE BRAIN RES, V19, P195, DOI
10.1016/j.cogbrainres.2003.12.001
Nishio S, 2007, HUMANOID ROBOTS NEW, P343, DOI DOI 10.5772/4876
Oberman LM, 2007, NEUROCOMPUTING, V70, P2194, DOI 10.1016/j.neucom.2006.02.024
Oztop E., 2005, INT J HUM ROBOT, V2, P537, DOI DOI 10.1142/S0219843605000582
Perry A, 2009, BRAIN RES, V1282, P126, DOI 10.1016/j.brainres.2009.05.059
Pineda JA, 2005, BRAIN RES REV, V50, P57, DOI 10.1016/j.brainresrev.2005.04.005
Press C, 2011, NEUROSCI BIOBEHAV R, V35, P1410, DOI
10.1016/j.neubiorev.2011.03.004
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Rizzolatti G, 2008, CURR OPIN NEUROBIOL, V18, P179, DOI
10.1016/j.conb.2008.08.001
Saygin AP, 2004, J NEUROSCI, V24, P6181, DOI 10.1523/JNEUROSCI.0504-04.2004
Saygin AP, 2012, SOC COGN AFFECT NEUR, V7, P413, DOI 10.1093/scan/nsr025
Shimada S, 2006, NEUROIMAGE, V32, P930, DOI 10.1016/j.neuroimage.2006.03.044
Tai YF, 2004, CURR BIOL, V14, P117, DOI 10.1016/j.cub.2004.01.005
Ulloa ER, 2007, BEHAV BRAIN RES, V183, P188, DOI 10.1016/j.bbr.2007.06.007
Urgen BA, 2013, FRONT NEUROROBOTICS, V7, DOI 10.3389/fnbot.2013.00019
Zhao SY, 2006, NEW MEDIA SOC, V8, P401, DOI 10.1177/1461444806061951
NR 38
TC 0
Z9 0
U1 3
U2 3
PU JOURNAL HUMAN-ROBOT INTERACTION
PI STANFORD
PA JOURNAL HUMAN-ROBOT INTERACTION, STANFORD, CA 00000 USA
SN 2163-0364
J9 J HUM-ROBOT INTERACT
JI J. Hum.-Robot Interact.
PY 2016
VL 5
IS 1
BP 68
EP 81
DI 10.5898/JHRI.5.1.Matsuda
PG 14
WC Robotics
SC Robotics
GA FA3BD
UT WOS:000405315100003
DA 2018-01-22
ER
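
Illustrative sketch (not part of the record above): the mu-suppression index used as a neurological indicator in studies of this kind is commonly computed as the log ratio of 8-13 Hz EEG band power during action observation to the power during a baseline period, with negative values indicating suppression. A minimal NumPy/SciPy sketch follows; the sampling rate, epoching, and band limits are illustrative assumptions rather than the paper's analysis pipeline.

    # Minimal sketch of a standard mu-suppression index: log of the ratio of
    # 8-13 Hz power during an observation condition to a baseline period.
    # fs and the band limits are illustrative placeholders.
    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, band=(8.0, 13.0)):
        # Integrate the Welch power spectral density over the mu band.
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
        sel = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[sel], f[sel])

    def mu_suppression_index(observation_epoch, baseline_epoch, fs=256.0):
        # Values below zero indicate mu suppression relative to baseline.
        return float(np.log(band_power(observation_epoch, fs) /
                            band_power(baseline_epoch, fs)))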

PT J
AU Cisneros, R
Nakaoka, S
Morisawa, M
Kaneko, K
Kajita, S
Sakaguchi, T
Kanehiro, F
AF Cisneros, Rafael
Nakaoka, Shin'ichiro
Morisawa, Mitsuharu
Kaneko, Kenji
Kajita, Shuuji
Sakaguchi, Takeshi
Kanehiro, Fumio
TI Effective teleoperated manipulation for humanoid robots in partially
unknown real environments: team AIST-NEDO's approach for performing the
Plug Task during the DRC Finals
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid Robots; DARPA Robotics Challenge; Teleoperation system; Plug
Task
AB This paper introduces the approach used by AIST-NEDO team during the DARPA
Robotics Challenge (DRC) Finals to perform the Plug Task, which is taken as a case
study to show how we dealt, in general, with the manipulation tasks proposed by
this challenge, using similar strategies. For that purpose, this paper also
describes the system used to teleoperate our humanoid robot, HRP-2Kai, in a
disaster-hit scenario that is partially unknown, inherently non-structured and
where communications are not reliable. This description includes an overview of our
robotic platform, as well as the overall structure of the teleoperation interface
and the algorithms behind it. Also, the way the Plug Task was performed using these
algorithms as operation units is explained in detail, by means of a high-level
description. Finally, this paper presents the results for this task obtained during
the actual competition, as well as a comparison with the performance achieved by
other teams that performed the Plug Task in the DRC Finals.
C1 [Cisneros, Rafael; Nakaoka, Shin'ichiro; Morisawa, Mitsuharu; Kaneko, Kenji;
Kajita, Shuuji; Sakaguchi, Takeshi; Kanehiro, Fumio] Natl Inst Adv Ind Sci &
Technol, Tsukuba, Ibaraki, Japan.
RP Cisneros, R (reprint author), Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki,
Japan.
EM rafael.cisneros@aist.go.jp
RI Nakaoka, Shin'ichiro/M-5396-2016; Morisawa, Mitsuharu/M-3327-2016;
Kanehiro, Fumio/L-8660-2016; KANEKO, Kenji/M-5360-2016; Kajita,
Shuuji/M-5010-2016; Cisneros Limon, Rafael/M-5363-2016; Sakaguchi,
Takeshi/M-4024-2016
OI Nakaoka, Shin'ichiro/0000-0002-2346-1251; Morisawa,
Mitsuharu/0000-0003-0056-4335; Kanehiro, Fumio/0000-0002-0277-3467;
KANEKO, Kenji/0000-0002-1888-8787; Kajita, Shuuji/0000-0001-8188-2209;
Cisneros Limon, Rafael/0000-0002-0850-070X; Sakaguchi,
Takeshi/0000-0002-2726-7448
FU NEDO
FX This research was partially supported by the NEDO's 'The International
R&D and Demonstration Project in Environment and Medical Device
Sector/The International R&D and Demonstration Project on Robot
Field/R&D Project on Disaster Response Robots' and 'Development of a
highly dependable humanoid robot system that can work in unstructured
environments' project.
CR Ackerman E, 2015, DARPA ROBOTICS CHALL
American Defense Advanced Research Projects Agency (DARPA), 2015, DARPA ROB
CHALL FIN
American Defense Advanced Research Projects Agency (DARPA), 2015, DRC FIN RUL
BOOK
Cisneros R, 2015, 33 ANN C ROB SOC JAP
Cisneros R, 2015, IEEE-RAS INT C HUMAN, P1102, DOI
10.1109/HUMANOIDS.2015.7363506
Fallon M., 2014, TECHNICAL REPORT
Hornung A, 2013, AUTON ROBOT, V34, P189, DOI 10.1007/s10514-012-9321-0
Kanehiro F, 2012, IEEE INT C INT ROBOT, P1911, DOI 10.1109/IROS.2012.6385587
Kaneko K, 2015, IEEE-RAS INT C HUMAN, P132, DOI 10.1109/HUMANOIDS.2015.7363526
Kanoun O, 2011, IEEE T ROBOT, V27, P785, DOI 10.1109/TRO.2011.2142450
Krut S, 2010, IEEE T ROBOT, V26, P853, DOI 10.1109/TRO.2010.2060830
Morisawa M., 2012, IEEE RAS INT C HUM R, P734
Nakaoka S., 2007, IEEE RSJ INT C INT R, P3641
Nakaoka S, 2015, IEEE-RAS INT C HUMAN, P895, DOI 10.1109/HUMANOIDS.2015.7363467
Nakaoka S, 2014, 2014 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION
(SII), P590, DOI 10.1109/SII.2014.7028105
Nakaoka S, 2012, 2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION
(SII), P79, DOI 10.1109/SII.2012.6427350
Phillips-Grafflin C, 2014, INTEL SERV ROBOT, V7, P121, DOI 10.1007/s11370-014-
0156-8
Quinlan S., 1994, IEEE INT C ROB AUT I
Romay A., 2014, IEEE RAS INT C HUM R, P979
Rusu R. B., 2011, IEEE INT C ROB AUT I
Strickland Melissa, 2011, EXPLAINER WHAT WENT
Tokyo Electric Power Plant, 2016, DEC PLAN FUK DAIICH
NR 22
TC 0
Z9 0
U1 0
U2 1
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2016
VL 30
IS 24
BP 1544
EP 1558
DI 10.1080/01691864.2016.1250674
PG 15
WC Robotics
SC Robotics
GA EE5LH
UT WOS:000389648500003
DA 2018-01-22
ER

PT J
AU Basoeki, F
DallaLibera, F
Ishiguro, H
AF Basoeki, Fransiska
DallaLibera, Fabio
Ishiguro, Hiroshi
TI On the human perception of dissimilarities between postures of humanoids
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robots; posture distance; human perception
ID ACTION RECOGNITION; SIMILARITY; ROBOT
AB This paper analyzes the perceived dissimilarity between postures of humanoid
robots. First, the human perception of absolute distance between postures is
examined. It is shown that results derived in computer graphics for human figures
can be replicated with humanoids, despite the differences in body proportions and
arrangement of the degrees of freedom. Subsequently, the perception of relative
distances between postures is considered. It is shown that, paradoxically, better
prediction of the distance between a pair of postures does not necessarily lead to
better predictions of which of multiple distances is the shortest. Finally, the
paper concludes by briefly discussing possible implications of this finding for the
fields of motion retrieval and motion blending.
C1 [Basoeki, Fransiska; DallaLibera, Fabio; Ishiguro, Hiroshi] Osaka Univ, Grad Sch
Engn Sci, Dept Syst Innovat, Toyonaka, Osaka, Japan.
RP Basoeki, F (reprint author), Osaka Univ, Grad Sch Engn Sci, Dept Syst Innovat,
Toyonaka, Osaka, Japan.
EM fransiska.basoeki@irl.sys.es.osaka-u.ac.jp
FU JSPS [A2647330, 26880014]
FX This work was supported by the JSPS Research Fellowship for Young
Scientists under [grant number A2647330]; JSPS Research Activity
Start-up Grant-in-Aid for Scientific Research under [grant number
26880014].
CR Argall BD, 2009, ROBOT AUTON SYST, V57, P469, DOI 10.1016/j.robot.2008.10.024
Basoeki F, 2015, INT J SOC ROBOT, V7, P743, DOI 10.1007/s12369-015-0318-7
Beck D., 1992, THESIS
Candamo J, 2010, IEEE T INTELL TRANSP, V11, P206, DOI 10.1109/TITS.2009.2030963
Chen C, 2011, IEEE T VIS COMPUT GR, V17, P1676, DOI 10.1109/TVCG.2010.272
Chen C, 2011, COMPUT VIS IMAGE UND, V115, P290, DOI 10.1016/j.cviu.2010.11.007
Chen C, 2009, COMPUT ANIMAT VIRT W, V20, P267, DOI 10.1002/cav.297
Choensawat W., 2012, J ADV COMPUTATIONAL, V16, P13
Ek CH, 2008, LECT NOTES COMPUT SC, V4892, P132
Griffin H, 2015, IEEE T AFFECT COMPUT, V6, P1
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Kliper-Gross O, 2011, LECT NOTES COMPUT SC, V7005, P31, DOI 10.1007/978-3-642-
24471-1_3
Kovar L, 2004, ACM T GRAPHIC, V23, P559, DOI 10.1145/1015706.1015760
Kruger V, 2007, ADV ROBOTICS, V21, P1473
Laban Rudolf, 1971, MASTERY MOVEMENT
Lallee S, 2012, IEEE T AUTON MENT DE, V4, P239, DOI 10.1109/TAMD.2012.2199754
Laming D, 2003, CENGAGE LEARNING EME
Lee D, 2010, INT J ROBOT RES, V29, P60, DOI 10.1177/0278364909342282
Lei J, 2015, SIGNAL PROCESS, V108, P136, DOI 10.1016/j.sigpro.2014.08.030
Liu YL, 1997, ERGONOMICS, V40, P818, DOI 10.1080/001401397187810
Lourens T, 2010, ROBOT AUTON SYST, V58, P1256, DOI 10.1016/j.robot.2010.08.006
Masterson J, 2008, J CHILD LANG, V35, P373, DOI 10.1017/S0305000907008549
Munaro M, 2013, BIOL INSPIR COGN ARC, V5, P42, DOI 10.1016/j.bica.2013.05.008
Poppe R, 2010, IMAGE VISION COMPUT, V28, P976, DOI 10.1016/j.imavis.2009.11.014
Pot E, 2009, 18 IEEE INT S ROB HU, V2009, P46
Qu Y, 2012, ERGONOMICS, V55, P885, DOI 10.1080/00140139.2012.682165
Rett J, 2009, INT J REASONING BASE, V2, P13
Saldien J, 2014, INT J ADV ROBOT SYST, V11, DOI 10.5772/58402
Seymour B, 2008, CURR OPIN NEUROBIOL, V18, P173, DOI 10.1016/j.conb.2008.07.010
Shon A, 2005, ADV NEURAL INFORM PR, V18, P1233
Shotton J, 2013, COMMUN ACM, V56, P116, DOI 10.1145/2398356.2398381
Tang JKT, 2008, COMPUT ANIMAT VIRT W, V19, P211, DOI 10.1002/cav.260
Tran D, 2008, LECT NOTES COMPUT SC, V5302, P548, DOI 10.1007/978-3-540-88682-
2_42
Vogt D, 2014, J VIRTUAL REALITY BR, V11
Yoo I, 2014, VISUAL COMPUT, V30, P213, DOI 10.1007/s00371-013-0797-1
Zatsiorsky V. M., 2012, BIOMECHANICS SKELETA
NR 36
TC 1
Z9 1
U1 0
U2 0
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2016
VL 30
IS 21
BP 1395
EP 1405
DI 10.1080/01691864.2016.1216724
PG 11
WC Robotics
SC Robotics
GA DX2QD
UT WOS:000384214700003
DA 2018-01-22
ER

PT J
AU Tajitsu, Y
AF Tajitsu, Y.
TI Smart piezoelectric fabric and its application to control of humanoid
robot
SO FERROELECTRICS
LA English
DT Article
DE Piezoelectric polymer; fiber; chiral polymer; poly-l-lactic acid;
sensor; fabric
AB We have developed a piezoelectric fabric for sensing applied stress and strain
using piezoelectric poly-l-lactic acid (PLLA) fibers, and tailored smart clothing
having the function of sensing complex human motion using the piezoelectric PLLA
fabric. To provide a high-quality PLLA fabric with large piezoelectricity,
simultaneously satisfying the characteristics of softness, stretchability, and
wearability, we first attempted to improve the balance of the physical
properties of PLLA fibers. We investigated the use of an acrylic copolymer as a
plastic additive for improving the piezoelectric and mechanical properties of PLLA
fibers, since the acrylic copolymer has extremely high dispersibility and
compatibility, and furthermore optimized the PLLA fiber fabrication conditions
to improve the physical properties. To realize a piezoelectric fabric with the
ability to sense complex human motion, plain, twill, and satin weaves with
improved piezoelectricity, softness, stretchability, and wearability were
produced using the PLLA fibers by Teijin Limited, Japan, a joint research company.
Moreover, we developed a prototype system that allows human movement, detected
through the motion sensing of smart clothing obtained by sewing together pieces of
piezoelectric fabric, to be linked with that of a humanoid robot. At present,
simple procedures such as bending of the arms and twisting of the wrists can be
replicated by the humanoid robot but not complex movements of the arms.
C1 [Tajitsu, Y.] Kansai Univ, Grad Sch Sci & Engn, Dept Elect Engn, Suita, Osaka,
Japan.
RP Tajitsu, Y (reprint author), Kansai Univ, Grad Sch Sci & Engn, Dept Elect Engn,
Suita, Osaka, Japan.
EM tajitsu@kansai-u.ac.jp
FU Ministry of Education, Culture, Sports, Science and Technology of Japan
[24655108, 15K13714]
FX We would like to thank Teijin Co., Ltd., for kindly preparing the
piezoelectric PLLA fabrics and smart clothing. This work was also
supported in part by Grants-in-Aid for Scientific Research (Nos.
24655108 and 15K13714) from the Ministry of Education, Culture, Sports,
Science and Technology of Japan. We express our deep gratitude for
receiving the prize for Science and Technology in the Commendation for
Science and Technology (2016) by the Minister of Education, Culture,
Sports, Science and Technology.
CR Ando M., 2013, INT DISPL WORKSH IDW
Asaka K., 2014, SOFT ACTUATORS MAT M
Carpi F, 2005, IEEE T INF TECHNOL B, V9, P295, DOI 10.1109/TITb.2005.854514
Carpi F., 2009, BIOMEDICAL APPL ELEC
Chawla K., 1998, FIBROUS MAT
Edmison J, 2002, SIXTH INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS,
PROCEEDINGS, P41, DOI 10.1109/ISWC.2002.1167217
Fukada E, 2000, IEEE T ULTRASON FERR, V47, P1277, DOI 10.1109/58.883516
Galetti P. M., 1988, MED APPL PIEZOELECTR
Grossman P., 2003, P INT WORKSH NEW GEN, P73
Honda M, 2007, JPN J APPL PHYS 1, V46, P7122, DOI 10.1143/JJAP.46.71221
Ito S, 2012, JPN J APPL PHYS, V51, DOI 10.1143/JJAP.51.09LD16
Nalwa H. S., 1995, FERROELECTRIC POLYM
Ochiai T, 1998, JPN J APPL PHYS 1, V37, P3374, DOI 10.1143/JJAP.37.3374
Shiomi Y, 2013, JPN J APPL PHYS, V52, DOI 10.7567/JJAP.52.09KE02
Tajitsu Y, 2004, FERROELECTRICS, V304, P1025, DOI 10.1080/00150190490460885
Tajitsu Y, 2010, IEEE T DIELECT EL IN, V17, P1050, DOI 10.1109/TDEI.2010.5539674
Tajitsu Y., 2012, 2012 INT C EL ACT PO
Tajitsu Y., 2008, IEEE T ULTRASON FERR, V55, P1277
Tajitsu Y, 2013, IEEE T ULTRASON FERR, V60, P1625, DOI 10.1109/TUFFC.2013.2744
Wang T, 1988, APPL FERROELECTRIC P
Weber J.-L., 2003, P INT WORKSH NEW GEN, P169
Wilhelm F. H., LIFESHIRT ADV SYSTEM
NR 22
TC 2
Z9 2
U1 5
U2 18
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0015-0193
EI 1563-5112
J9 FERROELECTRICS
JI Ferroelectrics
PY 2016
VL 499
IS 1
SI SI
BP 36
EP 46
DI 10.1080/00150193.2016.1171982
PN 3
PG 11
WC Materials Science, Multidisciplinary; Physics, Condensed Matter
SC Materials Science; Physics
GA DQ5LE
UT WOS:000379245900005
DA 2018-01-22
ER

PT J
AU Nakano, YI
Yoshino, T
Yatsushiro, M
Takase, Y
AF Nakano, Yukiko I.
Yoshino, Takashi
Yatsushiro, Misato
Takase, Yutaka
TI Generating Robot Gaze on the Basis of Participation Roles and Dominance
Estimation in Multiparty Interaction
SO ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS
LA English
DT Article
DE Design; Experimentation; Human Factors; Gaze generation; conversational
dominance; participation role; humanoid robot
ID SPEAKING; CONVERSATIONS; METAANALYSIS
AB Gaze is an important nonverbal feedback signal in multiparty face-to-face
conversations. It is well known that gaze behaviors differ depending on
participation role: speaker, addressee, or side participant. In this study, we
focus on dominance as another factor that affects gaze. First, we conducted an
empirical study and analyzed its results that showed how gaze behaviors are
affected by both dominance and participation roles. Then, using speech and gaze
information that was statistically significant for distinguishing the more dominant
and less dominant person in an empirical study, we established a regression-based
model for estimating conversational dominance. On the basis of the model, we
implemented a dominance estimation mechanism that processes online speech and head
direction data. Then we applied our findings to human-robot interaction. To design
robot gaze behaviors, we analyzed gaze transitions with respect to participation
roles and dominance and implemented gaze-transition models as robot gaze behavior
generation rules. Finally, we evaluated a humanoid robot that has dominance
estimation functionality and determines its gaze based on the gaze models, and we
found that dominant participants had a better impression of less dominant robot
gaze behaviors. This suggests that a robot using our gaze models was preferred to a
robot that was simply looking at the speaker. We have demonstrated the importance
of considering dominance in human-robot multiparty interaction.
C1 [Nakano, Yukiko I.; Yoshino, Takashi; Yatsushiro, Misato; Takase, Yutaka] Seikei
Univ, 3-3-1 Kichijoji Kitamachi, Musashino, Tokyo 1808633, Japan.
RP Nakano, YI; Takase, Y (reprint author), Seikei Univ, 3-3-1 Kichijoji Kitamachi,
Musashino, Tokyo 1808633, Japan.
EM dm146213@cc.seikei.ac.jp; yutaka-takase@st.seikei.ac.jp
CR Al Moubayed Samer, 2012, ACM T INTERACTIVE IN, V1, P1
Al Moubayed S., 2012, LECT NOTES COMPUT SC, V7403, P114, DOI DOI 10.1007/978-3-
642-34584-5
Argyle M., 1976, GAZE MUTUAL GAZE
Argyle Michael, 1977, J ENV PSYCHOL NONVER, V1, P6
Bales R. F., 1950, INTERACTION PROCESS
Nikolaus Bee, 2009, P AFF COMP INT INT W
Bernieri FJ, 1996, J PERS SOC PSYCHOL, V71, P110, DOI 10.1037/0022-3514.71.1.110
Bohus Dan, 2011, P SIGDIAL 2011 C POR, P98
Bohus Dan, 2010, P INT C MULT INT WOR
Bohus Dan, 2009, P INT C MULT INT WOR
Chen Lei, 2009, P 11 INT C MULT INT
Clark H. H., 1996, USING LANGUAGE
Jayagopi Dinesh Babu, 2013, P HUM ROB INT HRI 13
Dovidio J. F., 1985, POWER DOMINANCE NONV, P129
DUNCAN S, 1972, J PERS SOC PSYCHOL, V23, P283, DOI 10.1037/h0033031
Duncan S. D., 1974, LANG SOC, V3, P161, DOI [10.1017/S0047404500004322, DOI
10.1017/S0047404500004322]
Escalera Sergio, 2010, EURASIP J ADV SIGNAL
Gatica-Perez D, 2009, IMAGE VISION COMPUT, V27, P1775, DOI
10.1016/j.imavis.2009.01.004
GOETSCH GG, 1980, SOC PSYCHOL QUART, V43, P173, DOI 10.2307/3033620
Goffman E., 1981, FORMS TALK
Hall JA, 2005, PSYCHOL BULL, V131, P898, DOI 10.1037/0033-2909.131.6.898
Hollingshead A. B., 1975, 4 FACTOR INDEX SOCIA
Huang Hung-Hsuan, 2011, P 13 INT C MULT INT, P401
Hung Hayley, 2008, P 10 INT C MULT INT, P233
Jayagopi DB, 2009, IEEE T AUDIO SPEECH, V17, P501, DOI 10.1109/TASL.2008.2008238
Katzenmaier M., 2004, P 6 INT C MULT INT, P144
KENDON A, 1967, ACTA PSYCHOL, V26, P22, DOI 10.1016/0001-6918(67)90005-4
Knapp M. L., 2010, NONVERBAL COMMUNICAT
Lance B, 2010, AUTON AGENT MULTI-AG, V20, P50, DOI 10.1007/s10458-009-9097-6
Lee A, 2001, P EUR C SPEECH COMM, P1691
Mast MS, 2002, HUM COMMUN RES, V28, P420, DOI 10.1111/j.1468-2958.2002.tb00814.x
Mutlu B., 2009, P 4 ACM IEEE INT C H, P61, DOI DOI 10.1145/1514095.1514109
Otsuka Kazuhiro, 2006, P CHI 06 HUM FACT CO
Rienks R., 2006, P INT C MULT INT, P257
Rienks Rutger, 2005, P 2 JOINT WORKSH MUL
Sanchez-Cortes D, 2013, J MULTIMODAL USER IN, V7, P39, DOI 10.1007/s12193-012-
0101-0
Sanchez-Cortes Dairazalia, 2010, P INT C MULT INT WOR
Sheikhi Samira, 2013, P 6 WORKSH EYE GAZ I
Terken J, 2007, ICMI'07: PROCEEDINGS OF THE NINTH INTERNATIONAL CONFERENCE ON
MULTIMODAL INTERFACES, P94
Tusing KJ, 2000, HUM COMMUN RES, V26, P148, DOI 10.1093/hcr/26.1.148
Vertegaal R, 2001, P SIGCHI C HUM FACT, P301
Vertegaal Roel, 1999, P SIGCHI C HUM FACT, P294
Yamazaki A, 2008, CHI 2008: 26TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN
COMPUTING SYSTEMS VOLS 1 AND 2, CONFERENCE PROCEEDINGS, P131
NR 43
TC 0
Z9 0
U1 0
U2 0
PU ASSOC COMPUTING MACHINERY
PI NEW YORK
PA 2 PENN PLAZA, STE 701, NEW YORK, NY 10121-0701 USA
SN 2160-6455
EI 2160-6463
J9 ACM T INTERACT INTEL
JI ACM Trans. Interact. Intell. Syst.
PD JAN
PY 2016
VL 5
IS 4
SI SI
AR 22
DI 10.1145/2743028
PG 23
WC Computer Science, Artificial Intelligence
SC Computer Science
GA DJ0RJ
UT WOS:000373911800005
DA 2018-01-22
ER
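
Illustrative sketch (not part of the record above): the Nakano et al. abstract combines a regression-based estimate of conversational dominance from speech and gaze features with gaze-transition rules conditioned on participation role and dominance. The Python sketch below shows one minimal way such components can be wired together; the feature set, regression weights, dominance threshold, and transition probabilities are invented placeholders, not the model reported in the paper.

    # Minimal sketch: (1) a linear regression maps per-participant features to
    # a dominance score; (2) a gaze-transition table, keyed by the robot's
    # participation role and the partner's dominance level, yields the next
    # gaze target. All weights and probabilities are invented for illustration.
    import numpy as np

    REG_WEIGHTS = np.array([0.6, 0.3, 0.1])   # speaking time, turns, gaze received
    REG_BIAS = -0.2

    def dominance_score(speaking_ratio, turn_ratio, gaze_received_ratio):
        x = np.array([speaking_ratio, turn_ratio, gaze_received_ratio])
        return float(REG_WEIGHTS @ x + REG_BIAS)

    # P(next gaze target | robot's role, partner's dominance level)
    GAZE_TRANSITIONS = {
        ("speaker", "dominant"):        {"addressee": 0.7, "side": 0.2, "away": 0.1},
        ("speaker", "less_dominant"):   {"addressee": 0.5, "side": 0.3, "away": 0.2},
        ("addressee", "dominant"):      {"speaker": 0.8, "side": 0.1, "away": 0.1},
        ("addressee", "less_dominant"): {"speaker": 0.6, "side": 0.2, "away": 0.2},
    }

    def next_gaze_target(role, partner_dominance, rng=None):
        rng = rng or np.random.default_rng()
        level = "dominant" if partner_dominance > 0.5 else "less_dominant"
        table = GAZE_TRANSITIONS[(role, level)]
        targets = list(table.keys())
        probs = np.array(list(table.values()))
        return str(rng.choice(targets, p=probs))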

PT J
AU Nierhoff, T
Hirche, S
Nakamura, Y
AF Nierhoff, Thomas
Hirche, Sandra
Nakamura, Yoshihiko
TI Spatial adaption of robot trajectories based on Laplacian trajectory
editing
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Robotics; Trajectory adaption; Trajectory retargeting; Trajectory
similarity; Local trajectory properties; Multiresolution approach;
Obstacle avoidance
ID MOTION; OPTIMIZATION; FRAMEWORK
AB Assuming that a robot trajectory is given from a high-level planning or learning
mechanism, it needs to be adapted to react to dynamic environment changes. In this
article we propose a novel approach to deform trajectories while keeping their
local shape similar, which is based on the discrete Laplace-Beltrami operator. The
approach can be readily extended and covers multiple deformation techniques
including fixed waypoints that must be passed, positional constraints for collision
avoidance or a cooperative manipulation scheme for the coordination of multiple
robots. Due to its low computational complexity it allows for real-time trajectory
deformation both on local and global scale and online adaptation to changed
environmental constraints. Simulations illustrate the straightforward combination
of the proposed approach with other established trajectory-related methods like
artificial potential fields or prioritized inverse kinematics. Experiments with the
HRP-4 humanoid successfully demonstrate the applicability in complex daily-life
tasks.
C1 [Nierhoff, Thomas; Hirche, Sandra] Tech Univ Munich, Chair Informat Oriented
Control, D-80290 Munich, Germany.
[Nakamura, Yoshihiko] Univ Tokyo, Dept Mechano Informat, Tokyo, Japan.
RP Nierhoff, T (reprint author), Tech Univ Munich, Chair Informat Oriented Control,
D-80290 Munich, Germany.
EM tn@tum.de; hirche@tum.de; nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU EU Horizon2020 project RAMCIP [643433]; Ministry of Education, Science,
Sports and Culture [20220001]
FX This work was partially supported by the EU Horizon2020 project RAMCIP,
under Grant Agreement No. 643433 and by the Ministry of Education,
Science, Sports and Culture, Grant-in-Aid for Scientific Research (S),
2008-2012, 20220001, "Establishing Human-Machine Communication through
Kinesiology and Linguistics Integration" (PI: Y. Nakamura).
CR ARUN KS, 1987, IEEE T PATTERN ANAL, V9, P699, DOI 10.1109/TPAMI.1987.4767965
Billard A., 2008, SPRINGER HDB ROBOTIC, P1371
Botsch M, 2004, ACM T GRAPHIC, V23, P630, DOI 10.1145/1015706.1015772
Botsch M., 2004, ACM SPECIAL INTEREST, P185
Brock O, 2002, INT J ROBOT RES, V21, P1031, DOI 10.1177/0278364902021012002
Desbrun M, 1999, COMP GRAPH, P317
Dierkes U., 2010, MINIMAL SURFACES, P339
do Carmo Manfredo Perdigao, 1976, DIFFERENTIAL GEOMETR
Eck M., 1995, ANN C SERIES, P173
FLASH T, 1985, J NEUROSCI, V5, P1688
Hilario L, 2011, IEEE INT C INT ROBOT, P1567, DOI 10.1109/IROS.2011.6048168
Hoffmann H., 2009, ICRA, P2587, DOI DOI 10.1109/R0B0T.2009.5152423
Karaman S, 2011, INT J ROBOT RES, V30, P846, DOI 10.1177/0278364911406761
Karni Z, 2000, COMP GRAPH, P279
Kilner J, 2007, SOC NEUROSCI-UK, V2, P158, DOI 10.1080/17470910701428190
Kobbelt LP, 2000, COMPUT GRAPH FORUM, V19, pC249
Kupferberg A, 2011, AI SOC, V26, P339, DOI 10.1007/s00146-010-0314-2
LAVALLE SM, 1998, TECHNICAL REPORT, P98
Lee D, 2011, AUTON ROBOT, V31, P115, DOI 10.1007/s10514-011-9234-3
Levine S., 2012, INT C MACH LEARN
Levy B., 2006, IEEE INT C SHAP MOD, P13, DOI DOI 10.1109/SMI.2006.21
Lipman Y., 2005, International Journal of Shape Modeling, V11, P43, DOI
10.1142/S0218654305000724
Lipman Y., 2004, SMI 2004, P181
Meyer M, 2002, VISMATH, P35
Mombaur K., 2013, MODELING SIMULATION, V18, P165
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
Nierhoff T., 2013, P ANN C RSJ
Nierhoff T., 2014, IEEE RSJ INT C INT R
Nierhoff T., 2012, IEEE INT C CONTR AUT
Nierhoff T., 2013, DGR TAGE
Pastor P, 2009, IEEE INT CONF ROBOT, P1293
Pham Q.C., 2013, INT JOINT C ART INT
Pham Q.-C., 2011, ROBOTICS SCI SYSTEMS
Quinlan S., 1993, IEEE INT C ROB AUT, P802
Reuter M, 2009, COMPUT GRAPH-UK, V33, P381, DOI 10.1016/j.cag.2009.03.005
Schaal S, 2006, ADAPTIVE MOTION OF ANIMALS AND MACHINES, P261, DOI 10.1007/4-
431-31381-8_23
Schulman J, 2014, INT J ROBOT RES, V33, P1251, DOI 10.1177/0278364914528132
Shoemake K., 1985, ACM SIGGRAPH COMPUTE, V19, P245, DOI [DOI
10.1145/325165.325242, 10.1145/325334.325242]
Sorkine O., 2004, SHAPE MODELING INT, P191
Sorkine O, 2007, EUR S GEOM PROC, P109
STRICHARTZ RS, 1983, J FUNCT ANAL, V52, P48, DOI 10.1016/0022-1236(83)90090-3
Taubin G., 1995, ACM SIGGRAPH, P351
Umeyama S, 1991, PAMI, V13, P376, DOI DOI 10.1109/34.88573
von Luxburg U, 2007, STAT COMPUT, V17, P395, DOI 10.1007/s11222-007-9033-z
Wardetzky M., 2007, EUR S GEOM PROC, P33
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
Zhang H, 2010, COMPUT GRAPH FORUM, V29, P1865, DOI 10.1111/j.1467-
8659.2010.01655.x
Zhou K, 2005, ACM T GRAPHIC, V24, P496, DOI 10.1145/1073204.1073219
Zucker M, 2013, INT J ROBOT RES, V32, P1164, DOI 10.1177/0278364913488805
NR 49
TC 0
Z9 0
U1 3
U2 11
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD JAN
PY 2016
VL 40
IS 1
BP 159
EP 173
DI 10.1007/s10514-015-9442-3
PG 15
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA DB8YF
UT WOS:000368802400010
DA 2018-01-22
ER
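
Illustrative sketch (not part of the record above): the core idea of Laplacian trajectory editing is to preserve the discrete Laplacian (local shape) of a trajectory while enforcing new positional constraints, which reduces to a linear least-squares problem. The NumPy sketch below uses a simple umbrella Laplacian and soft waypoint constraints; these simplifications and the constraint weight are assumptions made for brevity, not the paper's exact discrete Laplace-Beltrami formulation.

    # Minimal sketch of Laplacian trajectory editing: keep the Laplacian
    # (shape) coordinates of a trajectory while pulling selected waypoints to
    # new positions, solved as one linear least-squares problem.
    import numpy as np

    def deform_trajectory(points, constraints, weight=10.0):
        """points: (N, d) array; constraints: dict {index: new_position}."""
        n = len(points)
        L = np.zeros((n, n))
        for i in range(1, n - 1):            # interior umbrella operator
            L[i, i - 1] = L[i, i + 1] = -0.5
            L[i, i] = 1.0
        L[0, 0] = L[-1, -1] = 1.0            # simple one-sided ends
        L[0, 1] = L[-1, -2] = -1.0
        delta = L @ points                   # Laplacian (shape) coordinates

        # Stack soft positional constraints below the Laplacian system.
        C = np.zeros((len(constraints), n))
        targets = np.zeros((len(constraints), points.shape[1]))
        for row, (idx, pos) in enumerate(constraints.items()):
            C[row, idx] = weight
            targets[row] = weight * np.asarray(pos)

        A = np.vstack([L, C])
        b = np.vstack([delta, targets])
        deformed, *_ = np.linalg.lstsq(A, b, rcond=None)
        return deformed

    # Example: drag the midpoint of a straight line upward, endpoints fixed.
    traj = np.stack([np.linspace(0.0, 1.0, 21), np.zeros(21)], axis=1)
    edited = deform_trajectory(traj, {0: traj[0], 10: [0.5, 0.3], 20: traj[-1]})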

PT J
AU Takano, W
Nakamura, Y
AF Takano, Wataru
Nakamura, Yoshihiko
TI Real-time Unsupervised Segmentation of human whole-body motion and its
application to humanoid robot acquisition of motion symbols
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Motion segmentation; Motion primitive; Competitive learning
ID SPEECH SEGMENTATION; WORD DISCOVERY; IMITATION
AB An interactive loop between motion recognition and motion generation is a
fundamental mechanism for humans and humanoid robots. We have been developing an
intelligent framework for motion recognition and generation based on symbolizing
motion primitives. The motion primitives are encoded into Hidden Markov Models
(HMMs), which we call "motion symbols". However, to determine the motion primitives
to use as training data for the HMMs, this framework requires a manual segmentation
of human motions. Essentially, a humanoid robot is expected to participate in daily
life and must learn many motion symbols to adapt to various situations. For this
use, manual segmentation is cumbersome and impractical for humanoid robots. In this
study, we propose a novel approach to segmentation, the Real-time Unsupervised
Segmentation (RUS) method, which comprises three phases. In the first phase, short
human movements are encoded into feature HMMs. Seamless human motion can be
converted to a sequence of these feature HMMs. In the second phase, the causality
between the feature HMMs is extracted. The causality data make it possible to
predict movement from observation. In the third phase, movements having a large
prediction uncertainty are designated as the boundaries of motion primitives. In
this way, human whole-body motion can be segmented into a sequence of motion
primitives. This paper also describes an application of RUS to AUtonomous
Symbolization of motion primitives (AUS). Each derived motion primitive is
classified into an HMM for a motion symbol, and parameters of the HMMs are
optimized by using the motion primitives as training data in competitive learning.
The HMMs are gradually optimized in such a way that the HMMs can abstract similar
motion primitives. We tested the RUS and AUS frameworks on captured human whole-
body motions and demonstrated the validity of the proposed framework. (C) 2015 The
Authors. Published by Elsevier B.V. This is an open access article under the CC BY
license (http://creativecommons.org/licenses/by/4.0/).
C1 [Takano, Wataru; Nakamura, Yoshihiko] Univ Tokyo, Mechanoinformat, Bunkyo Ku,
Tokyo 1138656, Japan.
RP Takano, W (reprint author), Univ Tokyo, Mechanoinformat, Bunkyo Ku, 7-3-1 Hongo,
Tokyo 1138656, Japan.
EM takano@ynl.t.u-tokyo.ac.jp; nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU Japan Society for the Promotion of Science [26700021]; Strategic
Information and Communications R&D Promotion Program of the Ministry of
Internal Affairs and Communications [142103011]
FX This research was partially supported by a Grant-in-Aid for Young
Scientists (A) (No. 26700021) from the Japan Society for the Promotion
of Science, and by the Strategic Information and Communications R&D
Promotion Program (No. 142103011) of the Ministry of Internal Affairs
and Communications.
CR Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Brent MR, 1999, TRENDS COGN SCI, V3, P294, DOI 10.1016/S1364-6613(99)01350-9
Brent MR, 1999, MACH LEARN, V34, P71, DOI 10.1023/A:1007541817488
Cairns P, 1997, COGNITIVE PSYCHOL, V33, P111, DOI 10.1006/cogp.1997.0649
Calinon S, 2005, P 22 INT C MACH LEAR, P105, DOI DOI 10.1145/1102351.1102365
Chiappa S., 2008, ADV NEURAL INFORM PR, V21, P297
Christiansen MH, 1999, TRENDS COGN SCI, V3, P289, DOI 10.1016/S1364-
6613(99)01356-X
Donald M., 2011, ORIGIN MODERN MIND
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Gallese V, 1998, TRENDS COGN SCI, V2, P493, DOI 10.1016/S1364-6613(98)01262-5
Grave K, 2012, IEEE INT C INT ROBOT, P751, DOI 10.1109/IROS.2012.6386116
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Janus B., 2005, P IEEE INT C ADV ROB, P411
Kim TH, 2003, ACM T GRAPHIC, V22, P392, DOI 10.1145/882262.882283
Kohlmorgen J., 2001, ADV NEURAL INFORM PR, V14
Kulic D, 2009, IEEE T ROBOT, V25, P1158, DOI 10.1109/TRO.2009.2026508
Kuniyoshi Y., 1994, ROBOTICS AUTOMATION, V10
Mataric MJ, 2000, IEEE INTELL SYST APP, V15, P18, DOI 10.1109/5254.867908
Mori T., 2004, P INT WORKSH MAN MAS, P207
Morimoto J, 1999, ADV ROBOTICS, V13, P267, DOI 10.1163/156855399X01314
Rabiner L., 1993, PRENTICE HALL SIGNAL
Redington M, 1997, TRENDS COGN SCI, V1, P273, DOI 10.1016/S1364-6613(97)01081-4
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Shiratori T., 2004, P 6 IEEE INT C AUT F, V5, P857
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Wang T.S., 2011, P 2 IEEE PAC RIM C M, V10, P174
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
NR 28
TC 5
Z9 5
U1 0
U2 9
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JAN
PY 2016
VL 75
BP 260
EP 272
DI 10.1016/j.robot.2015.09.021
PN B
PG 13
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA DA4IK
UT WOS:000367763400011
OA gold
DA 2018-01-22
ER
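
Illustrative sketch (not part of the record above): the third phase of the described RUS method places motion-primitive boundaries where the prediction of the next movement is most uncertain. The Python sketch below illustrates that principle on a simplified representation in which the short movements are already recognized as discrete labels and the causality is a smoothed bigram table; the entropy threshold and the discrete-label simplification are assumptions, not the paper's HMM-based formulation.

    # Minimal sketch: learn a smoothed bigram ("causality") table over
    # recognized short-movement labels, then place a segment boundary wherever
    # the entropy of the predicted next label exceeds a threshold.
    import numpy as np

    def learn_causality(sequences, n_labels, alpha=1.0):
        # Row-normalized bigram counts with additive smoothing.
        counts = np.full((n_labels, n_labels), alpha)
        for seq in sequences:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1.0
        return counts / counts.sum(axis=1, keepdims=True)

    def segment(stream, causality, entropy_threshold=1.0):
        boundaries = []
        for t in range(1, len(stream)):
            p = causality[stream[t - 1]]
            entropy = -np.sum(p * np.log(p))
            if entropy > entropy_threshold:   # prediction is uncertain here
                boundaries.append(t)
        return boundaries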

PT J
AU Gams, A
Petric, T
Do, M
Nemec, B
Morimoto, J
Asfour, T
Ude, A
AF Gams, Andrej
Petric, Tadej
Do, Martin
Nemec, Bojan
Morimoto, Jun
Asfour, Tamim
Ude, Ales
TI Adaptation and coaching of periodic motion primitives through physical
and visual interaction
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Dynamic movement primitives; Force control; Coaching; Human-robot
interaction
ID DYNAMIC MOVEMENT PRIMITIVES; IMITATION; MANIPULATION; SKILLS; TASKS
AB In this paper we propose and evaluate a control system to (1) learn and (2)
adapt robot motion for continuous non-rigid contact with the environment. We
present the approach in the context of wiping surfaces with robots. Our approach is
based on learning by demonstration. First an initial periodic motion, covering the
essence of the wiping task, is transferred from a human to a robot. The system
extracts and learns one period of motion. Once the user/demonstrator is content
with the motion, the robot seeks and establishes contact with a given surface,
maintaining a predefined force of contact through force feedback. The shape of the
surface is encoded for the complete period of motion, but the robot can adapt to a
different surface, perturbations or obstacles. The novelty stems from the fact that
the feedforward component is learned and encoded in a dynamic movement primitive.
By using the feedforward component, the feedback component is greatly reduced if
not completely canceled. Finally, if the user is not satisfied with the periodic
pattern, he/she can change parts of the motion through predefined gestures or through
physical contact, in the manner of a tutor or a coach.
The complete system thus allows not only a transfer of motion, but a transfer of
motion with matching correspondences, i.e. wiping motion is constrained to maintain
physical contact with the surface to be wiped. The interface for both learning and
adaptation is simple and intuitive and allows for fast and reliable knowledge
transfer to the robot.
Simulated and real world results in the application domain of wiping a surface
are presented on three different robotic platforms. Results of the three robotic
platforms, namely a 7 degree-of-freedom Kuka LWR-4 robot, the ARMAR-IIIa humanoid
platform and the Sarcos CB-i humanoid robot, depict different methods of adaptation
to the environment and coaching. (C) 2015 Elsevier B.V. All rights reserved.
C1 [Gams, Andrej; Petric, Tadej; Nemec, Bojan; Ude, Ales] Jozef Stefan Inst, Dept
Automat Biocybernet & Robot, Humanoid & Cognit Robot Lab, Ljubljana 1000, Slovenia.
[Do, Martin; Asfour, Tamim] Karlsruhe Inst Technol, Inst Anthropomat & Robot,
High Performance Humanoid Technol Lab, D-76131 Karlsruhe, Germany.
[Morimoto, Jun; Ude, Ales] ATR Computat Neurosci Labs, Dept Brain Robot
Interface, Kyoto 6190288, Japan.
[Petric, Tadej] Ecole Polytech Fed Lausanne, Biorobot Lab, Stn 14, CH-1015
Lausanne, Vaud, Switzerland.
RP Gams, A (reprint author), Jozef Stefan Inst, Dept Automat Biocybernet & Robot,
Humanoid & Cognit Robot Lab, Jamova Cesta 39, Ljubljana 1000, Slovenia.
EM andrej.gams@ijs.si; tadej.petric@ijs.si; martin.do@kit.edu;
bojan.nemec@ijs.si; xmorimo@atr.jp; asfour@kit.edu; ales.ude@ijs.si
OI Petric, Tadej/0000-0002-3407-4206; Ude, Ales/0000-0003-3677-3972; Gams,
Andrej/0000-0002-9803-3593
FU European Community's [270273]; NICT, Japan Trust International Research
Cooperation Program; SRPBS from MEXT/AMED; MEXT KAKENHI [23120004];
MIC-SCOPE; JST-SICP; ImPACT Program of Council for Science, Technology
and Innovation
FX This work was supported by the European Community's Seventh Framework
Programme FP7/2007-2013 (Specific Programme Cooperation, Theme 3,
Information and Communication Technologies) grant agreement no. 270273,
Xperience. Additionally, A.U. was supported by NICT, Japan Trust
International Research Cooperation Program; J.M. was supported by SRPBS
from MEXT/AMED; by MEXT KAKENHI 23120004; by MIC-SCOPE; by JST-SICP; by
ImPACT Program of Council for Science, Technology and Innovation.
CR Abu-Dakka FJ, 2015, AUTON ROBOT, V39, P199, DOI 10.1007/s10514-015-9435-2
Atkeson CG, 1997, ARTIF INTELL REV, V11, P11, DOI 10.1023/A:1006559212014
Calinon S., 2012, IEEE RAS INT C HUM R, P323
Calinon Sylvain, 2007, P ACM IEEE INT C HUM, P255, DOI DOI
10.1145/1228716.1228751
Calinon S, 2010, IEEE ROBOT AUTOM MAG, V17, P44, DOI 10.1109/MRA.2010.936947
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Dillmann R, 2004, ROBOT AUTON SYST, V47, P109, DOI 10.1016/j.robot.2004.03.005
Do M., 2014, IEEE INT C ROB AUT, P1858
Do M., 2011, P 2011 14 EUR C POW, P1
Ernesti J., 2012, IEEE RAS INT C HUM R, P57
Gams A., IEEE T ROBOT, V30
Gams A., 2014, IEEE RAS INT C HUM R, P166
Gams A., 2010, IEEE RAS INT C HUM R, P560
Gams A, 2009, AUTON ROBOT, V27, P3, DOI 10.1007/s10514-009-9118-y
Gribovskaya E, 2011, INT J ROBOT RES, V30, P80, DOI 10.1177/0278364910376251
Gruebler A., 2011, IEEE RAS INT C HUM R, P466
Hoffmann H., 2009, ICRA, P2587, DOI DOI 10.1109/R0B0T.2009.5152423
HOGAN N, 1985, J DYN SYST-T ASME, V107, P1, DOI 10.1115/1.3140702
Ijspeert A., NEURAL COMPUT, V25
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Kalakrishnan M, 2011, IEEE INT C INT ROBOT, P4639, DOI 10.1109/IROS.2011.6048825
Khansari-Zadeh SM, 2010, IEEE INT C INT ROBOT, P2676, DOI
10.1109/IROS.2010.5651259
Kober J, 2012, AUTON ROBOT, V33, P361, DOI 10.1007/s10514-012-9290-3
Koropouli V, 2011, IEEE INT C INT ROBOT, P344, DOI 10.1109/IROS.2011.6048335
Kulvicius T, 2013, ROBOT AUTON SYST, V61, P1450, DOI 10.1016/j.robot.2013.07.009
Kulvicius T, 2012, IEEE T ROBOT, V28, P145, DOI 10.1109/TRO.2011.2163863
Lee D, 2011, AUTON ROBOT, V31, P115, DOI 10.1007/s10514-011-9234-3
Ljung L., 1986, THEORY PRACTICE RECU
Matsubara T, 2011, NEURAL NETWORKS, V24, P493, DOI 10.1016/j.neunet.2011.02.004
Nemec B, 2012, ROBOTICA, V30, P837, DOI 10.1017/S0263574711001056
Ortenzi v., 2014, IEEE RAS INT C HUM R, P407
Pastor P., 2012, IEEE RAS INT C HUM R, P309
Pastor P, 2011, IEEE INT C INT ROBOT, P365, DOI 10.1109/IROS.2011.6048819
Peters J, 2008, NEURAL NETWORKS, V21, P682, DOI 10.1016/j.neunet.2008.02.003
Petric T., 2014, IEEE INT C ROB AUT I, P4770
Petric T, 2011, INT J ROBOT RES, V30, P1775, DOI 10.1177/0278364911421511
Riley M., 2006, INT C HUM ROB, P567
Rozo L, 2013, 2013 9TH INTERNATIONAL WORKSHOP ON ROBOT MOTION AND CONTROL
(ROMOCO), P227, DOI 10.1109/RoMoCo.2013.6614613
Rozo L, 2013, INTEL SERV ROBOT, V6, P33, DOI 10.1007/s11370-012-0128-9
Sauser EL, 2012, ROBOT AUTON SYST, V60, P55, DOI 10.1016/j.robot.2011.08.012
Stulp F, 2012, IEEE T ROBOT, V28, P1360, DOI 10.1109/TRO.2012.2210294
Tamosiunaite M, 2011, ROBOT AUTON SYST, V59, P910, DOI
10.1016/j.robot.2011.07.004
Ude A, 2004, ROBOT AUTON SYST, V47, P93, DOI 10.1016/j.robot.2004.03.004
Ude A, 2010, IEEE T ROBOT, V26, P800, DOI 10.1109/TRO.2010.2065430
Villani L., 2008, HDB ROBOTICS, P161
Wada Y, 2004, NEURAL NETWORKS, V17, P353, DOI 10.1016/j.neunet.2003.11.009
NR 46
TC 6
Z9 6
U1 1
U2 7
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JAN
PY 2016
VL 75
BP 340
EP 351
DI 10.1016/j.robot.2015.09.011
PN B
PG 12
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA DA4IK
UT WOS:000367763400017
OA gold
DA 2018-01-22
ER
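
Illustrative sketch (not part of the record above): a key point of the Gams et al. abstract is that a learned, phase-indexed feedforward term gradually takes over from force feedback, so that the feedback component shrinks while a desired contact force is maintained. The Python sketch below shows that idea with a simple phase-binned feedforward table updated from the force error on every cycle; the bin count, gain, and learning rate are invented values, and the table is a stand-in for the dynamic movement primitive encoding used in the paper.

    # Minimal sketch of learning a feedforward correction for periodic
    # surface-contact (wiping) motion: the proportional force feedback computed
    # in one period is partly stored as feedforward for the next period.
    import numpy as np

    class PeriodicForceFeedforward:
        def __init__(self, n_bins=50, k_p=0.002, learn_rate=0.5):
            self.ff = np.zeros(n_bins)     # feedforward offset per phase bin
            self.n_bins = n_bins
            self.k_p = k_p                 # force-feedback gain [m/N]
            self.learn_rate = learn_rate

        def command(self, phase, f_desired, f_measured):
            # Return the normal-direction offset added to the nominal motion.
            i = int(phase / (2.0 * np.pi) * self.n_bins) % self.n_bins
            feedback = self.k_p * (f_desired - f_measured)
            # Store part of this cycle's feedback as next cycle's feedforward,
            # so the feedback term shrinks over repetitions.
            self.ff[i] += self.learn_rate * feedback
            return self.ff[i] + feedback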

PT J
AU Shiomi, M
Hagita, N
AF Shiomi, Masahiro
Hagita, Norihiro
TI Finding a person with a wearable acceleration sensor using a 3D position
tracking system in daily environments
SO ADVANCED ROBOTICS
LA English
DT Article
DE person identification; acceleration sensor
ID SMALL HUMANOID ROBOT; SURFACE IDENTIFICATION
AB Person identification with accurate position information is essential for
providing location-based services in real environments, such as a shopping mall.
For this purpose, we propose a method that integrates 3D position information from
environmental depth sensors and acceleration data from wearable devices to
anonymously gather the trajectories of people who have wearable devices as well as
others. Our proposed method identifies a person who has a wearable device by
comparing two time series: the acceleration data from the device and the position
information. To do this, we extracted the behaviours of each axis using the changes
of each bit of acceleration data at certain time periods. We evaluated our method
with data collected at a shopping mall and a children's playroom to investigate its
effectiveness and robustness in different environments. Our evaluation results
showed that it achieved an average identification of 85%, which is better than
several alternative methods.
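A minimal sketch of the matching step described above, assuming the wearable's acceleration magnitude and the 3D trajectories from the environmental depth sensors are already resampled to a common rate; the correlation-based score and the function names are illustrative assumptions, not the identification algorithm from the paper.

import numpy as np

def accel_from_positions(positions, dt):
    # Differentiate a tracked 3D trajectory twice to get an acceleration magnitude.
    acc = np.gradient(np.gradient(positions, dt, axis=0), dt, axis=0)
    return np.linalg.norm(acc, axis=1)

def match_device_to_track(device_acc, tracked_trajectories, dt):
    # Return the index of the tracked trajectory whose acceleration profile
    # correlates best with the wearable device's acceleration magnitude.
    scores = []
    for positions in tracked_trajectories:
        track_acc = accel_from_positions(positions, dt)
        n = min(len(track_acc), len(device_acc))
        scores.append(np.corrcoef(device_acc[:n], track_acc[:n])[0, 1])
    return int(np.argmax(scores)), scores

In practice such a comparison would be run over sliding windows so the match can be re-evaluated as new data arrives.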
C1 [Shiomi, Masahiro; Hagita, Norihiro] Adv Telecommun Res Inst Int, Intelligent
Robot & Commun Labs, Kyoto, Japan.
RP Shiomi, M (reprint author), Adv Telecommun Res Inst Int, Intelligent Robot &
Commun Labs, Kyoto, Japan.
EM m-shiomi@atr.jp
OI SHIOMI, Masahiro/0000-0003-4338-801X
FU Strategic Information and Communications R&D Promotion Programme
(SCOPE), Ministry of Internal Affairs and Communications [132107010]
FX This research was supported by the Strategic Information and
Communications R&D Promotion Programme (SCOPE), Ministry of Internal
Affairs and Communications [132107010].
CR Brscic D, 2013, IEEE T HUM-MACH SYST, V43, P522, DOI 10.1109/THMS.2013.2283945
Woo Cheol C, 2003, 2003 IEEE C ULTR SYS, P389
Cooney M, 2014, INT J SOC ROBOT, V6, P173, DOI 10.1007/s12369-013-0212-0
Fox D, 2003, IEEE PERVAS COMPUT, V2, P24, DOI 10.1109/MPRV.2003.1228524
Giguere P, 2011, IEEE T ROBOT, V27, P534, DOI 10.1109/TRO.2011.2119910
Ikeda T, 2014, J AMB INTEL HUM COMP, V5, P645, DOI 10.1007/s12652-013-0191-x
Kaplan E.D., 2006, UNDERSTANDING GPS PR
LaMarca, 2005, UBICOMP 2005 P 7 INT, P87
Lao S, 2005, LECT NOTES COMPUTER, P339
Matsumura R, 2015, ADV ROBOTICS, V29, P469, DOI 10.1080/01691864.2014.996601
Matsumura R, 2014, IEEE T HUM-MACH SYST, V44, P169, DOI
10.1109/THMS.2013.2296872
Minami M, 2004, LECT NOTES COMPUT SC, V3205, P347
Morioka K, 2013, INT J SMART SENS INT, V6, P2040
Shigeta O, 2008, IEEE RSJ INT C INTEL, V2008, P3872
Shiomi M, 2014, ADV ROBOTICS, V28, P441, DOI 10.1080/01691864.2013.876932
Ward A, 1997, IEEE PERS COMMUN, V4, P42, DOI 10.1109/98.626982
Winkler T., 2014, ACM COMPUT SURV, V47, P1, DOI DOI 10.1145/2545883
Yang J., 2009, P 1 INT WORKSH INT M, P1
NR 18
TC 2
Z9 2
U1 0
U2 11
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD DEC 2
PY 2015
VL 29
IS 23
BP 1563
EP 1574
DI 10.1080/01691864.2015.1095651
PG 12
WC Robotics
SC Robotics
GA CW9XE
UT WOS:000365350700002
DA 2018-01-22
ER

PT J
AU Matsubara, T
Uchikata, A
Morimoto, J
AF Matsubara, Takamitsu
Uchikata, Akimasa
Morimoto, Jun
TI Spatiotemporal synchronization of biped walking patterns with multiple
external inputs by style-phase adaptation
SO BIOLOGICAL CYBERNETICS
LA English
DT Article
DE Style-phase adaptation; Spatiotemporal synchronization; Biped walking;
Teleoperation
ID POLICY GRADIENT-METHOD; LOCOMOTION CONTROL; HUMANOID ROBOTS; CPG;
OSCILLATORS; PRIMITIVES
AB In this paper, we propose a framework for generating coordinated periodic
movements of robotic systems with multiple external inputs. We developed an
adaptive pattern generator model that is composed of a two-factor observation model
with a style parameter and phase dynamics with a phase variable. The style
parameter controls the spatial patterns of the generated trajectories, and the
phase variable manages their temporal profiles. By exploiting the style-phase
separation in the pattern generation, we can independently design adaptation
schemes for the spatial and temporal profiles of the pattern generator to multiple
external inputs. To validate the effectiveness of our proposed method, we applied
it to a user-exoskeleton model to achieve user-adaptive walking assistance for
which the exoskeleton robot's movements need to be coordinated with the user's
walking patterns and the environment. As a result, the exoskeleton robot successfully
performed stable biped walking behaviors for walking assistance even when the style
of the observed walking pattern and the period were suddenly changed.
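The record above separates a style parameter (spatial profile) from a phase variable (temporal profile). As a rough illustration of the temporal-adaptation half of that idea only, the following sketch uses a standard adaptive-frequency phase oscillator in the spirit of the Righetti et al. reference cited below; the gains and the scalar input are illustrative assumptions, not the pattern-generator model from the paper.

import numpy as np

def adaptive_phase_oscillator(signal, dt, omega0=2*np.pi, k_phi=20.0, k_omega=5.0):
    # Synchronize the phase (and, more slowly, the frequency) to a periodic scalar input.
    phi, omega = 0.0, omega0
    phases = []
    for e in signal:
        coupling = -e * np.sin(phi)      # drives the oscillator toward the input's phase
        phi += dt * (omega + k_phi * coupling)
        omega += dt * k_omega * coupling
        phases.append(phi % (2 * np.pi))
    return np.array(phases), omega

# Example: adapt to a 1.25 Hz input starting from a 1 Hz initial guess.
t = np.arange(0.0, 20.0, 0.01)
phases, omega_hat = adaptive_phase_oscillator(np.sin(2 * np.pi * 1.25 * t), dt=0.01)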
C1 [Matsubara, Takamitsu; Uchikata, Akimasa; Morimoto, Jun] ATR Computat Neurosci
Labs, Dept Brain Robot Interface, Kyoto, Japan.
[Matsubara, Takamitsu; Uchikata, Akimasa] Nara Inst Sci & Technol NAIST, Nara,
Japan.
RP Matsubara, T (reprint author), ATR Computat Neurosci Labs, Dept Brain Robot
Interface, Kyoto, Japan.
EM takam-m@is.naist.jp; xmorimo@atr.jp
FU JSPS KAKENHI [25540085]
FX This study is the result of Development of BMI Technologies for Clinical
Application carried out under SRPBS, MEXT; by MIC-SCOPE; International
Cooperative Program, JST and by JSPS and MIZS: Japan-Slovenia research
Cooperative Program; by MEXT KAKENHI 23120004; and by JSPS KAKENHI Grant
Number 25540085; by ImPACT Program of Council for Science, Technology
and Innovation (Cabinet Office, Government of Japan); by the project
commissioned by the New Energy and Industrial Technology Development
Organization (NEDO).
CR Andre J, 2015, J INTELL ROBOT SYST, V80, P625, DOI 10.1007/s10846-015-0196-0
Brand M, 2000, P 27 ANN C COMP GRAP, P183
Degallier S, 2011, AUTON ROBOT, V31, P155, DOI 10.1007/s10514-011-9235-2
Endo G, 2004, IEEE INT CONF ROBOT, P3036, DOI 10.1109/ROBOT.2004.1307523
Endo G, 2008, INT J ROBOT RES, V27, P213, DOI 10.1177/0278364907084980
Fukuoka Y, 2003, INT J ROBOT RES, V22, P187, DOI 10.1177/0278364903022003004
Gams A, 2009, AUTON ROBOT, V27, P3, DOI 10.1007/s10514-009-9118-y
Hase K, 1998, ANTHROPOL SCI, V106, P327, DOI 10.1537/ase.106.327
Hyon S. -H., 2011, P IEEE RSJ INT C INT, P2715
Kanda T., 2003, P INT JOINT C ART IN, P177
Kotosaka S, 2000, P INT S AD MOT AN MA, P1
Li C, 2014, FRONT NEUROROBOTICS, V8, P1, DOI 10.3389/fnbot.2014.00023
Matos V, 2014, ROBOT AUTON SYST, V62, P1669, DOI 10.1016/j.robot.2014.08.010
Matsubara T, 2006, ROBOT AUTON SYST, V54, P911, DOI 10.1016/j.robot.2006.05.012
Matsubara T, 2013, IEEE T BIO-MED ENG, V60, P2205, DOI 10.1109/TBME.2013.2250502
Matsubara T, 2012, NEURAL NETWORKS, V25, P191, DOI 10.1016/j.neunet.2011.08.008
Matsubara T, 2011, NEURAL NETWORKS, V24, P493, DOI 10.1016/j.neunet.2011.02.004
MATSUOKA K, 1985, BIOL CYBERN, V52, P367, DOI 10.1007/BF00449593
Michael M., 2007, PAR DISTR PROC S 200, P1
Morimoto J, 2008, IEEE T ROBOT, V24, P185, DOI 10.1109/TRO.2008.915457
Morimoto J, 2012, IEEE INT CONF ROBOT, P3909, DOI 10.1109/ICRA.2012.6225236
Nakamura Y, 2007, NEURAL NETWORKS, V20, P723, DOI 10.1016/j.neunet.2007.01.002
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nassour J, 2014, BIOL CYBERN, V108, P291, DOI 10.1007/s00422-014-0592-8
Rasmussen CE, 2005, ADAPT COMPUT MACH LE, P1
Righetti L, 2006, PHYSICA D, V216, P269, DOI 10.1016/j.physd.2006.02.009
Righetti L, 2006, IEEE INT CONF ROBOT, P1585, DOI 10.1109/ROBOT.2006.1641933
Ronsse R., 2011, P IEEE INT REH ROB I, P1
Ronsse R, 2010, P IEEE RAS-EMBS INT, P668, DOI 10.1109/BIOROB.2010.5628021
Silva P, 2014, ROBOT AUTON SYST, V62, P1531, DOI 10.1016/j.robot.2014.05.008
Strogatz S. H., 1994, NONLINEAR DYNAMICS C
Suzuki K, 2007, ADV ROBOTICS, V21, P1441, DOI 10.1163/156855307781746061
TAGA G, 1991, BIOL CYBERN, V65, P147, DOI 10.1007/BF00198086
Taylor G. W., 2009, P 26 ANN INT C MACH, P1025
Tenenbaum JB, 2000, NEURAL COMPUT, V12, P1247, DOI 10.1162/089976600300015349
Tsuchiya K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1745
Wang J. M., 2007, P 24 INT C MACH LEAR, P975, DOI DOI 10.1145/1273496.1273619
Zhang X., 2009, P IEEE INT C ROB AUT, P659
NR 38
TC 2
Z9 2
U1 0
U2 8
PU SPRINGER
PI NEW YORK
PA 233 SPRING ST, NEW YORK, NY 10013 USA
SN 0340-1200
EI 1432-0770
J9 BIOL CYBERN
JI Biol. Cybern.
PD DEC
PY 2015
VL 109
IS 6
BP 597
EP 610
DI 10.1007/s00422-015-0663-5
PG 14
WC Computer Science, Cybernetics; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA CX2GH
UT WOS:000365514300003
PM 26459123
DA 2018-01-22
ER

PT J
AU Suzuki, Y
Galli, L
Ikeda, A
Itakura, S
Kitazaki, M
AF Suzuki, Yutaka
Galli, Lisa
Ikeda, Ayaka
Itakura, Shoji
Kitazaki, Michiteru
TI Measuring empathy for human and robot hand pain using
electroencephalography
SO SCIENTIFIC REPORTS
LA English
DT Article
ID NEURAL PROCESSES; OTHERS; PERCEPTION; ERP; RESPONSES; EMOTION; SELF
AB This study provides the first physiological evidence of humans' ability to
empathize with robot pain and highlights the difference in empathy for humans and
robots. We performed electroencephalography in 15 healthy adults who observed
either human- or robot-hand pictures in painful or non-painful situations such as a
finger cut by a knife. We found that the descending phase of the P3 component was
larger for the painful stimuli than the non-painful stimuli, regardless of whether
the hand belonged to a human or robot. In contrast, the ascending phase of the P3
component at the frontal-central electrodes was increased by painful human stimuli
but not by painful robot stimuli, although the ANOVA interaction was only marginally
significant. These results suggest that we empathize with humanoid robots during late
top-down processing much as we do with other humans. However, the beginning of the
top-down process of empathy is weaker for robots than for humans.
C1 [Suzuki, Yutaka; Kitazaki, Michiteru] Toyohashi Univ Technol, Dept Comp Sci &
Engn, Toyohashi, Aichi, Japan.
[Galli, Lisa] Free Univ Berlin, Berlin, Germany.
[Ikeda, Ayaka; Itakura, Shoji] Kyoto Univ, Grad Sch Letters, Dept Psychol,
Kyoto, Japan.
RP Kitazaki, M (reprint author), Toyohashi Univ Technol, Dept Comp Sci & Engn,
Toyohashi, Aichi, Japan.
EM mich@cs.tut.ac.jp
FU [25245067]; [25240020]; [26240043]
FX This research was supported by a Grant-in-Aid for Scientific Research
(A) #25245067, #25240020 to S.I., and #26240043 to M.K.
CR Akitsuki Y, 2009, NEUROIMAGE, V47, P722, DOI 10.1016/j.neuroimage.2009.04.091
Avenanti A, 2005, NAT NEUROSCI, V8, P955, DOI 10.1038/nn1481
Baron-Cohen S, 2002, TRENDS COGN SCI, V6, P248, DOI 10.1016/S1364-6613(02)01904-
6
Bartneck C, 2008, INTERACT STUD, V9, P415, DOI 10.1075/is.9.3.04bar
BENJAMINI Y, 1995, J ROY STAT SOC B MET, V57, P289
Bernhardt BC, 2012, ANNU REV NEUROSCI, V35, P1, DOI 10.1146/annurev-neuro-
062111-150536
Cheng Y, 2008, NEUROIMAGE, V40, P1833, DOI 10.1016/j.neuroimage.2008.01.064
Cheng YW, 2007, CURR BIOL, V17, P1708, DOI 10.1016/j.cub.2007.09.020
DAVIS MH, 1983, J PERS SOC PSYCHOL, V44, P113, DOI 10.1037/0022-3514.44.1.113
Decety J, 2008, NEUROPSYCHOLOGIA, V46, P2607, DOI
10.1016/j.neuropsychologia.2008.05.026
Decety J, 2006, THESCIENTIFICWORLDJO, V6, P1146, DOI 10.1100/tsw.2006.221
Decety Jean, 2004, Behav Cogn Neurosci Rev, V3, P71, DOI
10.1177/1534582304267187
Decety J, 2010, DEV NEUROSCI-BASEL, V32, P257, DOI 10.1159/000317771
Decety J, 2010, NEUROIMAGE, V50, P1676, DOI 10.1016/j.neuroimage.2010.01.025
Fan Y, 2008, NEUROPSYCHOLOGIA, V46, P160, DOI
10.1016/j.neuropsychologia.2007.07.023
Gu XS, 2007, NEUROIMAGE, V36, P256, DOI 10.1016/j.neuroimage.2007.02.025
Hajcak G, 2010, DEV NEUROPSYCHOL, V35, P129, DOI 10.1080/87565640903526504
Hamlin JK, 2007, NATURE, V450, P557, DOI 10.1038/nature06288
Han SH, 2008, BRAIN RES, V1196, P85, DOI 10.1016/j.brainres.2007.12.062
Hoffmann L, 2009, LECT NOTES ARTIF INT, V5773, P159, DOI 10.1007/978-3-642-
04380-2_19
Ibanez A, 2011, BRAIN RES, V1398, P72, DOI 10.1016/j.brainres.2011.05.014
Ikezawa S, 2014, SOC COGN AFFECT NEUR, V9, P1561, DOI 10.1093/scan/nst148
Jackson PL, 2005, NEUROIMAGE, V24, P771, DOI 10.1016/j.neuroimage.2004.09.006
Kanakogi Y, 2013, PLOS ONE, V8, DOI 10.1371/journal.pone.0065292
Lamm C, 2007, PLOS ONE, V2, DOI 10.1371/journal.pone.0001292
Lamm C, 2007, J COGNITIVE NEUROSCI, V19, P42, DOI 10.1162/jocn.2007.19.1.42
Lamm C, 2011, NEUROIMAGE, V54, P2492, DOI 10.1016/j.neuroimage.2010.10.014
Li W, 2010, NEUROSCI LETT, V469, P328, DOI 10.1016/j.neulet.2009.12.021
Lyu ZY, 2014, EXP BRAIN RES, V232, P2731, DOI 10.1007/s00221-014-3952-7
Mancini F, 2011, PSYCHOL SCI, V22, P325, DOI 10.1177/0956797611398496
Mella N, 2012, FRONT PSYCHOL, V3, DOI 10.3389/fpsyg.2012.00501
Meng J, 2013, NEUROIMAGE, V72, P164, DOI 10.1016/j.neuroimage.2013.01.024
Meng J, 2012, EXP BRAIN RES, V220, P277, DOI 10.1007/s00221-012-3136-2
Minami T, 2009, NEUROREPORT, V20, P1471, DOI 10.1097/WNR.0b013e3283321cfb
Nass C. I., 1997, Human values and the design of computer technology, P137
Olofsson JK, 2008, BIOL PSYCHOL, V77, P247, DOI 10.1016/j.biopsycho.2007.11.006
Nass C., 1996, MEDIA EQUATION PEOPL
Rosenthal-von der Putten AM, 2014, COMPUT HUM BEHAV, V33, P201, DOI
10.1016/j.chb.2014.01.004
Rosenthal-von der Putten AM, 2013, INT J SOC ROBOT, V5, P17, DOI 10.1007/s12369-
012-0173-8
Saarela MV, 2007, CEREB CORTEX, V17, P230, DOI 10.1093/cercor/bhj141
Sakurai S., 1988, B NARA U ED CULTURAL, V37, P149
Scarf D, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0042698
Sessa P, 2014, SOC COGN AFFECT NEUR, V9, P454, DOI 10.1093/scan/nst003
Sheng F, 2012, NEUROIMAGE, V61, P786, DOI 10.1016/j.neuroimage.2012.04.028
Singer T, 2004, SCIENCE, V303, P1157, DOI 10.1126/science.1093535
von der Putten AM, 2010, COMPUT HUM BEHAV, V26, P1641, DOI
10.1016/j.chb.2010.06.012
Wakabayashi Akio, 2006, Shinrigaku Kenkyu, V77, P271
NR 47
TC 8
Z9 10
U1 2
U2 20
PU NATURE PUBLISHING GROUP
PI LONDON
PA MACMILLAN BUILDING, 4 CRINAN ST, LONDON N1 9XW, ENGLAND
SN 2045-2322
J9 SCI REP-UK
JI Sci Rep
PD NOV 3
PY 2015
VL 5
AR 15924
DI 10.1038/srep15924
PG 9
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA CV0SZ
UT WOS:000363964800001
PM 26525705
OA gold
DA 2018-01-22
ER

PT J
AU Somlor, S
Hartanto, RS
Schmitz, A
Sugano, S
AF Somlor, Sophon
Hartanto, Richard Sahala
Schmitz, Alexander
Sugano, Shigeki
TI A novel tri-axial capacitive-type skin sensor
SO ADVANCED ROBOTICS
LA English
DT Article
DE tri-axial force; capacitance measurement; humanoid robot; tactile
sensing; force sensing
ID SHEAR FORCE MEASUREMENT; TACTILE SENSOR; TOUCH; SOFT; SURFACES
AB This paper introduces a novel tri-axial capacitive force sensor. The sensor can
measure the force vector, is embedded in a soft 7 mm-thick silicone skin, enables
temperature-sensitivity compensation, and has digital output. To measure the force
vector, tilted capacitive sensor elements facing in different directions are used to
differentiate the tangential forces. The sensor is intended for distributed contact
sensing in a robotic skin, but could also be used for other applications such as
novel haptic user interfaces in wearable devices. A series of experiments was
performed and showed good sensor characteristics. The tilted force transducers were
shown to be capable of detecting the force vector acting on the local sensor surface.
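As an illustration of how a force vector can be recovered from several tilted capacitive elements, the sketch below assumes each element's reading changes approximately linearly with the force component along its own facing direction and solves the resulting overdetermined system by least squares; the normal directions, the single sensitivity constant, and the linear model are illustrative assumptions, not the calibration used in the paper.

import numpy as np

# Assumed unit normals of four tilted sensing elements (two pairs facing opposite ways).
normals = np.array([[ 0.3,  0.0, 0.954],
                    [-0.3,  0.0, 0.954],
                    [ 0.0,  0.3, 0.954],
                    [ 0.0, -0.3, 0.954]])
sensitivity = 1.0  # assumed reading change per newton along each element's normal

def estimate_force(capacitance_changes):
    # Solve  (normals * sensitivity) @ f  ~=  capacitance_changes  for the 3D force f.
    f, *_ = np.linalg.lstsq(normals * sensitivity, capacitance_changes, rcond=None)
    return f

print(estimate_force(np.array([1.2, 0.6, 0.9, 0.9])))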
C1 [Somlor, Sophon; Hartanto, Richard Sahala; Schmitz, Alexander] Waseda Univ, Sch
Creat Sci & Engn, Sugano Lab, Shinjuku Ku, Tokyo 1690072, Japan.
[Sugano, Shigeki] Waseda Univ, Sch Creat Sci & Engn, Dept Modern Mech Engn,
Shinjuku Ku, Tokyo 1698555, Japan.
RP Somlor, S (reprint author), Waseda Univ, Sch Creat Sci & Engn, Sugano Lab,
Shinjuku Ku, Okubo 2-4-12, Tokyo 1690072, Japan.
EM sophon@sugano.mech.waseda.ac.jp
FU JSPS [25220005, 15K21443]; Research Institute for Science and
Engineering of Waseda University; Program for Leading Graduate Schools,
'Graduate Program for Embodiment Informatics' of the Ministry of
Education, Culture, Sports, Science and Technology
FX This work was supported by the JSPS Grant-in-Aid for Scientific Research
(S) [grant number 25220005]; JSPS Grant-in-Aid for Young Scientists (B)
[grant number 15K21443]; Research Institute for Science and Engineering
of Waseda University; and the Program for Leading Graduate Schools,
'Graduate Program for Embodiment Informatics' of the Ministry of
Education, Culture, Sports, Science and Technology.
CR Analog Devices, 2011, AD7147 CAPTOUCH PROG
Argall BD, 2010, ROBOT AUTON SYST, V58, P1159, DOI 10.1016/j.robot.2010.07.002
Bridgwater LB, 2012, IEEE INT CONF ROBOT, P3425, DOI 10.1109/ICRA.2012.6224772
Cabibihan JJ, 2014, IEEE SENS J, V14, P2118, DOI 10.1109/JSEN.2013.2295296
Cannata G, 2005, IEEE-RAS INT C HUMAN, P80
Cheng MY, 2010, SENSORS-BASEL, V10, P10211, DOI 10.3390/s101110211
Choi B., 2006, IEEE RSJ INT C, P3779
Dahiya RS, 2013, IEEE SENS J, V13, P4121, DOI 10.1109/JSEN.2013.2279056
Dahiya RS, 2010, IEEE T ROBOT, V26, P1, DOI 10.1109/TRO.2009.2033627
Davison B., 2010, TECHNIQUES ROBUST TO
Dobrzynska JA, 2013, J MICROMECH MICROENG, V23, DOI 10.1088/0960-
1317/23/1/015009
Goger D., 2009, IEEE INT C ROB AUT, P895
Hoshi T, 2006, IEEE INT CONF ROBOT, P3463, DOI 10.1109/ROBOT.2006.1642231
Hosoda K, 2006, ROBOT AUTON SYST, V54, P104, DOI 10.1016/j.robot.2005.09.019
Iwata H, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P3818, DOI 10.1109/ROBOT.2002.1014315
Iwata H., 2009, IEEE INT C ROB AUT, P580, DOI DOI 10.1109/ROBOT.2009.5152702
Jamone L, 2015, IEEE SENS J, V15, P4226, DOI 10.1109/JSEN.2015.2417759
Jockusch J, 1997, IEEE INT CONF ROBOT, P3080, DOI 10.1109/ROBOT.1997.606756
Kojima K, 2013, IEEE INT C INT ROBOT, P2479, DOI 10.1109/IROS.2013.6696705
Lee HK, 2008, J MICROELECTROMECH S, V17, P934, DOI 10.1109/JMEMS.2008.921727
Liu T, 2008, SENSORS, P1513
MAGGIALI M, 2008, 11 MECH FOR BIENN IN
Maiolino P, 2013, IEEE SENS J, V13, P3910, DOI 10.1109/JSEN.2013.2258149
Minato T, 2007, IEEE-RAS INT C HUMAN, P557, DOI 10.1109/ICHR.2007.4813926
Mittendorfer P, 2012, IEEE-RAS INT C HUMAN, P847, DOI
10.1109/HUMANOIDS.2012.6651619
Natale L, 2006, P 6 INT WORKSH EP RO, P87
Oddo CM, 2011, IEEE T ROBOT, V27, P522, DOI 10.1109/TRO.2011.2116930
Ohka M, 2014, PROCEDIA COMPUT SCI, V42, P17, DOI 10.1016/j.procs.2014.11.028
Puangmali P, 2008, IEEE SENS J, V8, P371, DOI 10.1109/JSEN.2008.917481
Saga S., 2006, P EUR 2006 PAR FRANC, P81
Schmitz A, 2014, IEEE RAS INT C HUM R, P1044
Schmitz A, 2011, IEEE T ROBOT, V27, P389, DOI 10.1109/TRO.2011.2132930
Somlor S, 2014, 2014 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION
(SII), P684, DOI 10.1109/SII.2014.7028121
Takahashi H, 2013, SENSOR ACTUAT A-PHYS, V199, P43, DOI
10.1016/j.sna.2013.05.002
Ulmen J, 2010, IEEE INT CONF ROBOT, P4836, DOI 10.1109/ROBOT.2010.5509295
Viry L, 2014, ADV MATER, V26, P2659, DOI 10.1002/adma.201305064
Yamada K., 2002, P 41 SICE ANN C AUG, P131
Yoshikai T, 2012, ADV ARTIF INTELL, V8
Yuan WZ, 2015, IEEE INT CONF ROBOT, P304, DOI 10.1109/ICRA.2015.7139016
NR 39
TC 4
Z9 4
U1 0
U2 17
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD NOV 2
PY 2015
VL 29
IS 21
SI SI
BP 1375
EP 1391
DI 10.1080/01691864.2015.1092394
PG 17
WC Robotics
SC Robotics
GA CV5QT
UT WOS:000364326800001
DA 2018-01-22
ER

PT J
AU Basoeki, F
DallaLibera, F
Ishiguro, H
AF Basoeki, Fransiska
DallaLibera, Fabio
Ishiguro, Hiroshi
TI How do People Expect Humanoids to Respond to Touch?
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Humanoid robot; Human-robot interaction (HRI); Touch; Haptics
ID ROBOT INTERACTION; IMPLEMENTATION; REFLEXES; EMOTION; SYSTEM
AB With close interaction between humans and robots expected to become more and
more frequent in the near future, tactile interaction is receiving increasing
interest. Many advances have been made in the fields of tactile sensing and touch
classification. A robot's reactions to touch are usually decided by the robot's
designers and fitted to a particular purpose. However, very little investigation has
been directed at the movements that common people expect from a robot that is being
touched. This paper provides an initial step in this direction. Responses that people
expect from a humanoid being touched were collected. These responses were then
classified by automatically grouping similar ones, which allows distinct types of
responses to be identified. An evaluation of how well this grouping matches common
sense was then performed. The results showed a strong correlation between the
automatic grouping and common sense, supporting the idea that the automatically
identified types of responses correspond to a plausible classification of robots'
responses to touch.
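The abstract only states that similar responses were grouped automatically. As a generic illustration of such a step (not the grouping procedure used in the paper), the sketch below clusters fixed-length response feature vectors hierarchically and cuts the dendrogram into a chosen number of groups; representing each collected response as a flattened, resampled joint-trajectory vector is an assumption.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def group_responses(response_features, n_groups):
    # response_features: (n_responses, n_features) array of response descriptors.
    z = linkage(response_features, method="ward")
    return fcluster(z, t=n_groups, criterion="maxclust")

labels = group_responses(np.random.rand(30, 12), n_groups=4)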
C1 [Basoeki, Fransiska; DallaLibera, Fabio; Ishiguro, Hiroshi] Osaka Univ, Grad Sch
Engn Sci, Dept Syst Innovat, Toyonaka, Osaka 560, Japan.
RP Basoeki, F (reprint author), Osaka Univ, Grad Sch Engn Sci, Dept Syst Innovat,
1-3 Machikaneyama, Toyonaka, Osaka 560, Japan.
EM fransiska.basoeki@irl.sys.es.osaka-u.ac.jp
FU JSPS Research Fellowship for Young Scientists; JSPS [26880014]
FX The first author was supported by JSPS Research Fellowship for Young
Scientists. The second author was supported by the JSPS Research
Activity Start-up Grant-in-Aid for Scientific Research, Project Number
26880014.
CR Abdi H, 2013, INT J SOC ROBOT, V5, P103, DOI 10.1007/s12369-012-0151-1
Akgun B, 2012, ACMIEEE INT CONF HUM, P391
Albu-Schaffer A, 2011, SPRINGER TRAC ADV RO, V70, P185
Amirabdollahian F, 2011, IEEE ENG MED BIO, P5347, DOI 10.1109/IEMBS.2011.6091323
Argall BD, 2010, ROBOT AUTON SYST, V58, P1159, DOI 10.1016/j.robot.2010.07.002
Banerjee A, 2004, IEEE INT CONF FUZZY, P149
Bartneck C, 2007, AI SOC, V21, P217, DOI 10.1007/s00146-006-0052-7
Bauer C, 2010, IEEE INT C INT ROBOT, P2572, DOI 10.1109/IROS.2010.5648900
Ben Amor H, 2009, LECT NOTES ARTIF INT, V5803, P492
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Cakmak M, 2012, ACMIEEE INT CONF HUM, P17
Dahiya RS, 2010, IEEE T ROBOT, V26, P1, DOI 10.1109/TRO.2009.2033627
Dahl TS, 2011, ADV INTERACT STUD, V2, P281
Dalla Libera F, 2009, ROBOT AUTON SYST, V57, P846, DOI
10.1016/j.robot.2009.03.013
Duchaine V, 2009, IEEE INT C ROB AUT, P3676, DOI [10.1109/ROBOT.2009.5152595,
DOI 10.1109/ROBOT.2009.5152595]
Evers V, 2008, HUM ROB INT HRI 2008, P139
Field Tiffany, 2003, TOUCH
Fishel JA, 2012, P IEEE RAS-EMBS INT, P1122, DOI 10.1109/BioRob.2012.6290741
Fritzsche M, 2011, 6 ACM IEEE INT C HUM, P139
Gibbons P., 2012, 7 ACM IEEE INT C HUM
Giuliani M, 2010, INT J SOC ROBOT, V2, P253, DOI 10.1007/s12369-010-0052-0
Goetz J, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P55
Grunwald G, 2001, ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P347, DOI
10.1109/ROMAN.2001.981928
Hersch M, 2008, IEEE T ROBOT, V24, P1463, DOI 10.1109/TRO.2008.2006703
Hertenstein MJ, 2006, EMOTION, V6, P528, DOI 10.1037/1528-3542.6.3.528
Hertenstein MJ, 2002, HUM DEV, V45, P70, DOI 10.1159/000048154
Hosoda K, 2006, ROBOT AUTON SYST, V54, P104, DOI 10.1016/j.robot.2005.09.019
Kanda T, 2008, IEEE T ROBOT, V24, P725, DOI 10.1109/TRO.2008.921566
Kormushev Petar, 2011, 2011 IEEE International Conference on Robotics and
Automation, P3970
Lecanuet J. P., 2002, INTELLECTICA, V1, P29
Libera FD, 2007, IEEE-RAS INT C HUMAN, P352, DOI 10.1109/ICHR.2007.4813893
Maeda Y., 2008, 2008 IEEE INT C ROB, P19
Minato T., 2009, WORKSH SYN INT 2009
Miwa H, 2003, IEEE INT CONF ROBOT, P3588
Muir DW, 2002, HUM DEV, V45, P95, DOI 10.1159/000048155
Naya F., 1999, IEEE SMC'99 Conference Proceedings. 1999 IEEE International
Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), P1030, DOI
10.1109/ICSMC.1999.825404
Noda Tomoyuki, 2007, 2007 IEEE/RSJ International Conference on Intelligent
Robots and Systems, P1099
Pais AL, 2013, INT J SOC ROBOT, V5, P477, DOI 10.1007/s12369-013-0204-0
Peternel L, 2014, AUTON ROBOT, V36, P123, DOI 10.1007/s10514-013-9361-0
Pierris G, 2010, 2010 IEEE RO-MAN, P330, DOI 10.1109/ROMAN.2010.5598654
Powers A, 2005, 2005 IEEE International Workshop on Robot and Human Interactive
Communication (RO-MAN), P158
Raffle H. S., 2004, P SIGCHI C HUM FACT, P647, DOI DOI 10.1145/985692.985774
Rau PLP, 2009, COMPUT HUM BEHAV, V25, P587, DOI 10.1016/j.chb.2008.12.025
Routasalo P, 1999, J ADV NURS, V30, P843, DOI 10.1046/j.1365-2648.1999.01156.x
Russell J.A., 1997, PSYCHOL FACIAL EXPRE, P295
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
Schmitz A, 2011, IEEE T ROBOT, V27, P389, DOI 10.1109/TRO.2011.2132930
Silvera-Tawil D, 2014, INT J SOC ROBOT, V6, P489, DOI 10.1007/s12369-013-0223-x
SUITA K, 1995, IEEE INT CONF ROBOT, P3089, DOI 10.1109/ROBOT.1995.525724
Takei Takayoshi, 2008, IEEJ Transactions on Industry Applications, V128, P767,
DOI 10.1541/ieejias.128.767
Tsuji T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P3178, DOI 10.1109/IROS.2009.5354562
VORMBROCK JK, 1988, J BEHAV MED, V11, P509, DOI 10.1007/BF00844843
VOYLES RM, 1995, IROS '95 - 1995 IEEE/RSJ INTERNATIONAL CONFERENCE ON
INTELLIGENT ROBOTS AND SYSTEMS: HUMAN ROBOT INTERACTION AND COOPERATIVE ROBOTS,
PROCEEDINGS, VOL 3, P7, DOI 10.1109/IROS.1995.525854
Wang HB, 2012, IEEE INT CONF ROBOT, P3134, DOI 10.1109/ICRA.2012.6224862
Yamada Y, 1997, IEEE-ASME T MECH, V2, P230, DOI 10.1109/3516.653047
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
Yohanan S, 2011, 6 ACM IEEE INT C HUM, P473
Yohanan S, 2012, INT J SOC ROBOT, V4, P163, DOI 10.1007/s12369-011-0126-7
Yoshikai T, 2012, ADV ARTIF INTELL, V8
Ze Ji, 2011, 2011 RO-MAN: The 20th IEEE International Symposium on Robot and
Human Interactive Communication, P433, DOI 10.1109/ROMAN.2011.6005261
NR 60
TC 2
Z9 2
U1 0
U2 4
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD NOV
PY 2015
VL 7
IS 5
BP 743
EP 765
DI 10.1007/s12369-015-0318-7
PG 23
WC Robotics
SC Robotics
GA CW5PL
UT WOS:000365048400014
DA 2018-01-22
ER

PT J
AU Weng, YH
Sugahara, Y
Hashimoto, K
Takanishi, A
AF Weng, Yueh-Hsuan
Sugahara, Yusuke
Hashimoto, Kenji
Takanishi, Atsuo
TI Intersection of "Tokku" Special Zone, Robots, and the Law: A Case Study
on Legal Impacts to Humanoid Robots
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Regulation of robotics; Robot law; Human-robot co-existence; Empirical
legal studies; RT special zone; Humanoid robots
ID FOOT MECHANISM
AB The unique "Tokku" Special Zone for Robot-ics Empirical Testing and Development
(RT special zone) originated in Japan. Since 2003, the world's first RT special
zone had already established in Fukuoka Prefecture, Fukuoka City and Kitakyushu
City. At that time, Takanishi Laboratory, Humanoid Robotics Institute of Waseda
University had conducted many empirical testing within several different spots of
the special zone to evaluate the feasibility for bipedal humanoid robots on public
roads from 2004 to 2007. It is also known as the world's first public roads testing
for bipedal robots. The history of RT special zone is merely 10 years long, but
there are already many special zones established in Fukuoka, Osaka, Gifu, Kanagawa
and Tsukuba. As the development of robotics and its submergence to the society
expand, the importance of RT special zone as an interface for robots and society
will be more apparent. In this paper, our main focus is to view the impacts of the
"Tokku" special zone system to the human-robot co-existence society. We would like
to make a systematic review for RT special zone, and further to investigate the
relationship between RT special zone, robots and the law through a case study on
legal impacts regarding bipedal humanoid robots in which the materials for the case
study come from Waseda University's experiment on WL-16RII and WABIAN-2R at the
Fukuoka RT special zone.
C1 [Weng, Yueh-Hsuan] Peking Univ, Sch Law, Inst Internet Law, Beijing 100871,
Peoples R China.
[Sugahara, Yusuke] Kokushikan Univ, Sch Sci & Engn, Setagaya Ku, Tokyo 1548515,
Japan.
[Hashimoto, Kenji] Waseda Univ, RISE, Shinjuku Ku, Tokyo 1620044, Japan.
[Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Dept Modern Mech Engn,
Shinjuku Ku, Tokyo 1628480, Japan.
RP Weng, YH (reprint author), Peking Univ, Sch Law, Inst Internet Law, 5 Yiheyuan
Rd, Beijing 100871, Peoples R China.
EM yhweng@pku.edu.cn; ysugahar@kokushikan.ac.jp; k-hashimoto@ieee.org;
takanisi@waseda.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766; Sugahara,
Yusuke/0000-0003-0222-7180
CR Arkin RC, 2008, P 3 ACM IEEE INT C H
Asada M, 2005, J ROBOT SOC JPN, V23, P150
Asaro PM, 2012, ROBOTICS BUSINESS RE
The Cabinet of Japan, 2014, OUTL ROB RE IN PRESS
The Cabinet Secretariat of Japan, 2002, OUTL LAW SPEC ZON ST
The Cabinet Secretariat of Japan, 2011, INTR COMPR SPEC ZON
The Cabinet Secretariat of Japan, 2003, SPEC ZON STRUCT REF
The Cabinet Secretariat of Japan, 2002, OUTL FUK RT SPEC ZON
Catsoulis J, 2015, NY TIMES
Corkill E, 2009, JAPAN TIMES
Edwards L, 2009, PHYS
Ferri G, 2011, P 2011 IEEE INT C RO
Fujiwara K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1920
Ganapati Priya, 2009, WIRED
Hagita N, 2011, ICAR2011 WORKSH URB
Hashimoto K, 2006, IEEE INT CONF ROBOT, P1219, DOI 10.1109/ROBOT.2006.1641875
Hirukawa H, 2011, PAMPHLET PROJECT PRA
Inoue K., 2012, J ROBOTICS SOC, V30, P234
Ishiguro I, 2012, HUMANS ARTS ANDROIDS
Iwamura Y, 2011, P 6 ACM IEEE INT C H
Jones W, 2014, U MICHIGAN OPEN ROBO
Kagami S, 2007, SPRINGER TRAC ADV RO, V28, P103
Kang HJ, 2010, IEEE INT CONF ROBOT, P5167, DOI 10.1109/ROBOT.2010.5509348
Infield L, 1780, LECT ETHICS
Kayama K, 2006, NICT NEWS, V358
Kelley R, 2010, ADV ROBOTICS, V24, P1861, DOI 10.1163/016918610X527194
Wang Y, 2014, BUSINESS WEEKLY, V1381, P138
Levy David, 2007, LOVE SEX ROBOTS EVOL
Lin P., 2014, WIRED
[Trade and Industry Ministry of Economy], 2004, 2025 HUM ROB CO EX S
Miyashita T, 2012, UBIQUITOUS NETWORK R
Mukaidono M, 2007, BAS CONC SAF DES ISO
Murray P., 2012, FORBES
National Police Agency, 2006, PROC ROAD US PERM RO, V3
National Police Agency, 2003, SPEC REG MEAS FAC RO, V63
National Police Agency, 2012, SPEC REG MEAS MANN M, V92
Hattori H, 2007, J ROBOT SOC JPN, V25, P1159
Robot Policy Council, 2007, SAF GUID NEXT GEN RO
Robot Policy Council, 2006, ROB POL COUNC REP MA
Salvini P, 2010, P IEEE INT S ROB HUM
Salvini P, 2010, ADV ROBOTICS, V24, P1901, DOI 10.1163/016918610X527211
Sugahara Y., 2006, P 1 IEEE RAS EMBS IN, P781
Sugano S, 2003, WABOT HOUSE RES ACHI, P7
Takanishi A, 2009, WORKSH LEG SAF ISS R
Takanishi A, 2005, FUKUOKA REPORT ROBOT
Unsigned Editorial, 2012, NEV LAWM APPR
Unsigned Editorial, 2014, ROBOCON MAZAZINE ROB, P18
Unsigned Editorial, 2011, TSUK COMPR SPEC ZON
Weng Y-H., 2007, P 11 INT C ART INT L, P205, DOI 10.1145/1276318.1276358
Hillenbrand D, 2014, J SCI TECHNOL LAW, V110, P632
Weng YH, 2014, THESIS
Weng YH, 2014, INTERNET LAW REV, V17
Weng YH, 2011, INTERNET LAW REV, V13, P88
Weng YH, 2009, INT J SOC ROBOT, V1, P267, DOI 10.1007/s12369-009-0019-1
Weng YH, 2010, ADV ROBOTICS, V24, P1919, DOI 10.1163/016918610X527220
Zax D, 2010, WALL STREET J
[Anonymous], 2013, PHYS
NR 57
TC 3
Z9 3
U1 3
U2 4
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD NOV
PY 2015
VL 7
IS 5
BP 841
EP 857
DI 10.1007/s12369-015-0287-x
PG 17
WC Robotics
SC Robotics
GA CW5PL
UT WOS:000365048400019
DA 2018-01-22
ER

PT J
AU Shi, C
Shiomi, M
Kanda, T
Ishiguro, H
Hagita, N
AF Shi, Chao
Shiomi, Masahiro
Kanda, Takayuki
Ishiguro, Hiroshi
Hagita, Norihiro
TI Measuring Communication Participation to Initiate Conversation in
Human-Robot Interaction
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Behavior modeling; Initiation of interaction; Natural-HRI
ID SPACE
AB Consider a situation where a robot initiates a conversation with a person. What
is the appropriate timing for such an action? Where is a good position from which
to make the initial greeting? In this study, we analyze human interactions and
establish a model for a natural way of initiating conversation. Our model mainly
involves the participation state and spatial formation. When a person prepares to
participate in a conversation and a particular spatial formation occurs, he/she
feels that he/she is participating in the conversation; once he/she perceives
his/her participation, he/she maintains particular spatial formations. Theories
have addressed human communication related to these concepts, but they have only
covered situations after people start to talk. In this research, we created a
participation state model for measuring communication participation and provided a
clear set of guidelines for how to structure a robot's behavior to start and
maintain a conversation based on the model. Our model precisely describes the
constraints and expected behaviors for the phase of initiating conversation. We
implemented our proposed model in a humanoid robot and conducted both a system
evaluation and a user evaluation in a shop scenario experiment. The results showed
that the proposed model achieved good recognition accuracy of the interaction state
in a conversation, and the robot implemented with the proposed model was rated best
in terms of appropriateness of behaviors and interaction efficiency compared with two
alternative conditions.
C1 [Shi, Chao; Shiomi, Masahiro; Kanda, Takayuki; Ishiguro, Hiroshi; Hagita,
Norihiro] Adv Telecommun Res Inst Int IRC HIL, Keihanna Science City, Kyoto, Japan.
[Shi, Chao; Ishiguro, Hiroshi] Osaka Univ, Dept Syst Innovat, Toyonaka, Osaka
5608531, Japan.
RP Shi, C (reprint author), Adv Telecommun Res Inst Int IRC HIL, 2-2-2 Hikaridai,
Keihanna Science City, Kyoto, Japan.
EM shi.chao@irl.sys.es.osaka-u.ac.jp; m-shiomi@atr.jp; kanda@atr.jp;
ishiguro@atr.jp; hagita@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825; Shi, Chao/0000-0002-4544-4639;
SHIOMI, Masahiro/0000-0003-4338-801X
FU KAKENHI [25240042]
FX We'd like to thank everyone who helped with this project. This research
was supported by KAKENHI 25240042.
CR Bergstrom N, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2623, DOI
10.1109/IROS.2008.4650896
Breazeal C., 2005, IEEE RSJ INT C INT R, P383, DOI DOI 10.1109/IROS.2005.1545011
Carton D, 2013, 13 INT S EXP ROB, P199
CIOLEK TM, 1980, SOCIOL INQ, V50, P237, DOI 10.1111/j.1475-682X.1980.tb00022.x
Clark H. H., 1996, USING LANGUAGE
COHEN J, 1960, EDUC PSYCHOL MEAS, V20, P37, DOI 10.1177/001316446002000104
Dautenhahn K., 2006, ACM SIGCHI SIGART IN, P172
Gockley R., 2007, ACM IEEE INT C HUM R, P17
Goffman E, 1963, BEHAV PUBLIC PLACE N
Hall E.T., 1966, HIDDEN DIMENSION MAN
HARTNETT JJ, 1974, J PSYCHOL, V87, P129, DOI 10.1080/00223980.1974.9915683
Huttenrauch H., 2006, IEEE RSJ INT C INT R, P5052, DOI DOI
10.1109/IROS.2006.282535
Katagiri Y, 2006, LECT NOTES ARTIF INT, V4012, P365
Kendon A., 1990, CONDUCTING INTERACTI, P209
Kendon A., 1980, ASPECTS NONVERBAL CO, P29
Kuno Y, 2007, CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1 AND 2,
P1191
Kuzuoka H, 2000, ACM C COMP SUPP COOP, P155
Kuzuoka H., 2010, ACM IEEE INT C HUM R, P285
Loth S, 2013, FRONT PSYCHOL, V4, DOI 10.3389/fpsyg.2013.00557
Michalowski MP, 2006, IEEE INT WORKSH ADV, P762
Mondada L, 2009, J PRAGMATICS, V41, P1977, DOI 10.1016/j.pragma.2008.09.019
Mutlu B., 2009, ACM IEEE INT C HUM R, P61
Mutlu B., 2006, IEEE INT C HUM ROB H, P518, DOI 10.1109/ichr.2006.321322
Nakano YI, 2010, IUI 2010, P139
Pacchierotti E., 2006, IEEE INT WORKSH ROB, P315
Rich C., 2010, ACM IEEE INT C HUM R, P375
Satake S., 2009, ACM IEEE INT C HUM R, P109
Scassellati B. M., 2001, FDN THEORY MIND HUMA
Shi C, 2010, 2010 IEEE RSJ INT C, P5302
Shi C., 2011, P ROB SCI SYST RSS 2
Shiomi M., 2010, ACM IEEE INT C HUM R, P31
Sidner C. L., 2004, IUI 04, P78, DOI DOI 10.1145/964442.964458
Torta E., 2011, ICSR, V7072, P21
Weiss A, 2011, INTERACT, V2, P230
Woods S., 2005, P AISB 05 S ROB COMP, P126
Woods S, 2006, INTERACT COMPUT, V18, P1390, DOI 10.1016/j.intcom.2006.05.001
Yamaoka F, 2010, IEEE T ROBOT, V26, P187, DOI 10.1109/TRO.2009.2035747
Yamazaki K, 2007, ECSCW 2007: PROCEEDINGS OF THE 10TH EUROPEAN CONFERENCE ON
COMPUTER-SUPPORTED COOPERATIVE WORK, P61, DOI 10.1007/978-1-84800-031-5_4
NR 38
TC 1
Z9 1
U1 1
U2 6
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD NOV
PY 2015
VL 7
IS 5
BP 889
EP 910
DI 10.1007/s12369-015-0285-z
PG 22
WC Robotics
SC Robotics
GA CW5PL
UT WOS:000365048400022
DA 2018-01-22
ER

PT J
AU Pan, YD
Okada, H
Uchiyama, T
Suzuki, K
AF Pan, Yadong
Okada, Haruka
Uchiyama, Toshiaki
Suzuki, Kenji
TI On the Reaction to Robot's Speech in a Hotel Public Space
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Robot's speech; Listening to direct speech; Overhearing indirect speech;
First response; Second response; Hotel public space
AB This article presents a study using social robots in a hotel public space. The
objective of this study is to investigate people's responses to different kinds of
robot speech. We used the humanoid robot NAO to set up three experiments: (i) a
single robot greeted each guest; (ii) a single robot engaged in soliloquy about hotel
information; (iii) two robots held conversations with each other about hotel
information. In each experiment, the hotel guests' behavior in response to the
robot's speech was studied using four patterns that reflect the level of a guest's
interest in the robot's speech. With these behavior patterns, we analyzed the guests'
first and second responses after they encountered the robot in the hotel. Our study
helps in understanding the practical effectiveness of robot speech in a public-space
context, which could inspire the design of hotel-assistive robots.
C1 [Pan, Yadong] Univ Tsukuba, Empowerment Informat Program, Tsukuba, Ibaraki,
Japan.
[Okada, Haruka] Univ Tsukuba, Sch Art & Design, Tsukuba, Ibaraki, Japan.
[Uchiyama, Toshiaki] Univ Tsukuba, Fac Art & Design, Tsukuba, Ibaraki, Japan.
[Suzuki, Kenji] Univ Tsukuba, Fac Intelligent Interact Technol, Tsukuba,
Ibaraki, Japan.
RP Pan, YD (reprint author), Univ Tsukuba, Empowerment Informat Program, Tsukuba,
Ibaraki, Japan.
EM panyadong@ai.iit.tsukuba.ac.jp; kenji@ieee.org
CR Aarts H, 2000, J PERS SOC PSYCHOL, V78, P53, DOI 10.1037//0022-3514.78.1.53
ASOH H, 1997, P INT JOINT C ART IN, P880
BISCHOFF R, 2003, P INT S EXP ROB ISER, V5, P64
Duranti A, 1997, J LINGUIST ANTHROPOL, V7, P63, DOI DOI 10.1525/JLIN.1997.7.1.63
Eyssel F., 2012, P 7 ANN ACM IEEE INT, P125
Hayashi K., 2007, P ACM IEEE INT C HUM, P137
Heenan Brandon, 2014, P 2014 C DES INT SYS, P855
Kanda T., 2009, P 4 ACM IEEE INT C H, P173
Kendon A., 1990, CONDUCTING INTERACTI
Lee M. K., 2009, P CHI APR, P3769
Pan YD, 2013, ACMIEEE INT CONF HUM, P205, DOI 10.1109/HRI.2013.6483573
Pitsch Karola, 2009, P INT S ROB HUM INT, P985
PLATON E, 2006, ENV MULTIAGENT SYSTE, V3830, P121
Roy N., 2000, P 38 ANN M ASS COMP, P93, DOI DOI 10.3115/1075218.1075231
Sabelli AM, 2011, ACMIEEE INT CONF HUM, P37, DOI 10.1145/1957656.1957669
Schegloff EA, 1967, THESIS
Tanaka F, 2007, P NATL ACAD SCI USA, V104, P17954, DOI 10.1073/pnas.0707769104
Tapus A, 2007, IEEE ROBOT AUTOM MAG, V14, P35, DOI 10.1109/MRA.2007.339605
Thrun S., 1999, IEEE INT C ROB AUT, V3, P1999
NR 19
TC 2
Z9 2
U1 0
U2 0
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD NOV
PY 2015
VL 7
IS 5
BP 911
EP 920
DI 10.1007/s12369-015-0320-0
PG 10
WC Robotics
SC Robotics
GA CW5PL
UT WOS:000365048400023
DA 2018-01-22
ER

PT J
AU Ricardez, GAG
Yamaguchi, A
Takamatsu, J
Ogasawara, T
AF Ricardez, Gustavo Alfonso Garcia
Yamaguchi, Akihiko
Takamatsu, Jun
Ogasawara, Tsukasa
TI Asymmetric Velocity Moderation for human-safe robot control
SO ADVANCED ROBOTICS
LA English
DT Article
DE human-robot interaction; humanoid robots; reactive strategy; human
safety
ID SOCIAL FORCE MODEL; PLANNER
AB With the increasing physical proximity of human-robot interaction, ensuring that
robots do not harm surrounding humans has become crucial. Therefore, we propose
asymmetric velocity moderation as a low-level controller for robotic systems to
enforce human-safe motions. While our method prioritizes human safety, it also
maintains the robot's efficiency. Our proposed method restricts the robot's speed
according to (1) the displacement vector between human and robot, and (2) the
robot's velocity vector. That is to say, both the distance and the relative
direction of movement are taken into account to restrict the robot's motion.
Through real-robot and simulation experiments using simplified HRI scenarios and
dangerous situations, we demonstrate that our method is able to maintain the
robot's efficiency without undermining human safety.
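A minimal sketch of the asymmetric idea summarized above, assuming a simple form in which only the component of motion directed toward the human is slowed, with the slowdown growing as the distance shrinks; the scaling function, thresholds, and variable names are illustrative assumptions, not the controller from the paper.

import numpy as np

def moderate_velocity(v_robot, p_robot, p_human, d_stop=0.3, d_free=1.5):
    # Displacement from robot to human and the current separation distance.
    d_vec = p_human - p_robot
    dist = np.linalg.norm(d_vec)
    if dist >= d_free:
        return v_robot                     # far from the human: no restriction
    # Fraction of the velocity directed toward the human (0 when moving away).
    speed = np.linalg.norm(v_robot)
    toward = max(0.0, np.dot(v_robot, d_vec) / (speed * dist + 1e-9))
    # Distance-based scale: 0 at d_stop, 1 at d_free.
    scale = np.clip((dist - d_stop) / (d_free - d_stop), 0.0, 1.0)
    # Asymmetry: only motion toward the human is penalized.
    return (1.0 - toward * (1.0 - scale)) * v_robot

v_safe = moderate_velocity(np.array([0.5, 0.0, 0.0]),
                           np.array([0.0, 0.0, 0.0]),
                           np.array([0.8, 0.0, 0.0]))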
C1 [Ricardez, Gustavo Alfonso Garcia; Yamaguchi, Akihiko; Takamatsu, Jun;
Ogasawara, Tsukasa] Nara Inst Sci & Technol, Grad Sch Informat Sci, Nara 6300192,
Japan.
RP Ricardez, GAG (reprint author), Nara Inst Sci & Technol, Grad Sch Informat Sci,
8916-5 Takayama, Nara 6300192, Japan.
EM garcia-g@is.naist.jp
CR De Santis A, 2008, MECH MACH THEORY, V43, P253, DOI
10.1016/j.mechmachtheory.2007.03.003
Fraichard T, 2012, AUTON ROBOT, V32, P173, DOI 10.1007/s10514-012-9278-z
Gates B., 2007, SCI AM, V296, P58, DOI DOI 10.1038/SCIENTIFICAMERICAN0107-58
Haddadin Sami, 2013, SAFE ROBOTS APPROACH
HELBING D, 1995, PHYS REV E, V51, P4282, DOI 10.1103/PhysRevE.51.4282
Ikuta K, 2003, INT J ROBOT RES, V22, P281, DOI 10.1177/0278364903022005001
[Anonymous], 2011, 102181 ISO
Kulic D, 2005, ROBOTIC SYSTEMS, V22, P383
Kulic D, 2007, AUTON ROBOT, V22, P149, DOI 10.1007/s10514-006-9009-4
Lacevic B, 2013, ROBOTICA, V31, P861, DOI 10.1017/S0263574713000143
Lakoba TI, 2005, SIMUL-T SOC MOD SIM, V81, P339, DOI 10.1177/0037549705052772
Malm T, 2010, INT J SOC ROBOT, V2, P221, DOI 10.1007/s12369-010-0057-8
Ogure T, 2009, IND ROBOT, V36, P469, DOI 10.1108/01439910910980196
Sisbot EA, 2007, IEEE T ROBOT, V23, P874, DOI 10.1109/TRO.2007.904911
Sisbot EA, 2012, IEEE T ROBOT, V28, P1045, DOI 10.1109/TRO.2012.2196303
Yamada Y, 2009, ADV ROBOTICS, V23, P1513, DOI 10.1163/016918609X12469688277217
NR 16
TC 1
Z9 1
U1 0
U2 8
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD SEP 2
PY 2015
VL 29
IS 17
BP 1111
EP 1125
DI 10.1080/01691864.2015.1034173
PG 15
WC Robotics
SC Robotics
GA CR6WK
UT WOS:000361488700001
DA 2018-01-22
ER

PT J
AU Balaguer, C
Asfour, T
Metta, G
Yokoi, K
Monje, CA
AF Balaguer, Carlos
Asfour, Tamim
Metta, Giorgio
Yokoi, Kazuhito
Monje, Concepcion A.
TI Special Issue on "2014 IEEE-RAS International Conference on Humanoid
Robots" Humans and Humanoids Face to Face
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Editorial Material
C1 [Balaguer, Carlos; Monje, Concepcion A.] Univ Carlos III Madrid, E-28903 Getafe,
Spain.
[Asfour, Tamim] Karlsruhe Inst Technol, D-76021 Karlsruhe, Germany.
[Metta, Giorgio] Italian Inst Technol, Genoa, Italy.
[Yokoi, Kazuhito] Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki, Japan.
RP Balaguer, C (reprint author), Univ Carlos III Madrid, E-28903 Getafe, Spain.
EM balaguer@ing.uc3m.es; asfour@kit.edu; giorgio.metta@iit.it;
kazuhito.yokoi@aist.go.jp; cmonje@ing.uc3m.es
RI Yokoi, Kazuhito/K-2046-2012
OI Yokoi, Kazuhito/0000-0003-3942-2027
NR 0
TC 0
Z9 0
U1 0
U2 6
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD SEP
PY 2015
VL 12
IS 3
SI SI
AR 1502001
DI 10.1142/S0219843615020016
PG 4
WC Robotics
SC Robotics
GA CR7ZC
UT WOS:000361569800001
DA 2018-01-22
ER

PT J
AU Yamaguchi, A
Atkeson, CG
Ogasawara, T
AF Yamaguchi, Akihiko
Atkeson, Christopher G.
Ogasawara, Tsukasa
TI Pouring Skills with Planning and Learning Modeled from Human
Demonstrations
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article; Proceedings Paper
CT 14th IEEE-RAS International Conference on Humanoid Robots - Humans and
Humanoids Face to Face
CY NOV 18-20, 2014
CL Madrid, SPAIN
SP Univ Carlos III Madrid
DE Pouring; learning from demonstration; learning from practice; planning
AB We explore how to represent, plan and learn robot pouring. This is a case study
of a complex task that has many variations and involves manipulating non-rigid
materials such as liquids and granular substances. Variations of pouring we
consider are the type of pouring (such as pouring into a glass or spreading a sauce
on an object), material, container shapes, initial poses of containers and target
amounts. The robot learns to select appropriate behaviors from a library of skills,
such as tipping, shaking and tapping, to pour a range of materials from a variety
of containers. The robot also learns to select behavioral parameters. Planning
methods are used to adapt skills for some variations such as initial poses of
containers. We show using simulation and experiments on a PR2 robot that our
pouring behavior model is able to plan and learn to handle a wide variety of
pouring tasks. This case study is a step towards enabling humanoid robots to
perform tasks of daily living.
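The record above describes learning to select a pouring skill (tipping, shaking, tapping) and its parameters from practice. As a rough illustration of one way such a selection can be learned (not the planning-and-learning framework from the paper), the sketch below keeps a running success estimate per (skill, material) pair and chooses greedily with occasional exploration.

import random
from collections import defaultdict

class SkillSelector:
    def __init__(self, skills, epsilon=0.1):
        self.skills = skills
        self.epsilon = epsilon
        self.score = defaultdict(float)   # running mean of observed success per (skill, material)
        self.count = defaultdict(int)

    def choose(self, material):
        if random.random() < self.epsilon:
            return random.choice(self.skills)        # explore
        return max(self.skills, key=lambda s: self.score[(s, material)])

    def update(self, skill, material, success):
        key = (skill, material)
        self.count[key] += 1
        self.score[key] += (success - self.score[key]) / self.count[key]

selector = SkillSelector(["tipping", "shaking", "tapping"])
skill = selector.choose("granular")
selector.update(skill, "granular", success=1.0)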
C1 [Yamaguchi, Akihiko; Atkeson, Christopher G.] Carnegie Mellon Univ, Inst Robot,
Pittsburgh, PA 15213 USA.
[Yamaguchi, Akihiko; Ogasawara, Tsukasa] Nara Inst Sci & Technol, Grad Sch
Informat Sci, Nara 6300192, Japan.
RP Yamaguchi, A (reprint author), Carnegie Mellon Univ, Inst Robot, 5000 Forbes
Ave, Pittsburgh, PA 15213 USA.
EM info@akihikoy.net; cga@cs.cmu.edu; ogasawar@is.naist.jp
OI Yamaguchi, Akihiko/0000-0001-9060-3478
FU Japan Society for the Promotion of Science under the Strategic Young
Researcher Overseas Visits Program for Accelerating Brain Circulation
[G2503]
FX We would like to thank Dr. Scott Niekum of Carnegie Mellon University, who
assisted our learning-from-demonstration research. We are
also thankful to Professor Maxim Likhachev's Search-based Planning Lab
in Carnegie Mellon University for making their PR2 robot available for
experiments. A. Yamaguchi was funded in part by Japan Society for the
Promotion of Science under the Strategic Young Researcher Overseas
Visits Program for Accelerating Brain Circulation G2503.
CR BENTIVEGNA DC, 2004, THESIS GEORGIA I TEC
Billard A., 2013, SCHOLARPEDIA, V8, P3824, DOI 10.4249/SCHOLARPEDIA.3824
Bollini M., 2013, SPRINGER TRACTS ADV, V88, P481
Brandi S., 2014, P 14 IEEE RAS INT C, P616
da Silva B. Castro, 2012, P 29 INT C MACH LEAR, P1679
Hansen N, 2006, STUD FUZZ SOFT COMP, V192, P75
Kaelbling LP, 2013, INT J ROBOT RES, V32, P1194, DOI 10.1177/0278364913484072
Kober J, 2012, AUTON ROBOT, V33, P361, DOI 10.1007/s10514-012-9290-3
Kormushev P, 2010, IEEE INT C INT ROBOT, P3232, DOI 10.1109/IROS.2010.5649089
Kroemer O, 2012, IEEE INT CONF ROBOT, P2605, DOI 10.1109/ICRA.2012.6224957
Kroemer OB, 2010, ROBOT AUTON SYST, V58, P1105, DOI 10.1016/j.robot.2010.06.001
KRONANDER K, 2012, P IEEE INT C ROB AUT, P1842
Lavalle S. M., 1998, 9811 IOW STAT U DEP
Lucas B. D., 1981, P INT JOINT C ART IN, V81, P674
Maitin-Shepard J, 2010, IEEE INT CONF ROBOT, P2308, DOI
10.1109/ROBOT.2010.5509439
Muhlig M., 2009, P IEEE INT C ROB AUT, P1177
Pastor P, 2009, P IEEE INT C ROB AUT, P763
Phillips M., 2013, P ROB SCI SYST RSS 1
Rozo L, 2013, 2013 9TH INTERNATIONAL WORKSHOP ON ROBOT MOTION AND CONTROL
(ROMOCO), P227, DOI 10.1109/RoMoCo.2013.6614613
Sutton RS, 1998, REINFORCEMENT LEARNI
Tamosiunaite M, 2011, ROBOT AUTON SYST, V59, P910, DOI
10.1016/j.robot.2011.07.004
Yamaguchi A., 2011, P 11 IEEE RAS INT C, P247
Zucker M, 2012, IEEE INT CONF ROBOT, P1850, DOI 10.1109/ICRA.2012.6225036
NR 23
TC 4
Z9 4
U1 1
U2 2
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD SEP
PY 2015
VL 12
IS 3
SI SI
AR 1550030
DI 10.1142/S0219843615500309
PG 39
WC Robotics
SC Robotics
GA CR7ZC
UT WOS:000361569800008
DA 2018-01-22
ER

PT J
AU Shiga, M
Tangkaratt, V
Sugiyama, M
AF Shiga, Motoki
Tangkaratt, Voot
Sugiyama, Masashi
TI Direct conditional probability density estimation with sparse feature
selection
SO MACHINE LEARNING
LA English
DT Article; Proceedings Paper
CT European Conference on Machine Learning and Principles and Practice of
Knowledge Discovery in Databases (ECMLPKDD)
CY SEP 07-11, 2015
CL Porto, PORTUGAL
SP BNP PARIBAS, ONR Global, Zalando, HUAWEI, Deloitte, Amazon, Xarevis, Farfetch,
NOS, Machine Learning, Data Min & Knowledge, Discovery Deloitte, KNIME, ECCAI,
Cliqz, Technicolor, Univ Bari Aldo Moro
DE Conditional density estimation; Feature selection; Sparse structured
norm
ID DIMENSION REDUCTION; QUANTILE REGRESSION; INVERSE REGRESSION; ALGORITHM;
PATH
AB Regression is a fundamental problem in statistical data analysis, which aims at
estimating the conditional mean of output given input. However, regression is not
informative enough if the conditional probability density is multi-modal,
asymmetric, and heteroscedastic. To overcome this limitation, various estimators of
conditional densities themselves have been developed, and a kernel-based approach
called least-squares conditional density estimation (LS-CDE) was demonstrated to be
promising. However, LS-CDE still suffers from large estimation error if the input
contains many irrelevant features. In this paper, we therefore propose an extension
of LS-CDE called sparse additive CDE (SA-CDE), which allows automatic feature
selection in CDE. SA-CDE applies kernel LS-CDE to each input feature in an additive
manner and penalizes the whole solution by a group-sparse regularizer. We also give
a subgradient-based optimization method for SA-CDE training that scales well to
high-dimensional large data sets. Through experiments with benchmark and humanoid
robot transition datasets, we demonstrate the usefulness of SA-CDE in noisy CDE
problems.
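The optimization ingredient named above is a group-sparse penalty over per-feature parameter blocks, which is commonly handled with a group soft-thresholding (proximal) step inside a (sub)gradient scheme. The sketch below shows only that step, under those assumptions; it is not the full SA-CDE training procedure.

import numpy as np

def group_soft_threshold(blocks, lam, step):
    # blocks: list of parameter vectors, one block per input feature.
    # Blocks with small norm are set exactly to zero, which is what
    # removes irrelevant features from the additive model.
    out = []
    for b in blocks:
        norm = np.linalg.norm(b)
        shrink = max(0.0, 1.0 - step * lam / (norm + 1e-12))
        out.append(shrink * b)
    return out

blocks = [np.array([0.02, -0.01]), np.array([1.5, -0.7])]
print(group_soft_threshold(blocks, lam=0.5, step=0.1))  # the first block is zeroed out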
C1 [Shiga, Motoki] Gifu Univ, Gifu, Gifu 5011193, Japan.
[Tangkaratt, Voot; Sugiyama, Masashi] Tokyo Inst Technol, Meguro Ku, Tokyo
1528552, Japan.
RP Shiga, M (reprint author), Gifu Univ, 1-1 Yanagido, Gifu, Gifu 5011193, Japan.
EM shiga_m@gifu-u.ac.jp; voot@sg.cs.titech.ac.jp; sugi@cs.titech.ac.jp
FU JSPS KAKENHI [25870322, 23120004]
FX Motoki Shiga was supported by JSPS KAKENHI 25870322. Masashi Sugiyama
was supported by JSPS KAKENHI 23120004 and AOARD. Authors thank Dr.
Ichiro Takeuchi, Nagoya Institute of Technology, for kindly providing
his source codes.
CR Beck A, 2009, SIAM J IMAGING SCI, V2, P183, DOI 10.1137/080716542
Bishop C. M., 2006, PATTERN RECOGNITION
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Cook RD, 2005, J AM STAT ASSOC, V100, P410, DOI 10.1198/016214504000001501
DEMPSTER AP, 1977, J ROY STAT SOC B MET, V39, P1
Fan JQ, 1996, BIOMETRIKA, V83, P189, DOI 10.1093/biomet/83.1.189
Guyon I., 2003, Journal of Machine Learning Research, V3, P1157, DOI
10.1162/153244303322753616
Hall P, 1999, J AM STAT ASSOC, V94, P154, DOI 10.2307/2669691
Hastie T, 2004, J MACH LEARN RES, V5, P1391
Hastie T., 2001, ELEMENTS STAT LEARNI
Holmes M. P., 2007, P C UNC ART INT VANC, P175
LI KC, 1991, J AM STAT ASSOC, V86, P316, DOI 10.2307/2290563
Li YJ, 2007, J AM STAT ASSOC, V102, P255, DOI 10.1198/016214506000000979
Rasmussen CE, 2005, ADAPT COMPUT MACH LE, P1
Silverman B., 1986, DENSITY ESTIMATION S
Sra S, 2012, OPTIMIZATION FOR MACHINE LEARNING, P1
Sugiyama M, 2010, IEICE T INF SYST, VE93D, P583, DOI 10.1587/transinf.E93.D.583
Sutton R. S., 1998, INTRO REINFORCEMENT
Takeuchi I, 2006, J MACH LEARN RES, V7, P1231
Takeuchi I, 2009, NEURAL COMPUT, V21, P533, DOI 10.1162/neco.2008.10-07-628
Tresp V, 2001, ADV NEUR IN, V13, P654
Weisberg S., 1985, APPL LINEAR REGRESSI
Yuan M, 2006, J ROY STAT SOC B, V68, P49, DOI 10.1111/j.1467-9868.2005.00532.x
NR 23
TC 1
Z9 1
U1 1
U2 7
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0885-6125
EI 1573-0565
J9 MACH LEARN
JI Mach. Learn.
PD SEP
PY 2015
VL 100
IS 2-3
SI SI
BP 161
EP 182
DI 10.1007/s10994-014-5472-x
PG 22
WC Computer Science, Artificial Intelligence
SC Computer Science
GA CP2ZR
UT WOS:000359747100002
DA 2018-01-22
ER

PT J
AU Baddoura, R
Venture, G
AF Baddoura, Ritta
Venture, Gentiane
TI This Robot is Sociable: Close-up on the Gestures and Measured Motion of
a Human Responding to a Proactive Robot
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Social robotics; Sociable robot; Motion measure; Human-robot
interaction; Non-verbal communication; Greeting gestures
ID EMOTION; EXPRESSION; MOVEMENT
AB During an unannounced encounter, we study the appreciation of the robot's
sociable character (measured via the answers to a questionnaire) and the motion (arm
and head movement frequency and smoothness, measured using IMU sensors) of two humans
interacting with a proactive humanoid; the experiment involves twenty pairs of
participants. We also investigate the dependencies between the participants'
responses to the robot's non-verbal actions and their perception of its sociability.
Our results show that the more the humans find the robot sociable, the more they tend
to respond to its engaging gestures, the higher the frequency of their arm motion,
and the smoother their head motion when interacting with it. These results therefore
show that measurable physical movements can be significant indicators for
investigating a social robot's sociable character from the human partner's point of
view, as well as for possibly inferring the human's tendency to interact with it.
More generally, the sociable trait attributed to an assistive robot operating in a
public or private environment seems to be one of the keys to the success of
human-robot interactions.
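The study reports movement frequency and smoothness computed from IMU data, but the abstract does not give the exact formulas. The sketch below shows one common pair of choices (dominant frequency from an FFT and a jerk-based smoothness score); both are assumptions for illustration, not necessarily the measures used in the study.

import numpy as np

def dominant_frequency(accel, fs):
    # Frequency (Hz) of the largest non-DC spectral peak of an acceleration trace.
    spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]

def jerk_smoothness(accel, fs):
    # Negative mean squared jerk: values closer to zero indicate smoother motion.
    jerk = np.gradient(accel, 1.0 / fs)
    return -np.mean(jerk ** 2)

t = np.arange(0.0, 5.0, 0.01)
acc = np.sin(2 * np.pi * 1.2 * t)
print(dominant_frequency(acc, fs=100), jerk_smoothness(acc, fs=100))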
C1 [Baddoura, Ritta] Telecom Ecole Management, Inst Mines Telecom, Evry, France.
[Venture, Gentiane] Tokyo Univ Agr & Technol, Tokyo, Japan.
RP Baddoura, R (reprint author), Telecom Ecole Management, Inst Mines Telecom,
Evry, France.
EM rittabaddoura@yahoo.fr; venture@cc.tuat.ac.jp
RI Venture, Gentiane/E-7060-2013
OI Venture, Gentiane/0000-0001-7767-4765
FU "Women's Future Development Organization", Japan
FX The authors acknowledge the support provided by the "Women's Future
Development Organization", Japan.
CR Baddoura R., 2012, P IEEE RAS INT C HUM, P234
Baddoura R, 2013, INT J SOC ROBOT, V5, P529, DOI 10.1007/s12369-013-0207-x
Boone RT, 1998, DEV PSYCHOL, V34, P1007, DOI 10.1037//0012-1649.34.5.1007
CASTELLANO G, 2007, P AFF COMP INT INT 2, V4738, P71
Dael N, 2013, PERCEPTION, V42, P642, DOI 10.1068/p7364
Eyssel F., 2010, P IEEE INT S ROB HUM, P681
Firth R, 1972, VERBAL BODILY RITUAL, P1
Gillespie D. L., 1983, SOC THEORY, V1, P120, DOI 10.2307/202049
Harrigan JA, 2005, NEW HDB METHODS NONV, P137
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T., 2003, P INT JOINT C ART IN, P177
Kim A, 2013, ACMIEEE INT CONF HUM, P159, DOI 10.1109/HRI.2013.6483550
Nejat G, 2013, INT J SOC ROBOT, V5, P171
Powers A, 2006, P 1 ACM SIGCHI SIGAR, P218, DOI DOI 10.1145/1121241.1121280
Rohrer B, 2002, J NEUROSCI, V22, P8297
Sabanovic S., 2006, P 9 INT WORKSH ADV M, P596
Sabelli A., 2011, INT C HUM ROB INT, P37
Salem M, 2013, INT J SOC ROBOT, V5, P313, DOI 10.1007/s12369-013-0196-9
Scherer KR, 1990, ENZYKLOPADIE PSYCHOL, P345
Wallbott HG, 1998, EUR J SOC PSYCHOL, V28, P879, DOI 10.1002/(SICI)1099-
0992(1998110)28:6<879::AID-EJSP901>3.0.CO;2-W
Zhang R, 2011, P 10 INT C AUT AG M, P1301
NR 21
TC 3
Z9 3
U1 0
U2 8
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD AUG
PY 2015
VL 7
IS 4
BP 489
EP 496
DI 10.1007/s12369-015-0279-x
PG 8
WC Robotics
SC Robotics
GA CW5PK
UT WOS:000365048300007
DA 2018-01-22
ER

PT J
AU Pham, QC
Nakamura, Y
AF Pham, Quang-Cuong
Nakamura, Yoshihiko
TI A New Trajectory Deformation Algorithm Based on Affine Transformations
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Biomimetics; cognitive human-robot interaction; humanoid robots;
kinematics; path planning for manipulators
ID DRAWING MOVEMENTS; MOTION PERCEPTION; VELOCITY; SPEED; KINEMATICS;
GENERATION; CURVATURE; FIGURES; LAW
AB We propose a new approach to deform robot trajectories based on affine
transformations. At the heart of our approach is the concept of affine invariance:
Trajectories are deformed in order to avoid unexpected obstacles or to achieve new
objectives but, at the same time, certain definite features of the original motions
are preserved. Such features include, for instance, trajectory smoothness,
periodicity, affine velocity, or more generally, all affine-invariant features,
which are of particular importance in human-centered applications. Furthermore,
this approach enables one to "convert" the constraints and optimization objectives
regarding the deformed trajectory into constraints and optimization objectives
regarding the matrix of the deformation in a natural way, making constraints
satisfaction and optimization substantially easier and faster in many cases. As
illustration, we present an application to the transfer of human movements to
humanoid robots while preserving equiaffine velocity, a well-established invariant
of human hand movements. Building on the presented affine deformation framework, we
finally revisit the concept of trajectory redundancy from the viewpoint of group
theory.
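As a rough, hedged illustration of deforming a whole trajectory with a single affine map (the planar similarity map, the example trajectory, and the new goal below are invented for this sketch and are not the paper's formulation), every waypoint can be passed through the same x -> Ax + b so that affine-invariant features such as smoothness and straight segments are untouched while the endpoints are retargeted:

```python
import numpy as np

def affine_deform(traj, new_start, new_goal):
    """Deform a planar trajectory with an affine map x -> A x + b.

    Here A is chosen (illustratively) as a rotation plus uniform scaling that
    maps the original start-goal chord onto the new one; any affine A would
    preserve affine-invariant features of the original motion.
    traj: (N, 2) array of planar waypoints.
    """
    p0, p1 = traj[0], traj[-1]
    v_old = p1 - p0                      # assumed non-degenerate (p0 != p1)
    v_new = np.asarray(new_goal) - np.asarray(new_start)
    z = complex(*v_new) / complex(*v_old)   # similarity map as a complex ratio
    A = np.array([[z.real, -z.imag],
                  [z.imag,  z.real]])
    b = np.asarray(new_start) - A @ p0
    return traj @ A.T + b

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 50)
    # A smooth original trajectory from (0, 0) to (1, 0) with a bump.
    original = np.stack([t, 0.2 * np.sin(np.pi * t)], axis=1)
    deformed = affine_deform(original, new_start=(0.0, 0.0), new_goal=(0.5, 0.8))
    print(deformed[0], deformed[-1])     # endpoints match the new start/goal
```

Because all points share one affine map, the deformed motion keeps the qualitative shape of the original; the paper goes further by treating the entries of the deformation matrix themselves as optimization variables.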
C1 [Pham, Quang-Cuong] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore
639798, Singapore.
[Nakamura, Yoshihiko] Univ Tokyo, Dept Mechano Informat, Tokyo 1138654, Japan.
RP Pham, QC (reprint author), Nanyang Technol Univ, Sch Mech & Aerosp Engn,
Singapore 639798, Singapore.
EM cuong.pham@normalesup.org; nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014; Pham, Quang-Cuong/G-5050-2012
OI Nakamura, Yoshihiko/0000-0001-7162-5102; Pham, Quang
Cuong/0000-0001-9605-4940
FU Japan Society for the Promotion of Science, Japan [20220001]; Nanyang
Technological University, Singapore; Ministry of Education, Singapore
FX This work was supported by a postdoctoral fellowship and by
Grants-in-Aid for Scientific Research-Category S (20220001) from the
Japan Society for the Promotion of Science, Japan, by a Start-Up Grant
from Nanyang Technological University, Singapore, and by a Tier 1 grant
from the Ministry of Education, Singapore.
CR Bennequin D, 2009, PLOS COMPUT BIOL, V5, DOI 10.1371/journal.pcbi.1000426
Berthoz A., 2009, LA SIMPLEXITE
Casile A, 2010, CEREB CORTEX, V20, P1647, DOI 10.1093/cercor/bhp229
Cheng P, 2008, IEEE T ROBOT, V24, P488, DOI 10.1109/TRO.2007.913993
Dayan E, 2007, P NATL ACAD SCI USA, V104, P20582, DOI 10.1073/pnas.0710033104
deSperati C, 1997, J NEUROSCI, V17, P3932
Duindam V, 2010, INT J ROBOT RES, V29, P789, DOI 10.1177/0278364909352202
Escande A, 2014, INT J ROBOT RES, V33, P1006, DOI 10.1177/0278364914521306
Gleicher M., 1998, P SIGGRAPH 98, V98, P33
Hicheur H, 2005, EXP BRAIN RES, V162, P145, DOI 10.1007/s00221-004-2122-8
Ijspeert A. J., 2002, P IEEE INT C ROB AUT, V2, P1398, DOI DOI
10.1109/R0B0T.2002.1014739
Kandel S, 2000, PERCEPT PSYCHOPHYS, V62, P706, DOI 10.3758/BF03206917
Kanoun O, 2011, IEEE T ROBOT, V27, P785, DOI 10.1109/TRO.2011.2142450
LACQUANITI F, 1983, ACTA PSYCHOL, V54, P115, DOI 10.1016/0001-6918(83)90027-6
Lamiraux F, 2004, IEEE T ROBOTIC AUTOM, V20, P967, DOI 10.1109/TRO.2004.829459
Laumond J. P., 1998, ROBOT MOTION PLANNIN
Lee J, 1999, COMP GRAPH, P39
Maoz U, 2014, J NEUROPHYSIOL, V111, P336, DOI 10.1152/jn.01071.2012
Maoz U, 2009, J NEUROPHYSIOL, V101, P1002, DOI 10.1152/jn.90702.2008
Nakamura Y., 1990, ADV ROBOTICS REDUNDA
Pham Q.-C., 2012, P ROB SCI SYST, P329
Pham Q.-C., 2011, P ROB SCI SYST, P265
Pollick FE, 1997, VISION RES, V37, P347, DOI 10.1016/S0042-6989(96)00116-2
Pollick FE, 2009, CORTEX, V45, P325, DOI 10.1016/j.cortex.2008.03.010
Pham QC, 2012, IEEE INT CONF ROBOT, P3212, DOI 10.1109/ICRA.2012.6225121
ROSE C, 1996, P SIGGRAPH 96, P147
SCHWARTZ AB, 1994, SCIENCE, V265, P540, DOI 10.1126/science.8036499
Seiler K, 2010, SPRINGER TRACTS ADV, V68, P37
SHNEIDERMAN B, 1997, P 2 INT C INT US INT, P33, DOI DOI 10.1145/238218.238281
Shoemake K., 1992, P C GRAPH INT, V92, P258
Siciliano B, 1991, P IEEE INT C ADV ROB, V2, P1211, DOI DOI
10.1109/ICAR.1991.240390
Ude A., 2000, P IEEE INT C ROB AUT, V3, P2223
Ude A, 2010, IEEE T ROBOT, V26, P800, DOI 10.1109/TRO.2010.2065430
VIVIANI P, 1985, J EXP PSYCHOL HUMAN, V11, P828, DOI 10.1037/0096-1523.11.6.828
VIVIANI P, 1991, J EXP PSYCHOL HUMAN, V17, P198, DOI 10.1037/0096-1523.17.1.198
Yamane K, 2004, ACM T GRAPHIC, V23, P532, DOI 10.1145/1015706.1015756
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
NR 38
TC 2
Z9 2
U1 1
U2 9
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2015
VL 31
IS 4
BP 1054
EP 1063
DI 10.1109/TRO.2015.2450413
PG 10
WC Robotics
SC Robotics
GA CO5TK
UT WOS:000359221200019
DA 2018-01-22
ER

PT J
AU Lu, ZG
Sekiyama, K
Aoyama, T
Hasegawa, Y
Kobayashi, T
Fukuda, T
AF Lu, Zhiguo
Sekiyama, Kosuke
Aoyama, Tadayoshi
Hasegawa, Yasuhisa
Kobayashi, Taisuke
Fukuda, Toshio
TI Energetically Efficient Ladder Descent Motion With Internal Stress and
Body Motion Optimized for a Multilocomotion Robot
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Closed chain; energy efficiency; internal stress; multilocomotion robot
(MLR)
ID HARNESS ASSEMBLY SYSTEMS; HUMANOID ROBOT; ELECTRIC CONNECTORS; FORCE
DISTRIBUTION; FAULT-DETECTION; GENERATION; MODEL; FORMULATION
AB Energy efficiency of locomotion is a significant issue for autonomous mobile
robots. This paper focuses on the pace gait ladder descent motion that has a closed
kinematic chain formed by the robot links and the environment. To reduce the energy
cost in the closed kinematic chain, we propose an optimal control strategy by
optimizing the internal stress and motion trajectories in parametric form. As the
main contributions of this paper, three types of energetically efficient ladder
descent motions are generated with different motion-mode assumptions. Critical
factors, including cycle time, horizontal distance between the robot and the
vertical ladder, and value of internal stress, are analyzed theoretically.
Simulation and experimental results indicate that the proposed control strategy is
effective for planning an energetically efficient ladder descent motion.
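The record gives no equations, so the following is only a toy sketch of optimizing a parametric motion for an energy-like cost; the sinusoidal parameterization, the acceleration-based cost surrogate, and all numbers are invented here and do not reproduce the authors' internal-stress formulation (assumes NumPy and SciPy):

```python
import numpy as np
from scipy.optimize import minimize

T, N = 2.0, 100                       # cycle time [s] and number of samples
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

def joint_trajectory(params):
    """Parametric joint trajectory q(t): fixed start/end postures, shaped by
    two free harmonic amplitudes (purely illustrative)."""
    a1, a2 = params
    q_start, q_end = 0.0, -1.0        # hypothetical descent of one joint [rad]
    q = q_start + (q_end - q_start) * t / T
    return q + a1 * np.sin(np.pi * t / T) + a2 * np.sin(2.0 * np.pi * t / T)

def energy_cost(params):
    """Surrogate for energy: integral of squared joint acceleration
    (unit inertia), standing in for a torque-based cost."""
    q = joint_trajectory(params)
    qdd = np.gradient(np.gradient(q, dt), dt)
    return float(np.sum(qdd ** 2) * dt)

res = minimize(energy_cost, x0=[0.1, 0.1], method="Nelder-Mead")
print("optimal shape parameters:", res.x, "cost:", res.fun)
```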
C1 [Lu, Zhiguo] Northeastern Univ, Dept Mech Engn & Automat, Shenyang 110819,
Peoples R China.
[Sekiyama, Kosuke; Hasegawa, Yasuhisa; Kobayashi, Taisuke; Fukuda, Toshio]
Nagoya Univ, Dept Micronano Syst Engn, Nagoya, Aichi 4648603, Japan.
[Aoyama, Tadayoshi] Hiroshima Univ, Dept Syst Cybernet, Higashihiroshima
7398527, Japan.
RP Lu, ZG (reprint author), Northeastern Univ, Dept Mech Engn & Automat, Shenyang
110819, Peoples R China.
EM zglu@me.neu.edu.cn; sekiyama@mein.nagoya-u.ac.jp;
aoyama@robotics.hiroshima-u.ac.jp; hasegawa@mein.nagoya-u.ac.jp;
kobayashi@robo.mein.nagoya-u.ac.jp; fukuda@mein.nagoya-u.ac.jp
FU Fundamental Research Funds for the Central Universities [N140304008];
Research Fund for the Doctoral Program of Higher Education of China
[20130042120027]; University Innovation Team of Liaoning Province
[LT2014006]
FX Manuscript received May 17, 2014; revised September 3, 2014 and October
11, 2014; accepted November 20, 2014. Date of publication January 30,
2015; date of current version June 26, 2015. This work was supported in
part by the Fundamental Research Funds for the Central Universities
(N140304008), in part by the Research Fund for the Doctoral Program of
Higher Education of China (20130042120027), and in part by the
University Innovation Team of Liaoning Province (LT2014006).
CR Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Craig J. J., 2004, INTRO ROBOTICS MECH
Erbatur K, 2009, IEEE T IND ELECTRON, V56, P835, DOI 10.1109/TIE.2008.2005150
Escande A, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P435, DOI 10.1109/IROS.2009.5354371
Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
Harada K, 2006, IEEE T ROBOT, V22, P568, DOI 10.1109/TRO.2006.870649
Huang J, 2008, IEEE-ASME T MECH, V13, P86, DOI 10.1109/TMECH.2007.915063
Huang JA, 2010, IEEE T CONTR SYST T, V18, P1207, DOI 10.1109/TCST.2009.2034735
Hung MH, 2000, IEEE T SYST MAN CY B, V30, P529, DOI 10.1109/3477.865170
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
KERR J, 1986, INT J ROBOT RES, V4, P3, DOI 10.1177/027836498600400401
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
KUMAR VR, 1988, IEEE T ROBOTIC AUTOM, V4, P657, DOI 10.1109/56.9303
Lengagne S., 2010, P IEEE RAS INT C HUM, P14
Lu ZG, 2012, IEEE INT C INT ROBOT, P2216, DOI 10.1109/IROS.2012.6385887
Norton R. L., 1999, DESIGN MACHINERY INT
Ohashi E, 2007, IEEE T IND ELECTRON, V54, P1632, DOI 10.1109/TIE.2007.894728
Pchelkin S, 2011, P IEEE RSJ INT C INT, P5094
Raibert M. H., 1981, T ASME, V103, P126
Righetti L, 2011, IEEE INT CONF ROBOT, P1085, DOI 10.1109/ICRA.2011.5980156
Saab L, 2013, IEEE T ROBOT, V29, P346, DOI 10.1109/TRO.2012.2234351
Sato T, 2011, IEEE T IND ELECTRON, V58, P376, DOI 10.1109/TIE.2010.2052535
Sentis L, 2010, IEEE T ROBOT, V26, P483, DOI 10.1109/TRO.2010.2043757
Shimmyo S, 2013, IEEE T IND ELECTRON, V60, P5137, DOI 10.1109/TIE.2012.2221111
Ugurlu B, 2010, IEEE T IND ELECTRON, V57, P1701, DOI 10.1109/TIE.2009.2032439
Wakita K, 2013, IEEE-ASME T MECH, V18, P285, DOI 10.1109/TMECH.2011.2169980
Xu Y., 1988, Proceedings of the 1988 IEEE International Conference on Robotics
and Automation (Cat. No.88CH2555-1), P1173, DOI 10.1109/ROBOT.1988.12220
Yu XX, 2014, IEEE T IND ELECTRON, V61, P1053, DOI 10.1109/TIE.2013.2266080
NR 28
TC 1
Z9 1
U1 0
U2 9
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD AUG
PY 2015
VL 62
IS 8
BP 4972
EP 4984
DI 10.1109/TIE.2015.2396872
PG 13
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA CL9BG
UT WOS:000357268300031
DA 2018-01-22
ER

PT J
AU Takahashi, Y
Hatano, H
Maida, Y
Usui, K
Maeda, Y
AF Takahashi, Yasutake
Hatano, Hiroki
Maida, Yosuke
Usui, Kazuyuki
Maeda, Yoichiro
TI Motion Segmentation and Recognition for Imitation Learning and Influence
of Bias for Learning Walking Motion of Humanoid Robot Based on Human
Demonstrated Motion
SO JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT
INFORMATICS
LA English
DT Article
DE motion segmentation and recognition; imitation learning; learning bias;
humanoid robot; via-point representation
AB Two main issues arise in practical imitation learning by humanoid robots
observing human behavior: the first is segmenting and recognizing motion
demonstrated naturally by a human being, and the second is utilizing the
demonstrated motion for imitation learning. Specifically, the first involves motion
segmentation and recognition based on the humanoid robot motion repertoire for
imitation learning and the second introduces learning bias based on demonstrated
motion in the humanoid robot's imitation learning to walk. We show the validity of
our motion segmentation and recognition in a practical way and report the results
of our investigation into the influence of learning bias in humanoid robot
simulations.
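As a loose illustration of the first issue, segmentation and recognition against a fixed motion repertoire, the sketch below slices a demonstrated 1-D signal into windows and labels each window with the nearest stored template; the window length, the distance measure, and the toy repertoire are assumptions made for this example, not the authors' method:

```python
import numpy as np

def recognize_segments(stream, repertoire, window):
    """Label each window of a demonstrated motion stream with the closest
    motion template from the robot's repertoire (1-D signals, equal window
    length, Euclidean distance)."""
    labels = []
    for start in range(0, len(stream) - window + 1, window):
        segment = stream[start:start + window]
        dists = {name: np.linalg.norm(segment - tpl)
                 for name, tpl in repertoire.items()}
        labels.append(min(dists, key=dists.get))
    return labels

if __name__ == "__main__":
    w = 20
    tt = np.linspace(0, 1, w)
    repertoire = {"swing": np.sin(np.pi * tt), "hold": np.zeros(w)}
    demo = np.concatenate([np.sin(np.pi * tt), np.zeros(w), np.sin(np.pi * tt)])
    print(recognize_segments(demo, repertoire, w))   # ['swing', 'hold', 'swing']
```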
C1 [Takahashi, Yasutake; Hatano, Hiroki] Univ Fukui, Grad Sch Engn, Dept Human &
Artificial Intelligent Syst, 3-9-1 Bunkyo, Fukui, Fukui 9108507, Japan.
[Maida, Yosuke; Usui, Kazuyuki] Univ Fukui, Fac Engn, Dept Human & Artificial
Intelligent Syst, Fukui, Fukui 9108507, Japan.
[Maeda, Yoichiro] Osaka Inst Technol, Fac Engn, Dept Robot, Asahi Ku, Osaka
5358585, Japan.
RP Takahashi, Y (reprint author), Univ Fukui, Grad Sch Engn, Dept Human &
Artificial Intelligent Syst, 3-9-1 Bunkyo, Fukui, Fukui 9108507, Japan.
EM yasutake@ir.his.u-fukui.ac.jp; hhatano@ir.his.u-fukui.ac.jp;
kusui@ir.his.u-fukui.ac.jp; maeda@bme.oit.ac.jp
CR Asada M, 1996, MACH LEARN, V23, P279, DOI 10.1023/A:1018237008823
Hamahata K, 2008, ISDA 2008: EIGHTH INTERNATIONAL CONFERENCE ON INTELLIGENT
SYSTEMS DESIGN AND APPLICATIONS, VOL 3, PROCEEDINGS, P121, DOI
10.1109/ISDA.2008.325
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Kohl N, 2004, IEEE INT CONF ROBOT, P2619, DOI 10.1109/ROBOT.2004.1307456
Okuzawa Y., 2011, T I ELECT ENG JAPAN C, V131, P655
Miyamoto H, 1998, NEURAL NETWORKS, V11, P1331, DOI 10.1016/S0893-6080(98)00062-8
Okuzawa M. K. Y., 2009, IEEE INT S MICR HUM, P586
Okuzawa Y., 2011, IEEJ T ELECT INFORM, V131, P655
Schaal S.Ijspeert A., 2004, COMPUTATIONAL APPROA, P199
Takahashi S., 2011, J ADV COMPUTATIONAL, V15, P1030
Takahashi Y., 2012, P WCCI 2012 IEEE WOR, P945
Takahashi Y, 2010, ROBOT AUTON SYST, V58, P855, DOI 10.1016/j.robot.2010.03.006
Whitehead S., 1993, ROBOT LEARNING, P45
NR 13
TC 0
Z9 0
U1 0
U2 0
PU FUJI TECHNOLOGY PRESS LTD
PI TOKYO
PA 4F TORANOMON SANGYO BLDG, 2-29, TORANOMON 1-CHOME, MINATO-KU, TOKYO,
105-0001, JAPAN
SN 1343-0130
EI 1883-8014
J9 J ADV COMPUT INTELL
JI J. Adv. Comput. Intell. Inform.
PD JUL
PY 2015
VL 19
IS 4
BP 532
EP 543
PG 12
WC Computer Science, Artificial Intelligence
SC Computer Science
GA DD8PW
UT WOS:000370190200006
DA 2018-01-22
ER

PT J
AU Ugur, E
Nagai, Y
Celikkanat, H
Oztop, E
AF Ugur, Emre
Nagai, Yukie
Celikkanat, Hande
Oztop, Erhan
TI Parental scaffolding as a bootstrapping mechanism for learning grasp
affordances and imitation skills
SO ROBOTICA
LA English
DT Article
DE Developmental robotics; Affordance; Imitation; Parental scaffolding;
Motionese
ID MOTOR RESONANCE; HUMANOID ROBOT; INFANTS; OBJECTS; SHAPE; TASK;
MOTIONESE; MOTHERESE
AB Parental scaffolding is an important mechanism that speeds up infant
sensorimotor development. Infants pay stronger attention to the features of the
objects highlighted by parents, and their manipulation skills develop earlier than
they would in isolation due to caregivers' support. Parents are known to make
modifications in infant-directed actions, which are often called "motionese". The
features that might be associated with motionese are amplification, repetition and
simplification in caregivers' movements, which are often accompanied by increased
social signalling. In this paper, we extend our previously developed affordance
learning framework to enable our hand-arm robot equipped with a range camera to
benefit from parental scaffolding and motionese. We first present our results on
how parental scaffolding can be used to guide the robot learning and to modify its
crude action execution to speed up the learning of complex skills. For this
purpose, an interactive human caregiver-infant scenario was realized with our
robotic setup. This setup allowed the caregiver's modification of the ongoing reach
and grasp movement of the robot via physical interaction. This enabled the
caregiver to make the robot grasp the target object, which in turn could be used by
the robot to learn the grasping skill. In addition to this, we also show how
parental scaffolding can be used in speeding up imitation learning. We present the
details of our work that takes the robot beyond simple goal-level imitation, making
it a better imitator with the help of motionese.
C1 [Ugur, Emre] Univ Innsbruck, Inst Comp Sci, A-6020 Innsbruck, Austria.
[Ugur, Emre] Adv Telecommun Res Inst, Kyoto, Japan.
[Nagai, Yukie] Osaka Univ, Grad Sch Engn, Osaka, Japan.
[Celikkanat, Hande] Middle E Tech Univ, Dept Comp Engn, Ankara, Turkey.
[Oztop, Erhan] Ozyegin Univ, Dept Comp Sci, Istanbul, Turkey.
RP Ugur, E (reprint author), Univ Innsbruck, Inst Comp Sci, A-6020 Innsbruck,
Austria.
EM emre.ugur@uibk.ac.at
FU European Community's Seventh Framework Programme [270273, 321700];
JSPS/MEXT [24000012, 24119003]; Ministry of Internal Affairs and
Communications, Japan
FX This research was partially supported by European Community's Seventh
Framework Programme FP7/2007-2013 (Specific Programme Cooperation, Theme
3, Information and Communication Technologies) under grant agreement no.
270273, Xperience; by European Community's Seventh Framework Programme
FP7/2007-2013 under the grant agreement no. 321700, Converge; and by
JSPS/MEXT Grants-in-Aid for ScientificResearch (Research Project Number:
24000012, 24119003). It was also partially funded by a contract in H23
with the Ministry of Internal Affairs and Communications, Japan,
entitled 'Novel and innovative R&D making use of brain structures'.
CR Argall BD, 2010, P 9 IEEE INT C DEV L, P7
Babic J, 2011, ADAPT BEHAV, V19, P250, DOI 10.1177/1059712311411112
Barrett TM, 2008, DEV PSYCHOBIOL, V50, P97, DOI 10.1002/dev.20280
Berk L. E., 1995, SCAFFOLDING CHILDREN
Bicchi A., 2000, P IEEE INT C ROB AUT, V1, P348
Billard A, 2001, CYBERNET SYST, V32, P155, DOI 10.1080/019697201300001849
Brand RJ, 2002, DEVELOPMENTAL SCI, V5, P72, DOI 10.1111/1467-7687.00211
Breazeal C., 1999, THESIS MIT CAMBRIDGE
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Carpenter M, 2002, CHILD DEV, V73, P1431, DOI 10.1111/1467-8624.00481
Detry R, 2010, STUD COMPUT INTELL, V264, P451
Detry R., 2011, PALADYN, V2, P1
FERNALD A, 1985, INFANT BEHAV DEV, V8, P181, DOI 10.1016/S0163-6383(85)80005-9
Fischer K, 2011, INTERACT STUD, V12, P134, DOI 10.1075/is.12.1.06fis
Gams A, 2009, AUTON ROBOT, V27, P3, DOI 10.1007/s10514-009-9118-y
Gergely G, 2002, NATURE, V415, P755
Goubet N., 2006, INFANT CHILD DEV, V15, P161
Haralick R. M., 1992, COMPUTER ROBOT VISIO, VI
Herzog A, 2014, AUTON ROBOT, V36, P51, DOI 10.1007/s10514-013-9366-8
HODAPP RM, 1984, CHILD DEV, V55, P772, DOI 10.2307/1130128
Kawato M, 2007, CURR OPIN NEUROBIOL, V17, P205, DOI 10.1016/j.conb.2007.03.004
Koterba EA, 2009, INFANT BEHAV DEV, V32, P437, DOI 10.1016/j.infbeh.2009.07.003
Kushida D., 2001, Artificial Life and Robotics, V5, P26, DOI 10.1007/BF02481317
MASATAKA N, 1992, INFANT BEHAV DEV, V15, P453, DOI 10.1016/0163-6383(92)80013-K
Moore B, 2012, ROBOT AUTON SYST, V60, P441, DOI 10.1016/j.robot.2011.09.002
Murata A, 2000, J NEUROPHYSIOL, V83, P2580
Nagai Y., 2008, P IEEE 7 INT C DEV L
Nagai Y., 2010, P 10 INT C EP ROB, P81
Nagai Y, 2007, P 4 INT S IM AN ART, P299
Nagai Y, 2009, IEEE T AUTON MENT DE, V1, P44, DOI 10.1109/TAMD.2009.2021090
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nehaniv CL, 2001, CYBERNET SYST, V32, P11, DOI 10.1080/019697201300001803
Pastor P, 2011, IEEE INT C INT ROBOT, P365, DOI 10.1109/IROS.2011.6048819
Paulus M, 2011, CHILD DEV, V82, P1047, DOI 10.1111/j.1467-8624.2011.01610.x
Peterson John, 2003, P 3 IEEE RAS INT C H, pI
Rohlfing KJ, 2006, ADV ROBOTICS, V20, P1183, DOI 10.1163/156855306778522532
Sahin E, 2007, ADAPT BEHAV, V15, P447, DOI 10.1177/1059712307084689
Saunders J., 2006, P 1 ACM SIGCHI SIGAR, DOI [DOI 10.1145/1121241.1121263,
10.1145/1121241.1121263]
Saunders J., 2007, INT J ADV ROBOT SYST, V4, P109
Saxena A, 2008, INT J ROBOT RES, V27, P157, DOI 10.1177/0278364907087172
Schaal S, 2006, ADAPTIVE MOTION OF ANIMALS AND MACHINES, P261, DOI 10.1007/4-
431-31381-8_23
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Tamis-LeMonda C. S., 2013, IEEE T AUTON MENTAL, V5
Tomasello Michael, 1996, P319, DOI 10.1016/B978-012273965-1/50016-9
Ude A, 2010, IEEE T ROBOT, V26, P800, DOI 10.1109/TRO.2010.2065430
Ugur E., 2011, P IEEE INT C HUM ROB, P480
Ugur E., 2009, P 9 INT C EP ROB EPI, V146, P177
Ugur E., 2012, IEEE RSJ INT C INT R, P3260
Ugur E, 2011, ROBOT AUTON SYST, V59, P580, DOI 10.1016/j.robot.2011.04.005
van Elk M, 2008, NEUROIMAGE, V43, P808, DOI 10.1016/j.neuroimage.2008.07.057
VANDERMEER ALH, 1995, SCIENCE, V267, P693, DOI 10.1126/science.7839147
WOOD D, 1976, J CHILD PSYCHOL PSYC, V17, P89, DOI 10.1111/j.1469-
7610.1976.tb00381.x
Zukow-Goldring P, 2007, NEUROCOMPUTING, V70, P2181, DOI
10.1016/j.neucom.2006.02.029
NR 53
TC 3
Z9 3
U1 1
U2 16
PU CAMBRIDGE UNIV PRESS
PI NEW YORK
PA 32 AVENUE OF THE AMERICAS, NEW YORK, NY 10013-2473 USA
SN 0263-5747
EI 1469-8668
J9 ROBOTICA
JI Robotica
PD JUN
PY 2015
VL 33
IS 5
SI SI
BP 1163
EP 1180
DI 10.1017/S0263574714002148
PG 18
WC Robotics
SC Robotics
GA CH9NW
UT WOS:000354363500010
DA 2018-01-22
ER

PT J
AU Kobayashi, T
Sekiyama, K
Aoyama, T
Fukuda, T
AF Kobayashi, Taisuke
Sekiyama, Kosuke
Aoyama, Tadayoshi
Fukuda, Toshio
TI Cane-supported walking by humanoid robot and falling-factor-based
optimal cane usage selection
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Humanoid robots; Cane usage; Selection algorithm for locomotion;
Q-learning
ID MODEL
AB In this study, we propose a scheme that enables a humanoid robot to perform
cane-supported walking and to select the optimal cane usage depending on its
condition. A reaction force is generated through the cane-ground interaction.
Although the optimal cane-ground contact position leads to the most effective
reaction force from the viewpoint of the walking capability, the cane position is
constrained during such contact. Thus, different cane utilities can be achieved
based on the priority assigned to the reaction force and cane position. In this
light, we propose cyclic, leg-like, and preventive usages, each of which has
different priorities. These different priorities lead to different utilities such
as the expansion of the support polygon, load distribution, and posture recovery.
The robot selects the most suitable of these three cane usages using a selection
algorithm for locomotion that is combined with Q-learning (Us-SAL). Through
simulations, the robot is made to learn the affinities between these cane usages
and falling factors as Q-values and to accordingly select the optimal cane usage.
As a result, the robot can perform the desired walking by selecting the optimal
cane usage depending on its condition. We conducted walking experiments for each
cane usage and found that each improved the walking stability for a suitable
falling factor. Furthermore, we experimentally validated Us-SAL; the robot selected
the optimal cane usage depending on its condition and walked in a complex
environment without falling. (C) 2015 Elsevier B.V. All rights reserved.
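The abstract describes selecting among three cane usages with Q-learning over falling factors. A minimal tabular sketch of such a selection loop is given below; the state discretization, the reward, and the mapping from falling factors to usages are invented for illustration and are not the paper's Us-SAL algorithm:

```python
import numpy as np

# Hypothetical discretization: states are dominant falling factors observed by
# the robot, actions are the three cane usages proposed in the paper.
STATES = ["lateral_tilt", "forward_tilt", "slip"]
ACTIONS = ["cyclic", "leg_like", "preventive"]
Q = np.zeros((len(STATES), len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def select_usage(s):
    """Epsilon-greedy selection of a cane usage for the current falling factor."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[s]))

def update(s, a, reward, s_next):
    """Standard Q-learning update; the reward would come from walking stability."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# Toy training loop with a made-up reward: each falling factor has one usage
# that stabilizes walking (reward 1), the others do not (reward 0).
best = {0: 0, 1: 1, 2: 2}
for _ in range(2000):
    s = int(rng.integers(len(STATES)))
    a = select_usage(s)
    update(s, a, 1.0 if a == best[s] else 0.0, int(rng.integers(len(STATES))))
print(ACTIONS[int(np.argmax(Q[0]))])     # learned usage for 'lateral_tilt'
```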
C1 [Kobayashi, Taisuke; Sekiyama, Kosuke] Nagoya Univ, Dept Micronano Syst Engn,
Nagoya, Aichi 4648601, Japan.
[Aoyama, Tadayoshi] Hiroshima Univ, Dept Syst Cybernet, Hiroshima 730, Japan.
[Fukuda, Toshio] Meijo Univ, Fac Sci & Engn, Nagoya, Aichi, Japan.
[Fukuda, Toshio] Nagoya Univ, Inst Adv Res, Nagoya, Aichi 4648601, Japan.
[Fukuda, Toshio] Beijing Inst Technol, Sch Mechatron Engn, Intelligent Robot
Inst, Beijing, Peoples R China.
RP Kobayashi, T (reprint author), Nagoya Univ, Dept Micronano Syst Engn, Nagoya,
Aichi 4648601, Japan.
EM kobayashi@robo.mein.nagoya-u.ac.jp
CR AKAIKE H, 1974, IEEE T AUTOMAT CONTR, VAC19, P716, DOI 10.1109/TAC.1974.1100705
Aoyama T, 2009, IEEE-ASME T MECH, V14, P712, DOI 10.1109/TMECH.2009.2032777
Bennett L, 1979, Bull Prosthet Res, P38
Boonsinsukh R, 2012, PHYS THER, V92, P1117, DOI 10.2522/ptj.20120036
Boonsinsukh R, 2009, ARCH PHYS MED REHAB, V90, P919, DOI
10.1016/j.apmr.2008.12.022
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Englsberger J, 2011, IEEE INT C INT ROBOT, P4420, DOI 10.1109/IROS.2011.6048045
Fukuda T., 2012, MULTILOCOMOTION ROBO
Kajita S, 2003, ADV ROBOTICS, V17, P131, DOI 10.1163/156855303321165097
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
Khatib O., 2014, IEEE INT C ROB AUT, P2319
Kim DS, 2009, J VISUAL IMPAIR BLIN, V103, P519
Kobayashi T, 2013, IEEE INT C INT ROBOT, P2616, DOI 10.1109/IROS.2013.6696725
Koyanagi K., 2008, IEEE RSJ INT C INT R, P2617
Kuehn D., 2014, INT J ADV ROBOT SYST, V11
Nishide S, 2012, IEEE T AUTON MENT DE, V4, P139, DOI 10.1109/TAMD.2011.2177660
Pourret O, 2008, BAYESIAN NETWORKS PR
PRATT J, 2006, IEEE RAS INT C HUM R, P200, DOI DOI 10.1109/ICHR.2006.321385
Sutton RS, 1998, REINFORCEMENT LEARNI
Toda K., 2007, HUMANOID ROBOTS HUMA, P228
NR 20
TC 1
Z9 2
U1 0
U2 13
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JUN
PY 2015
VL 68
BP 21
EP 35
DI 10.1016/j.robot.2015.02.005
PG 15
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA CF6LZ
UT WOS:000352669500003
DA 2018-01-22
ER

PT J
AU Sung, CH
Kagawa, T
Uno, Y
AF Sung, Changhyun
Kagawa, Takahiro
Uno, Yoji
TI Synthesis of humanoid whole-body motion with smooth transition
SO ADVANCED ROBOTICS
LA English
DT Article
DE whole-body motion; smooth transition; via-point representation; motion
synthesis; humanoid robot
ID OPTIMIZATION; ROBOT; MODEL; MANIPULATORS; BALANCE
AB In planning specific sequential motions of a humanoid, it is desired to transfer
from one motion to another motion without stopping between motions. The problem of
achieving continuous performance by connecting motions is referred to as motion
synthesis in the robotic fields. To solve this problem, it is important to assign
an appropriate transition motion that connects a prior motion to a posterior
motion. In this research, we propose a new method of generating smooth transition
motions under various constraints such as maintaining balance and collision
avoidance. Our algorithm comprises two sequential processes based on optimization
techniques. First, we assign transition instants of the motions with the
formulation of a minimax problem. Second, we connect the transition instants
smoothly to satisfy various constraints throughout the transition. Our method is
applied to synthesize the walking and kicking motions of the humanoid robot HOAP-3.
We confirm that the robot achieves a given task by connecting motions continuously.
Furthermore, the motion performance is improved with the planning of transition
motion using our method. In particular, the robot kicks a ball quickly under a more
stable condition for maintaining balance. These results show that the proposed
method achieves effective motion planning by solving the motion synthesis problem.
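A toy reading of the two-stage idea, picking transition instants by a minimax criterion and then connecting them smoothly, is sketched below; using the worst-joint posture mismatch as the minimax cost and a smoothstep blend are simplifications assumed here that ignore the balance and collision constraints handled in the paper:

```python
import numpy as np

def pick_transition(prior, posterior):
    """Choose indices (i, j) minimizing the worst-case (max over joints)
    mismatch between prior[i] and posterior[j]: a toy minimax criterion."""
    cost = np.abs(prior[:, None, :] - posterior[None, :, :]).max(axis=2)
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return i, j

def cubic_blend(q0, q1, n):
    """Smooth transition samples from posture q0 to q1 with zero end velocity."""
    s = np.linspace(0.0, 1.0, n)
    h = 3 * s**2 - 2 * s**3                       # smoothstep profile
    return q0[None, :] + h[:, None] * (q1 - q0)[None, :]

if __name__ == "__main__":
    t = np.linspace(0, 1, 50)[:, None]
    prior = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # walking-like
    posterior = np.hstack([0.5 + 0 * t, -0.5 + t])                     # kicking-like
    i, j = pick_transition(prior, posterior)
    transition = cubic_blend(prior[i], posterior[j], 10)
    synthesized = np.vstack([prior[:i + 1], transition, posterior[j:]])
    print(i, j, synthesized.shape)
```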
C1 [Sung, Changhyun; Kagawa, Takahiro; Uno, Yoji] Nagoya Univ, Grad Sch Engn,
Chikusa Ku, Nagoya, Aichi 4648603, Japan.
RP Sung, CH (reprint author), Nagoya Univ, Grad Sch Engn, Chikusa Ku, Furo Cho,
Nagoya, Aichi 4648603, Japan.
EM s_changhyun@nuem.nagoya-u.ac.jp
FU Tatematsu Foundation; [26289129]; [26280101]; [25820085]
FX This study was supported by Grants-in-Aid for Scientific Research (B)
[grant number 26289129], [grant number 26280101] and the Young
Researcher (B) [grant number 25820085] and the Tatematsu Foundation.
CR Bertsekas D. P., 1982, COMPUTER SCI APPL MA, V1
BOBROW JE, 1988, IEEE T ROBOTIC AUTOM, V4, P443, DOI 10.1109/56.811
Chettibi T, 2004, EUR J MECH A-SOLID, V23, P703, DOI
10.1016/j.euromechsol.2004.02.006
Divelbiss AW, 1997, IEEE T ROBOTIC AUTOM, V13, P443, DOI 10.1109/70.585905
Du DZ, 1995, MINIMAX APPL NONCONV
FLASH T, 1985, J NEUROSCI, V5, P1688
Garg DP, 2002, ENG APPL ARTIF INTEL, V15, P241, DOI 10.1016/S0952-1976(02)00067-
2
Guarino Lo Bianco C, 2002, INT J CONTROL, V75, P967, DOI
10.1080/00207170210156161
HETTICH R, 1993, SIAM REV, V35, P380, DOI 10.1137/1035089
Jain S, 2009, ACM T GRAPHIC, V28, DOI 10.1145/1477926.1477936
Kim JH, 2011, MECH MACH THEORY, V46, P438, DOI
10.1016/j.mechmachtheory.2010.11.019
Konno A, 2011, INT J ROBOT RES, V30, P1596, DOI 10.1177/0278364911405870
LEE SH, 2009, HUMANOID ROBOTS, P167
Lengagne S, 2013, INT J ROBOT RES, V32, P1104, DOI 10.1177/0278364913478990
Lengagne S, 2011, IEEE T ROBOT, V27, P1095, DOI 10.1109/TRO.2011.2162998
Murray RM, 1994, MATH INTRO ROBOTIC M
Pai YC, 1997, J BIOMECH, V30, P347, DOI 10.1016/S0021-9290(96)00165-0
Piao SH, 2011, INT J ADV ROBOT SYST, V8, P22
Piazzi A, 2000, IEEE T IND ELECTRON, V47, P140, DOI 10.1109/41.824136
PRINCE F, 1994, GAIT POSTURE, V2, P19, DOI 10.1016/0966-6362(94)90013-2
Ren C, 2010, COMPUT GRAPH FORUM, V29, P545, DOI 10.1111/j.1467-8659.2009.01624.x
Simmons G, 2005, J ROBOTIC SYST, V22, P677, DOI 10.1002/rob.20092
Sung CH, 2013, PALADYN J BEHAV ROBO, V4, P22, DOI [10.2478/pjbr-2013-0002, DOI
10.2478/PJBR-2013-0002]
Sung C, 2013, INT J ADV ROBOT SYST, V10, DOI 10.5772/56747
UNO Y, 1989, BIOL CYBERN, V61, P89
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Wang J, 2008, ACM T GRAPHIC, V27, DOI 10.1145/1330511.1330512
Wensing PM, 2013, IEEE INT C INT ROBOT, P5134, DOI 10.1109/IROS.2013.6697099
Zucker M, 2013, INT J ROBOT RES, V32, P1164, DOI 10.1177/0278364913488805
NR 29
TC 1
Z9 1
U1 0
U2 10
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD MAY 3
PY 2015
VL 29
IS 9
SI SI
BP 573
EP 585
DI 10.1080/01691864.2015.1024284
PG 13
WC Robotics
SC Robotics
GA CI9ZA
UT WOS:000355128800001
DA 2018-01-22
ER

PT J
AU Park, G
Konno, A
AF Park, Garam
Konno, Atsushi
TI Imitation learning framework based on principal component analysis
SO ADVANCED ROBOTICS
LA English
DT Article
DE principal component analysis; imitation learning; motion reconstruction;
evolutionary algorithm; humanoid robot
ID HUMANOID ROBOTS; PRIMITIVES; MOVEMENT; MODEL; OPTIMIZATION; TASK
AB In this paper, an imitation learning framework that includes an evolutionary
process based on principal component analysis (PCA) is presented. The framework
comprises offline and online processes. In the offline process, human
demonstrations are used to develop a motion database. The database covers the
workspace and includes robot properties. The evolved database has a clustered
structure for efficiency. In the online process, a robot can generate desired
motions using a real-time motion reconstruction method based on PCA. The
performance of this method is verified through two case studies. The proposed
framework is applied to the generation of reaching motions to an object on a table
and a shelf.
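A minimal numerical sketch of the PCA part of such a framework is shown below; the synthetic database, the number of components, and the reconstruction of a full motion from a few constrained trajectory entries by least squares are assumptions of this illustration, and the evolutionary clustering of the database is omitted:

```python
import numpy as np

# Offline: build a motion database (each row is one demonstration, flattened
# joint trajectory) and extract principal components. Data are synthetic.
rng = np.random.default_rng(1)
T, J = 40, 5                              # samples per motion, joints
basis = rng.standard_normal((3, T * J))   # 3 hidden "motion styles"
database = rng.standard_normal((30, 3)) @ basis          # 30 demonstrations
mean = database.mean(axis=0)
U, S, Vt = np.linalg.svd(database - mean, full_matrices=False)
components = Vt[:3]                       # keep 3 principal components

# Online: reconstruct a desired motion from a low-dimensional coordinate by
# least squares on a few constrained samples (e.g., desired hand waypoints).
def reconstruct(constrained_idx, constrained_val):
    """Find PC coordinates whose motion matches the given flattened-trajectory
    entries, then synthesize the full motion."""
    A = components[:, constrained_idx].T                  # (n_constraints, 3)
    b = constrained_val - mean[constrained_idx]
    coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeff @ components                      # full (T*J,) motion

target = database[0]
idx = np.arange(0, T * J, 37)             # a sparse subset of entries to match
motion = reconstruct(idx, target[idx])
print(float(np.linalg.norm(motion - target)))             # small residual
```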
C1 [Park, Garam; Konno, Atsushi] Hokkaido Univ, Grad Sch Informat Sci & Technol,
Sapporo, Hokkaido, Japan.
RP Konno, A (reprint author), Hokkaido Univ, Grad Sch Informat Sci & Technol,
Sapporo, Hokkaido, Japan.
EM konno@scc.ist.hokudai.ac.jp
CR Alissandrakis A, 2007, IEEE T SYST MAN CY B, V37, P299, DOI
10.1109/TSMCB.2006.886947
Billard A., 2008, SPRINGER HDB ROBOTIC, P1371
Calinon S., 2009, ROBOT PROGRAMMING DE
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Calinon S, 2010, IEEE ROBOT AUTOM MAG, V17, P44, DOI 10.1109/MRA.2010.936947
Calinon S, 2009, ADV ROBOTICS, V23, P2059, DOI 10.1163/016918609X12529294461843
Fod A, 2002, AUTON ROBOT, V12, P39, DOI 10.1023/A:1013254724861
Goslin D. A., 1969, HDB SOCIALIZATION TH
Guenter F, 2007, ADV ROBOTICS, V21, P1521
Hastie T., 2001, SPRINGER SERIES STAT
Hersch M, 2008, IEEE T ROBOT, V24, P1463, DOI 10.1109/TRO.2008.2006703
KAWATO M, 1987, BIOL CYBERN, V57, P169, DOI 10.1007/BF00364149
Khansari-Zadeh SM, 2011, IEEE T ROBOT, V27, P943, DOI 10.1109/TRO.2011.2159412
Kim C, 2006, INTEGR COMPUT-AID E, V13, P377
Lee SH, 2005, IEEE T ROBOT, V21, P657, DOI 10.1109/TRO.2004.842336
Mussa-Ivaldi FA, 2000, PHILOS T R SOC B, V355, P1755, DOI 10.1098/rstb.2000.0733
MUSSAIVALDI FA, 1994, P NATL ACAD SCI USA, V91, P7534, DOI
10.1073/pnas.91.16.7534
Nehaniv CL, 1999, INTERDISCIPLINARY AP, P136
Piaget J., 1962, PLAY DREAMS IMITATIO
RAIBERT MH, 1978, BIOL CYBERN, V29, P29, DOI 10.1007/BF00365233
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 2002, APPL INTELL, V17, P49, DOI 10.1023/A:1015727715131
Schmidt R. A, 2011, MOTOR CONTROL LEARNI
Ude A., 1993, Robotics and Autonomous Systems, V11, P113, DOI 10.1016/0921-
8890(93)90015-5
WILLIAMSON MM, 1996, 4 INT C SIM AD BEH, P124
NR 25
TC 0
Z9 0
U1 1
U2 6
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD MAY 3
PY 2015
VL 29
IS 9
SI SI
BP 639
EP 656
DI 10.1080/01691864.2015.1007084
PG 18
WC Robotics
SC Robotics
GA CI9ZF
UT WOS:000355129500002
DA 2018-01-22
ER

PT J
AU Shimizu, M
AF Shimizu, Masayuki
TI Analytical inverse kinematics for 5-DOF humanoid manipulator under
arbitrarily specified unconstrained orientation of end-effector
SO ROBOTICA
LA English
DT Article
DE Inverse kinematics; 5-DOF manipulator; Unconstrained orientation;
Singular configuration; Humanoid robots
ID ROBOT MANIPULATORS; GENERAL 6R; 5R; ARM
AB This paper proposes an analytical method of solving the inverse kinematic
problem for a humanoid manipulator with five degrees-of-freedom (DOF) under the
condition that the target orientation of the manipulator's end-effector is not
constrained around an axis fixed with respect to the environment. Since the number
of the joints is less than six, the inverse kinematic problem cannot be solved for
arbitrarily specified position and orientation of the end-effector. To cope with
the problem, a generalized unconstrained orientation is introduced in this paper.
In addition, this paper conducts a singularity analysis to identify all singular
conditions.
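The closed-form spatial derivation cannot be reproduced from the abstract; as a lower-dimensional analogue of leaving one orientation degree of freedom unconstrained so that an analytical solution exists, the planar 3R sketch below treats the approach angle phi as a free parameter and solves the remaining joints in closed form (the link lengths and the sweep over phi are invented for illustration):

```python
import numpy as np

def planar_3r_ik(x, y, phi, l1=0.3, l2=0.3, l3=0.1):
    """Closed-form IK for a planar 3R arm: end-effector at (x, y) with
    approach angle phi. Returns (q1, q2, q3) or None if unreachable."""
    wx, wy = x - l3 * np.cos(phi), y - l3 * np.sin(phi)   # wrist centre
    c2 = (wx**2 + wy**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        return None
    q2 = np.arccos(c2)                                    # elbow-down branch
    q1 = np.arctan2(wy, wx) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    q3 = phi - q1 - q2
    return q1, q2, q3

# The approach angle phi plays the role of the "unconstrained" orientation:
# sweep it and keep any value for which a closed-form solution exists.
target = (0.45, 0.25)
for phi in np.linspace(-np.pi, np.pi, 72, endpoint=False):
    q = planar_3r_ik(*target, phi)
    if q is not None:
        print("phi = %.2f rad -> q =" % phi, np.round(q, 3))
        break
```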
C1 Shizuoka Univ, Dept Mech Engn, Hamamatsu, Shizuoka, Japan.
RP Shimizu, M (reprint author), Shizuoka Univ, Dept Mech Engn, Hamamatsu, Shizuoka,
Japan.
EM tmsimiz@ipc.shizuoka.ac.jp
CR ANGELES J, 1986, INT J ROBOT RES, V4, P59, DOI 10.1177/027836498600400405
Bedrossian N. S., 1990, Proceedings 1990 IEEE International Conference on
Robotics and Automation (Cat. No.90CH2876-1), P818, DOI 10.1109/ROBOT.1990.126089
Chen IM, 2001, IEEE INT CONF ROBOT, P2395, DOI 10.1109/ROBOT.2001.932980
Gan JQ, 2005, ROBOTICA, V23, P123, DOI 10.1017/S0263574704000529
Goel P., 1988, P IEEE INT C ROB AUT, V3, P1688
Jazar R., 2010, THEORY APPL ROBOTICS
KOHLI D, 1993, J MECH DESIGN, V115, P922, DOI 10.1115/1.2919288
Lee HY, 1996, J MECH DESIGN, V118, P396, DOI 10.1115/1.2826899
MANOCHA D, 1994, IEEE T ROBOTIC AUTOM, V10, P648, DOI 10.1109/70.326569
MANSEUR R, 1992, MECH MACH THEORY, V27, P575, DOI 10.1016/0094-114X(92)90046-K
MANSEUR R, 1992, MECH MACH THEORY, V27, P587, DOI 10.1016/0094-114X(92)90047-L
The Mathematical Society of Japan, 1993, ENCY DICT MATH, V2nd
McCarthy J. M., 1990, INTRO THEORETICAL KI
RAGHAVAN M, 1993, J MECH DESIGN, V115, P502, DOI 10.1115/1.2919218
Ramirez-Torres J. G., 2010, P 2010 EL ROB AUT ME, P372
Sato F, 2011, IEEE INT C INT ROBOT, P3179, DOI 10.1109/IROS.2011.6048182
Shimizu M, 2008, IEEE T ROBOT, V24, P1131, DOI 10.1109/TRO.2008.2003266
SUGIMOTO K, 1983, J MECH TRANSM-T ASME, V105, P23, DOI 10.1115/1.3267339
Tsai L.-W., 1985, Transactions of the ASME. Journal of Mechanisms,
Transmissions, and Automation in Design, V107, P189
WAMPLER C, 1991, MECH MACH THEORY, V26, P91, DOI 10.1016/0094-114X(91)90024-X
Wang HB, 1998, MECH MACH THEORY, V33, P895, DOI 10.1016/S0094-114X(97)00067-0
Wang H., 2007, P 2007 IEEE INT C NE, P507
Xin SZ, 2007, J MECH DESIGN, V129, P793, DOI 10.1115/1.2735636
Xu D, 2005, INT J AUTOMATION COM, V2, P114
ZHOU YB, 1995, MECH MACH THEORY, V30, P421, DOI 10.1016/0094-114X(94)00030-O
Zhou YB, 1998, MECH MACH THEORY, V33, P175, DOI 10.1016/S0094-114X(97)00018-9
NR 26
TC 1
Z9 1
U1 3
U2 13
PU CAMBRIDGE UNIV PRESS
PI NEW YORK
PA 32 AVENUE OF THE AMERICAS, NEW YORK, NY 10013-2473 USA
SN 0263-5747
EI 1469-8668
J9 ROBOTICA
JI Robotica
PD MAY
PY 2015
VL 33
IS 4
BP 747
EP 767
DI 10.1017/S0263574714000538
PG 21
WC Robotics
SC Robotics
GA CG9SO
UT WOS:000353660600003
OA gold
DA 2018-01-22
ER

PT J
AU Shimizu, T
Saegusa, R
Ikemoto, S
Ishiguro, H
Metta, G
AF Shimizu, Toshihiko
Saegusa, Ryo
Ikemoto, Shuhei
Ishiguro, Hiroshi
Metta, Giorgio
TI Robust Sensorimotor Representation to Physical Interaction Changes in
Humanoid Motion Learning
SO IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
LA English
DT Article
DE Change detection; dimensionality reduction; learning from demonstration
(LfD); physical human-robot interaction
ID ROBOT; ADAPTATION; WALKING
AB This paper proposes a learning from demonstration system based on a motion
feature, called phase transfer sequence. The system aims to synthesize the
knowledge on humanoid whole body motions learned during teacher-supported
interactions, and apply this knowledge during different physical interactions
between a robot and its surroundings. The phase transfer sequence represents the
temporal order of the changing points in multiple time sequences. It encodes the
dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude
derived from interaction changes. The phase transfer sequence was evaluated in
reinforcement learning of sitting-up and walking motions conducted by a real
humanoid robot and a compatible simulator. In both tasks, the robotic motions were
less dependent on physical interactions when learned with the proposed feature than
with conventional similarity measurements. The phase transfer sequence also enhanced the
convergence speed of motion learning. Our proposed feature is original primarily
because it absorbs the gaps caused by changes of the originally acquired physical
interactions, thereby enhancing the learning speed in subsequent interactions.
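A rough sketch of the feature is given below, where the "changing points" of each channel are taken to be its local extrema and two phase transfer sequences are compared with a longest-common-subsequence length; both choices are simplifications assumed for this example rather than the paper's exact definitions:

```python
import numpy as np

def phase_transfer_sequence(signals):
    """Return the channel indices of 'changing points' (local extrema of each
    channel) sorted by time: a rough reading of the phase-transfer idea."""
    events = []
    for ch, x in enumerate(signals):
        d = np.diff(x)
        turns = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
        events += [(t, ch) for t in turns]
    return [ch for _, ch in sorted(events)]

def lcs_len(a, b):
    """Longest-common-subsequence length, used here as a timing-tolerant
    similarity between two phase transfer sequences."""
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i, ai in enumerate(a, 1):
        for j, bj in enumerate(b, 1):
            dp[i, j] = dp[i-1, j-1] + 1 if ai == bj else max(dp[i-1, j], dp[i, j-1])
    return int(dp[-1, -1])

t = np.linspace(0, 1, 200)
slow = [np.sin(2*np.pi*t), np.sin(2*np.pi*t - 0.7)]          # two joints
fast = [np.sin(2*np.pi*1.3*t), np.sin(2*np.pi*1.3*t - 0.7)]  # same motion, re-timed
a, b = phase_transfer_sequence(slow), phase_transfer_sequence(fast)
print(a, b, lcs_len(a, b))   # the event order is largely preserved despite re-timing
```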
C1 [Shimizu, Toshihiko] Kobe City Coll Technol, Dept Mech Engn, Kobe, Hyogo
6512194, Japan.
[Saegusa, Ryo] Toyohashi Univ Technol, Ctr Human Robot Symbiosis Res, Toyohashi,
Aichi 4418580, Japan.
[Ikemoto, Shuhei] Osaka Univ, Inst Acad Initiat, Toyonaka, Osaka 5608531, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn Sci, Dept Syst Innovat, Toyonaka,
Osaka 5608531, Japan.
[Metta, Giorgio] Ist Italiano Tecnol, Dept Robot Brain & Cognit Sci, I-16163
Genoa, Italy.
RP Shimizu, T (reprint author), Kobe City Coll Technol, Dept Mech Engn, Kobe, Hyogo
6512194, Japan.
EM ts8@kobe-kosen.ac.jp; ryos@ieee.org; ikemoto@arl.sys.es.osaka-u.ac.jp;
ishiguro@sys.es.osaka-u.ac.jp; giorgio.metta@iit.it
FU Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano
di Tecnologia, Genoa, Italy; Japan Society for the Promotion of Science
(JSPS) [2457062]; JSPS; Advanced Research Network; European Union (EU);
Cooperative Human Robot Interaction Systems [FP7 215805]; EU [FP7 97459]
FX Manuscript received May 14, 2012; revised June 26, 2013 and December 7,
2013; accepted June 23, 2014. Date of publication July 10, 2014; date of
current version April 15, 2015. This work was supported in part by the
Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano
di Tecnologia, Genoa, Italy, in part by the Grants-in-Aid for Scientific
Research, Japan Society for the Promotion of Science (JSPS), under Grant
2457062, in part by the JSPS Core-to-Core Program A, in part by the
Advanced Research Network, in part by the European Union (EU) FP7
Project, in part by the Cooperative Human Robot Interaction Systems
under Grant FP7 215805, and in part by the EU FP7 Project Xperience
entitled the Robots Bootstrapped through Learning and Experience under
Grant FP7 97459.
CR Bergroth L, 2000, SPIRE 2000: SEVENTH INTERNATIONAL SYMPOSIUM ON STRING
PROCESSING AND INFORMATION RETRIEVAL - PROCEEDINGS, P39, DOI
10.1109/SPIRE.2000.878178
Buchli J, 2011, INT J ROBOT RES, V30, P820, DOI 10.1177/0278364911402527
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Hosoda K., 2010, P IEEE 9 INT C DEV L, P317
Huang Q, 2005, IEEE T ROBOT, V21, P977, DOI 10.1109/TRO.2005.851381
Ide T, 2005, SIAM PROC S, P571
Ijspeert A. J., 2002, ADV NEURAL INFORM PR, V15, P1523
Ikemoto S, 2012, IEEE ROBOT AUTOM MAG, V19, P24, DOI 10.1109/MRA.2011.2181676
Kruger V, 2010, IEEE ROBOT AUTOM MAG, V17, P30, DOI 10.1109/MRA.2010.936961
Kuniyoshi Y, 2004, ROBOT AUTON SYST, V48, P189, DOI 10.1016/j.robot.2004.07.004
Kuwayama K, 2004, PROCEEDINGS OF THE 2004 INTERNATIONAL SYMPOSIUM ON MICRO-
NANOMECHATRONICS AND HUMAN SCIENCE, P157
Metta G., 2006, International Journal of Advanced Robotic Systems, V3, P43
Metta G., 2008, P 8 WORKSH PERF METR, P50, DOI DOI 10.1145/1774674.1774683
Mohammad Y, 2009, LECT NOTES ARTIF INT, V5579, P123, DOI 10.1007/978-3-642-
02568-6_13
Morimoto J, 2001, ROBOT AUTON SYST, V36, P37, DOI 10.1016/S0921-8890(01)00113-0
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nakaoka S, 2003, IEEE INT CONF ROBOT, P3905
Nicolescu M, 2003, P 2 INT JOINT C AUT, P241
Park CH, 2010, IEEE INT CONF ROBOT, P229, DOI 10.1109/ROBOT.2010.5509160
Ulrich DA, 2001, PEDIATRICS, V108, part. no., DOI 10.1542/peds.108.5.e84
Vukobratovic M, 2006, INT J HUM ROBOT, V3, P153, DOI 10.1142/S0219843606000710
Wang QG, 2011, IEEE T KNOWL DATA EN, V23, P321, DOI 10.1109/TKDE.2010.123
WATKINS CJCH, 1992, MACH LEARN, V8, P279, DOI 10.1023/A:1022676722315
ZELAZO PR, 1972, SCIENCE, V176, P314, DOI 10.1126/science.176.4032.314
NR 24
TC 2
Z9 2
U1 1
U2 11
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2162-237X
EI 2162-2388
J9 IEEE T NEUR NET LEAR
JI IEEE Trans. Neural Netw. Learn. Syst.
PD MAY
PY 2015
VL 26
IS 5
BP 1035
EP 1047
DI 10.1109/TNNLS.2014.2333092
PG 13
WC Computer Science, Artificial Intelligence; Computer Science, Hardware &
Architecture; Computer Science, Theory & Methods; Engineering,
Electrical & Electronic
SC Computer Science; Engineering
GA CG2RM
UT WOS:000353122400011
PM 25029488
DA 2018-01-22
ER

PT J
AU Chen, XP
Huang, Q
Wan, WW
Zhou, ML
Yu, ZG
Zhang, WM
Yasin, A
Bao, H
Meng, F
AF Chen, Xiaopeng
Huang, Qiang
Wan, Weiwei
Zhou, Mingliang
Yu, Zhangguo
Zhang, Weimin
Yasin, Awais
Bao, Han
Meng, Fei
TI A Robust Vision Module for Humanoid Robotic Ping-Pong Game
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Robotic ping-pong; precise and high-speed vision module; non-linear
rebound model; trajectory prediction; humanoid robot
ID TABLE-TENNIS; SPINNING BALL; PLAYING ROBOT; TRACKING; PLAYER; PREDICTION
AB Developing a vision module for a humanoid ping-pong game is challenging due to
the spin and the non-linear rebound of the ping-pong ball. In this paper, we
present a robust predictive vision module to overcome these problems. The hardware
of the vision module is composed of two stereo camera pairs with each pair
detecting the 3D positions of the ball on one half of the ping-pong table. The
software of the vision module divides the trajectory of the ball into four parts
and uses the perceived trajectory in the first part to predict the other parts. In
particular, the software of the vision module uses an aerodynamic model to predict
the trajectories of the ball in the air and uses a novel non-linear rebound model
to predict the change of the ball's motion during rebound. The average prediction
error of our vision module at the ball returning point is less than 50 mm, a value
small enough for standard-sized ping-pong rackets. Its average processing speed is
120 fps. The precision and efficiency of our vision module enable two humanoid
robots to play ping-pong continuously for more than 200 rounds.
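The abstract gives no model details, so the sketch below is only a generic stand-in: a forward simulation with gravity, quadratic drag, a Magnus term, and a crude rebound map, all with invented coefficients, rather than the paper's fitted aerodynamic and non-linear rebound models:

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])
KD, KM = 0.10, 0.02          # drag and Magnus coefficients (illustrative values)
TABLE_Z, RESTITUTION, MU = 0.0, 0.88, 0.20

def step(p, v, w, dt=0.002):
    """One Euler step of a simple aerodynamic model: gravity, quadratic drag,
    and a Magnus force proportional to spin x velocity."""
    a = G - KD * np.linalg.norm(v) * v + KM * np.cross(w, v)
    return p + v * dt, v + a * dt

def rebound(v):
    """Toy rebound map: restitution on the normal component, friction-like
    attenuation on the tangential one (stand-in for the non-linear model)."""
    vx, vy, vz = v
    return np.array([(1 - MU) * vx, (1 - MU) * vy, -RESTITUTION * vz])

def predict(p, v, w, x_return=2.0):
    """Integrate until the ball reaches the returning plane x = x_return."""
    while p[0] < x_return:
        p, v = step(p, v, w)
        if p[2] < TABLE_Z and v[2] < 0:          # table contact
            v = rebound(v)
            p[2] = TABLE_Z
    return p

p0 = np.array([0.0, 0.0, 0.3])                   # launch position [m]
v0 = np.array([4.0, 0.2, 1.0])                   # launch velocity [m/s]
w0 = np.array([0.0, 30.0, 0.0])                  # topspin [rad/s]
print(predict(p0, v0, w0))                       # predicted point at the return plane
```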
C1 [Chen, Xiaopeng; Huang, Qiang; Zhou, Mingliang; Yu, Zhangguo; Zhang, Weimin;
Meng, Fei] Beijing Inst Technol, Beijing 100081, Peoples R China.
[Wan, Weiwei] Univ Tokyo, Tokyo, Japan.
[Yasin, Awais] Univ Engn & Technol, Lahore, Pakistan.
[Bao, Han] Natl Res Ctr Intelligence Equipment Agr, Beijing, Peoples R China.
RP Chen, XP (reprint author), Beijing Inst Technol, Beijing 100081, Peoples R
China.
EM xpchen@bit.edu.cn
OI YU, Zhangguo/0000-0003-0041-8100
FU National High Technology Research and Development Program (863 Project)
[2008AA042601, 2014AA041602, 2015AA042305]; fundamental research fund of
Beijing Institute of Technology [20140242014]; 111 Project [B08043];
China Scholarship Council [2011307350]
FX This work was supported by the National High Technology Research and
Development Program (863 Project) under Grant 2008AA042601,
2014AA041602, 2015AA042305, the fundamental research fund of Beijing
Institute of Technology under Grant 20140242014, the "111 Project" under
Grant B08043, the China Scholarship Council (File No. 2011307350).
CR Acosta L, 2003, IEEE ROBOT AUTOM MAG, V10, P44, DOI 10.1109/MRA.2003.1256297
Adair RK, 1990, AM J PHYS, V58, P1117
ANDERSSON RL, 1989, IEEE T ROBOTIC AUTOM, V5, P728, DOI 10.1109/70.88095
Andersson R. L., 1990, IEEE, V1, P165
Angel L, 2008, LECT NOTES CONTR INF, V370, P229
Billingsley J, 1983, ROBOT RING PONG PRAC
Chen X., 2010, ROB BIOM ROBIO 2010, P603
Cross R, 2005, AM J PHYS, V73, P914, DOI 10.1119/1.2008299
Cross R, 2002, AM J PHYS, V70, P482, DOI 10.1119/1.1450571
FASSLER H, 1990, ROBOTERSYSTEME, V6, P161
Griffiths I, 2005, MEAS SCI TECHNOL, V16, P2056, DOI 10.1088/0957-0233/16/10/022
Guo D, 1996, ZHEJIANG SPORTS SCI, V18, P43
Hashimoto H, 1987, ROB IECON87 C, P608
Huang YL, 2011, IEEE INT C INT ROBOT, P3434, DOI 10.1109/IROS.2011.6048803
Jahne B., 1997, DIGITAL IMAGE PROCES
KNIGHT J, 1986, MICROPROCESS MICROSY, V10, P332, DOI 10.1016/0141-9331(86)90273-
5
Lai C. H., 2010, Journal of Computer Sciences, V6, P946, DOI
10.3844/jcssp.2010.946.954
Lampert CH, 2012, J REAL-TIME IMAGE PR, V7, P31, DOI 10.1007/s11554-010-0168-3
Li H, 2012, CONTR AUT ROB VIS 20, P2134
Liu H, 2013, ROB BIOM ROBIO 2013, P2430
Matsushima M, 2005, IEEE T ROBOT, V21, P767, DOI 10.1109/TRO.2005.844689
Matsushima M, 2003, IEEE SYS MAN CYBERN, P2962
Meng G, 1996, J NE HEAVY MACHINERY, V20, P80
Modi KP, 2005, IEEE SYS MAN CYBERN, P1831
Mulling K, 2011, ADAPT BEHAV, V19, P359, DOI 10.1177/1059712311419378
NAGHDY F, 1994, TRANSPUT OCCAM ENG S, V37, P311
Nakashima A, 2010, P AMER CONTR CONF, P1410
ORAZIO TD, 2004, PATTERN RECOGN, V37, P393
Pan H, 1995, ZHEJIANG SPORTS SCI, V17, P16
Peng B, 2007, J JIANGNAN U NATURAL, V6, P433
Pingali GS, 1998, PROC CVPR IEEE, P260, DOI 10.1109/CVPR.1998.698618
RUI Q, 1998, ROBOT, V20, P373
Rusdorf S, 2007, IEEE T VIS COMPUT GR, V13, P15, DOI 10.1109/TVCG.2007.18
Silva L. A., 2005, IROS 2005, P2134
Sun Y, 2011, HUM ROB HUM 2011 11, P19
Tian JD, 2011, J INTELL ROBOT SYST, V64, P543, DOI 10.1007/s10846-011-9554-8
Tong XF, 2004, INT C PATT RECOG, P795, DOI 10.1109/ICPR.2004.1333892
Trasloheros A, 2014, INT J ADV ROBOT SYST, V11, DOI 10.5772/58526
Yang P, 2010, I C CONT AUTOMAT ROB, P1731, DOI 10.1109/ICARCV.2010.5707252
Yu Z., 2014, ADV MECH ENG, P1
Zhang YH, 2011, J ZHEJIANG U-SCI C, V12, P110, DOI 10.1631/jzus.C0910528
Zhang Z, 2010, ROB BIOM ROBIO 2010, P376
Zhang ZT, 2008, 2008 7TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION,
VOLS 1-23, P4881, DOI 10.1109/WCICA.2008.4593715
Zhang ZT, 2010, IEEE T INSTRUM MEAS, V59, P3195, DOI 10.1109/TIM.2010.2047128
NR 44
TC 0
Z9 0
U1 0
U2 23
PU INTECH EUROPE
PI RIJEKA
PA JANEZA TRDINE 9, RIJEKA, 51000, CROATIA
SN 1729-8806
EI 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD APR 14
PY 2015
VL 12
AR 35
DI 10.5772/60406
PG 14
WC Robotics
SC Robotics
GA CF9JW
UT WOS:000352882000001
OA gold
DA 2018-01-22
ER

PT J
AU Matsumura, R
Shiomi, M
Miyashita, T
Ishiguro, H
Hagita, N
AF Matsumura, Reo
Shiomi, Masahiro
Miyashita, Takahiro
Ishiguro, Hiroshi
Hagita, Norihiro
TI What kind of floor am I standing on? Floor surface identification by a
small humanoid robot through full-body motions
SO ADVANCED ROBOTICS
LA English
DT Article
DE floor identification; small humanoid robot
ID TERRAIN CLASSIFICATION
AB This study addresses a floor identification method for small humanoid robots
that work in daily environments such as homes. The fundamental difficulty lies in
understanding the physical properties of floors. To identify floors with small
humanoid robots, we used inertial sensors that can be easily installed on such
robots and dynamically selected a full-body motion that physically senses the
floor, enabling accurate identification. We collected a training dataset over 10
different kinds of common floors in home environments. We
achieved 85.7% precision with our proposed method. We also demonstrate that our
robot could appropriately change its locomotion behaviours depending on the floor
identification results.
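As an illustration of the classification step only (the record does not specify the classifier; scikit-learn and the synthetic vibration features below are assumptions of this sketch), floor types can be separated from inertial-sensor statistics gathered while the robot executes its floor-sensing motion:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier   # assumes scikit-learn is installed

rng = np.random.default_rng(2)

def imu_features(vibration_scale, n=40):
    """Synthetic stand-in for features extracted from the inertial sensor while
    the robot performs a floor-sensing full-body motion (e.g., variance and
    peak of the accelerometer signal)."""
    acc = rng.normal(0.0, vibration_scale, size=(n, 200))
    return np.column_stack([acc.var(axis=1), np.abs(acc).max(axis=1)])

# Toy training set: three floor types with different vibration signatures.
floors = {"carpet": 0.2, "wood": 0.6, "tile": 1.0}
X = np.vstack([imu_features(s) for s in floors.values()])
y = np.repeat(list(floors), 40)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict(imu_features(0.6, n=3)))        # expected: mostly 'wood'
```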
C1 [Matsumura, Reo; Shiomi, Masahiro; Miyashita, Takahiro; Ishiguro, Hiroshi;
Hagita, Norihiro] ATR IRC, Kyoto, Japan.
[Matsumura, Reo] Univ Tokyo, Adv Sci & Technol Res Ctr, Tokyo, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn Sci, Osaka, Japan.
RP Matsumura, R (reprint author), ATR IRC, Kyoto, Japan.
EM reo.matsumura@gmail.com
OI SHIOMI, Masahiro/0000-0003-4338-801X
FU Strategic Information and Communications R&D Promotion Programme
(SCOPE), Ministry of Internal Affairs and Communications [132107010]
FX This research was supported by the Strategic Information and
Communications R&D Promotion Programme (SCOPE), Ministry of Internal
Affairs and Communications [grant number 132107010].
CR Cooney MD, 2011, HUMANOIDS, V2011, P112
Coyle E, 2010, IEEE INT CONF ROBOT, P4417, DOI 10.1109/ROBOT.2010.5509870
DuPont EM, 2008, AUTON ROBOT, V24, P337, DOI 10.1007/s10514-007-9077-0
Fend M, 2005, LECT NOTES ARTIF INT, V3630, P302
Fend M., 2003, IEEE RSJ INT C INT R, V2, P1044
Halatci I, 2008, ROBOTICA, V26, P767, DOI 10.1017/S0263574708004360
Kendon A., 1990, CONDUCTING INTERACTI
Nohara Y., 2010, IEEE RSJ INT C INT R, P1030
Quinlan J. R., 1993, PROGRAMS MACHINE LEA
Vapnik V.N., 2000, NATURE STAT LEARNING, P988, DOI DOI 10.1007/978-1-4757-2440-0
Wada K, 2007, IEEE T ROBOT, V23, P972, DOI 10.1109/TRO.2007.906261
Wang YT, 2010, INT J ADV ROBOT SYST, V7, P133
NR 12
TC 1
Z9 1
U1 1
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD APR 3
PY 2015
VL 29
IS 7
SI SI
BP 469
EP 480
DI 10.1080/01691864.2014.996601
PG 12
WC Robotics
SC Robotics
GA CG7UO
UT WOS:000353511200003
DA 2018-01-22
ER

PT J
AU Takano, W
Nakamura, Y
AF Takano, Wataru
Nakamura, Yoshihiko
TI Symbolically structured database for human whole body motions based on
association between motion symbols and motion words
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Motion database; Motion recognition/generation; Hidden Markov Model;
Motion symbol; Motion words
ID IMITATION; ROBOT; RECOGNITION; MODELS
AB Motion capture systems have been commonly used to enable humanoid robots or CG
characters to perform human-like motions. However, prerecorded motion capture data
cannot be reused efficiently because picking a specific motion from a large
database and modifying the motion data to fit the desired motion patterns are
difficult tasks. We have developed an imitative learning framework based on the
symbolization of motion patterns using Hidden Markov Models (HMMs), where each HMM
(hereafter referred to as "motion symbol") abstracts the dynamics of a motion
pattern and allows motion recognition and generation. This paper describes a
symbolically structured motion database that consists of original motion data,
motion symbols, and motion words. Each motion dataset is labeled with motion symbols
and motion words. Moreover, a network is formed between the two layers of motion
symbols and motion words based on their probabilistic association. This network makes
it possible to associate motion symbols with motion words and to search for motion
datasets using motion symbols. The motion symbols can also generate motion data.
Therefore, the developed framework can provide the desired motion data when only
the motion words are input into the database. (C) 2015 The Authors. Published by
Elsevier B.V.
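The word-to-symbol retrieval over the bipartite association network can be sketched in a few lines; the symbols, words, and co-occurrence counts below are invented, and the HMM training and motion generation behind each symbol are omitted:

```python
import numpy as np

SYMBOLS = ["walk", "wave", "sit"]          # motion symbols (one trained HMM each)
WORDS = ["walk", "stroll", "greet", "rest"]

# Co-occurrence counts between motion symbols and the words annotating the same
# motion data; the numbers are invented for illustration.
counts = np.array([[9, 6, 0, 0],           # walk symbol
                   [0, 0, 8, 1],           # wave symbol
                   [0, 1, 0, 7]])          # sit  symbol

# Probabilistic association P(symbol | word) over the bipartite network.
p_symbol_given_word = counts / counts.sum(axis=0, keepdims=True)

def retrieve_symbol(word):
    """Search the database: return the motion symbol most strongly associated
    with a query word; the symbol's HMM would then generate the motion data."""
    j = WORDS.index(word)
    return SYMBOLS[int(np.argmax(p_symbol_given_word[:, j]))]

for w in WORDS:
    print(w, "->", retrieve_symbol(w))
```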
C1 [Takano, Wataru; Nakamura, Yoshihiko] Univ Tokyo, Mechanoinformat, Bunkyo Ku,
Tokyo 1138656, Japan.
RP Takano, W (reprint author), Univ Tokyo, Mechanoinformat, Bunkyo Ku, 7-3-1 Hongo,
Tokyo 1138656, Japan.
EM takano@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU Ministry of Education, Culture, Sports, Science and Technology
[19024024]; Japan Society for the Promotion of Science [26700021]
FX This research was supported by "The i-Explosion Project" of the
Grant-in-Aid for Scientific Research on Priority Areas (19024024), the
Ministry of Education, Culture, Sports, Science and Technology, and
Grant-in-Aid for Young Scientists (A) (26700021), Japan Society for the
Promotion of Science. We would like to thank Atsushi Shiozawa, Toshio
Murata, and Sumio Ito of Bandai Namco Corporation for letting us use
their motion capture dataset.
CR Aarno D, 2008, ROBOT AUTON SYST, V56, P692, DOI 10.1016/j.robot.2007.11.005
Arie H, 2012, ROBOT AUTON SYST, V60, P729, DOI 10.1016/j.robot.2011.11.005
Arikan O, 2003, ACM T GRAPHIC, V22, P402, DOI 10.1145/882262.882284
Arikan O, 2002, ACM T GRAPHIC, V21, P483
Asfour T., 2006, P IEEE RAS INT C HUM, P40
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Bowden R., 2000, IEEE WORKSH HUM MOD, P10
Brown P. F., 1993, Computational Linguistics, V19, P263
HART PE, 1968, IEEE T SYST SCI CYB, VSSC4, P100, DOI 10.1109/TSSC.1968.300136
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Ijspeert A. J., 2003, NEURAL INFORM PROCES, V15, P1547
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Inamura T., 2003, P IEEE RAS INT C HUM, p1b
Inamura T, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P334, DOI
10.1109/IROS.2008.4651194
KOVAR L, 2002, ACM T GRAPH, V21
Lee JH, 2002, ACM T GRAPHIC, V21, P491
Li Y, 2002, ACM T GRAPHIC, V21, P465
Nakaoka S, 2003, IEEE INT CONF ROBOT, P3905
Ogata T., 2007, P IEEE RSJ INT C INT
Okada M, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1410, DOI 10.1109/ROBOT.2002.1014741
Pollard N S, 2002, P IEEE INT C ROB AUT, V2, P1390
Pullen K, 2002, ACM T GRAPHIC, V21, P501
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Riley M, 2003, IEEE INT CONF ROBOT, P2368, DOI 10.1109/ROBOT.2003.1241947
Rose C, 1998, IEEE COMPUT GRAPH, V18, P32, DOI 10.1109/38.708559
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Sidenbladh H, 2002, LECT NOTES COMPUT SC, V2350, P784
Sugita Y, 2005, ADAPT BEHAV, V13, P33, DOI 10.1177/105971230501300102
TAKANE Y, 1977, PSYCHOMETRIKA, V42, P7, DOI 10.1007/BF02293745
TAKANO W, 2006, P IEEE RAS INT C HUM, P425
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Witkin A., 1995, P 22 ANN C COMP GRAP, P105, DOI DOI 10.1145/218380.218422
NR 32
TC 3
Z9 3
U1 0
U2 4
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD APR
PY 2015
VL 66
BP 75
EP 85
DI 10.1016/j.robot.2014.12.008
PG 11
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA CD2TI
UT WOS:000350931500008
OA gold
DA 2018-01-22
ER

PT J
AU Van Heerden, K
Kawamura, A
AF Van Heerden, K.
Kawamura, A.
TI Humanoid robot trajectory generator for safe haptic arm control via arm
wrench limitation, trajectory compensation and reactive stepping
SO ADVANCED ROBOTICS
LA English
DT Article
DE trajectory generation; bilateral control; humanoid robot
ID WALKING PATTERN GENERATION; PREVIEW CONTROL; MANIPULATION; TERRAIN
AB An optimal trajectory planner for humanoid robots with a haptically controlled
arm is proposed. This trajectory planner simultaneously plans time-varying arm
wrench constraints, footstep positions, and a Center-of-Mass (COM) trajectory.
This system has the useful characteristic of ensuring that the robot does not fall
when the human operator commands an excessively large arm wrench, and that in the
long term the resultant arm wrench approaches the desired arm wrench via a
combination of changes in the foot positions and the COM position. This is
accomplished via a Model Predictive Controller with a Quadratic Programming-based
trajectory optimizer that calculates a time series of arm wrench constraints, foot
positions, and a COM trajectory that minimizes the tracking error of the
desired foot positions and the desired arm wrench. A simulation and experiments
demonstrate the validity of the proposed algorithm.
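As a heavily simplified, one-dimensional sketch of a QP-style receding-horizon plan (a cart-table COM model with a ZMP-type inequality standing in for the wrench and support constraints; the horizon, all numbers, and the SciPy SLSQP solver are assumptions of this example, not the paper's coupled arm-wrench and footstep formulation):

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 15                      # control period [s] and horizon length
z_c, g = 0.8, 9.81                   # COM height and gravity (cart-table model)
zmp_max = 0.05                       # support-polygon half-width [m]
c0, cd0 = 0.0, 0.0                   # initial COM position and velocity
c_ref = 0.10                         # COM setpoint induced by the arm task

def rollout(u):
    """Forward-simulate COM position and acceleration for an input sequence."""
    c, cd, traj = c0, cd0, []
    for a in u:
        cd += a * dt
        c += cd * dt
        traj.append((c, a))
    return np.array(traj)

def cost(u):
    """Quadratic tracking cost plus a small input penalty (a QP in u)."""
    traj = rollout(u)
    return np.sum((traj[:, 0] - c_ref) ** 2) + 1e-3 * np.sum(np.asarray(u) ** 2)

def zmp_margins(u):
    """ZMP = c - (z_c/g) * cddot must stay inside the support polygon."""
    traj = rollout(u)
    zmp = traj[:, 0] - (z_c / g) * traj[:, 1]
    return np.concatenate([zmp_max - zmp, zmp + zmp_max])   # all must be >= 0

res = minimize(cost, np.zeros(N), method="SLSQP",
               constraints=[{"type": "ineq", "fun": zmp_margins}])
print("planned COM accelerations:", np.round(res.x, 3))
```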
C1 [Van Heerden, K.] Ritsumeikan Univ, Robot Dept, Kusatsu, Japan.
[Kawamura, A.] Yokohama Natl Univ, Dept Phys Elect Comp Engn, Yokohama, Kanagawa
240, Japan.
RP Van Heerden, K (reprint author), Ritsumeikan Univ, Robot Dept, Kusatsu, Japan.
EM kirillvh@ieee.org
RI kawamura, atsuo/L-8158-2014
OI kawamura, atsuo/0000-0002-3085-2314
CR Bouyarmane K, 2012, ADV ROBOTICS, V26, P1099, DOI 10.1080/01691864.2012.686345
Buschmann T, 2007, IEEE-RAS INT C HUMAN, P1, DOI 10.1109/ICHR.2007.4813841
DIEDAM H, 2008, IEEE RSJ INT C INT R, P1121
EVRARD P, 2009, IROS, P5635
HARADA K, 2003, P IEEE RSJ INT C INT, P73
Harada K, 2007, IEEE-ASME T MECH, V12, P53, DOI 10.1109/TMECH.2006.886254
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirukawa H, 2007, IEEE INT CONF ROBOT, P2181, DOI 10.1109/ROBOT.2007.363644
HWANG Y, 2003, P IEEE RSJ INT C INT, V2, P1901
Hyon SH, 2009, IEEE T ROBOT, V25, P171, DOI 10.1109/TRO.2008.2006870
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S., 2010, IEEE RSJ INT C INT R, P4489
KAJITA S, 1991, IEEE INT C ROB AUT S, P1405
Kanehiro F, 2014, ADV ROBOTICS, V28, P433, DOI 10.1080/01691864.2013.876931
LYNCH KM, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P2269, DOI 10.1109/ROBOT.1992.219921
Matsumoto Y, 2003, P IEEE INT C IND TEC, V2, P802
Morisawa M, 2006, IEEE-RAS INT C HUMAN, P581, DOI 10.1109/ICHR.2006.321332
Nishiwaki K, 2006, IEEE INT CONF ROBOT, P2667, DOI 10.1109/ROBOT.2006.1642104
Park IW, 2008, ADV ROBOTICS, V22, P159, DOI 10.1163/156855308X292538
Stasse O, 2009, IEEE RAS INT C HUM R, P284
Stephens B. J., 2011, THESIS CARNEGIE MELL
Tsuji T, 2004, IEEE C IND TECHN, P96
Wieber P.-B., 2006, IEEE RAS INT C HUM R, P137, DOI [DOI
10.1109/ICHR.2006.321375, 10.1109/ICHR.2006.321375]
NR 23
TC 1
Z9 1
U1 2
U2 10
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD MAR 19
PY 2015
VL 29
IS 6
BP 413
EP 429
DI 10.1080/01691864.2014.987166
PG 17
WC Robotics
SC Robotics
GA CD9WL
UT WOS:000351450800001
DA 2018-01-22
ER

PT J
AU Morimoto, J
Kawato, M
AF Morimoto, Jun
Kawato, Mitsuo
TI Creating the brain and interacting with the brain: an integrated
approach to understanding the brain
SO JOURNAL OF THE ROYAL SOCIETY INTERFACE
LA English
DT Review
DE computational neuroscience; humanoid robot; exoskeleton robot;
neurofeedback; motor learning
ID COUPLED OSCILLATOR MODEL; POLICY GRADIENT-METHOD; BIPED LOCOMOTION;
COMPUTER INTERFACE; HUMANOID ROBOTS; DEVELOPMENTAL ROBOTICS; SINUSOIDAL
PATTERNS; EXOSKELETON ROBOT; CAUDATE-NUCLEUS; HUMAN MOVEMENTS
AB In the past two decades, brain science and robotics have made gigantic advances
in their own fields, and their interactions have generated several
interdisciplinary research fields. First, in the 'understanding the brain by
creating the brain' approach, computational neuroscience models have been applied
to many robotics problems. Second, such brain-motivated fields as cognitive
robotics and developmental robotics have emerged as interdisciplinary areas among
robotics, neuroscience and cognitive science with special emphasis on humanoid
robots. Third, in brain-machine interface research, a brain and a robot are
mutually connected within a closed loop. In this paper, we review the theoretical
backgrounds of these three interdisciplinary fields and their recent progress.
Then, we introduce recent efforts to reintegrate these research fields into a
coherent perspective and propose a new direction that integrates brain science and
robotics where the decoding of information from the brain, robot control based on
the decoded information and multimodal feedback to the brain from the robot are
carried out in real time and in a closed loop.
C1 [Morimoto, Jun] ATR Computat Neurosci Labs, Dept Brain Robot Interface, Kyoto,
Japan.
[Kawato, Mitsuo] ATR Brain Informat Commun Res Lab Grp, Kyoto, Japan.
RP Morimoto, J (reprint author), ATR Computat Neurosci Labs, Dept Brain Robot
Interface, Kyoto, Japan.
EM xmorimo@atr.jp; kawato@atr.jp
FU ERATO; ICORP; MIC-SCOPE; SICP; KAKENHI [2312004]; Ministry of Internal
Affairs and Communications entitled 'Novel and innovative R&D making use
of brain structures'
FX It was partially supported by ERATO, J.S.T.; by ICORP, J.S.T.; by
MIC-SCOPE; by SICP, J.S.T.; by KAKENHI 2312004; and by contract with the
Ministry of Internal Affairs and Communications entitled 'Novel and
innovative R&D making use of brain structures'.
CR Amano K, 2014, SOC NEUR M WASH DC 1
AMARI SI, 1977, BIOL CYBERN, V27, P77, DOI 10.1007/BF00337259
Ariki Y, 2013, NEURAL NETWORKS, V40, P32, DOI 10.1016/j.neunet.2013.01.002
Asada M, 2009, IEEE T AUTON MENT DE, V1, P12, DOI 10.1109/TAMD.2009.2021702
Astrom K., 1989, ADAPTIVE CONTROL
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
BASMAJIAN JV, 1963, SCIENCE, V141, P440, DOI 10.1126/science.141.3579.440
Bentivegna DC, 2004, ROBOT AUTON SYST, V47, P163, DOI
10.1016/j.robot.2004.03.010
BENTIVEGNA DC, 2004, INT J HUM ROBOT, V1, P585, DOI DOI
10.1142/S0219843604000307
Bertsekas D.P., 2005, DYNAMIC PROGRAMMING
Birbaumer N, 2007, J PHYSIOL-LONDON, V579, P621, DOI
10.1113/jphysiol.2006.125633
Blankertz B, 2008, IEEE SIGNAL PROC MAG, V25, P41, DOI 10.1109/MSP.200790.900,9
Calinon S, 2010, IEEE ROBOT AUTOM MAG, V17, P44, DOI 10.1109/MRA.2010.936947
Cassim F, 2001, NEUROREPORT, V12, P3859, DOI 10.1097/00001756-200112040-00051
Chavarriaga R, 2010, IEEE T NEUR SYS REH, V18, P381, DOI
10.1109/TNSRE.2010.2053387
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Cheron G, 2012, NEURAL PLAST, DOI 10.1155/2012/375148
deCharms RC, 2005, P NATL ACAD SCI USA, V102, P18626, DOI
10.1073/pnas.0505210102
Dollar AM, 2008, IEEE T ROBOT, V24, P144, DOI 10.1109/TRO.2008.915453
Doya K, 1999, NEURAL NETWORKS, V12, P961, DOI 10.1016/S0893-6080(99)00046-5
Doya K, 2000, NEURAL COMPUT, V12, P219, DOI 10.1162/089976600300015961
Endo G, 2008, INT J ROBOT RES, V27, P213, DOI 10.1177/0278364907084980
FETZ EE, 1969, SCIENCE, V163, P955, DOI 10.1126/science.163.3870.955
Fukuda M, 2011, SOC NEUR M WASH DC 1
Fukumizu K, 2004, J MACH LEARN RES, V5, P73
FUKUSHIMA K, 1980, BIOL CYBERN, V36, P193, DOI 10.1007/BF00344251
Gazzaniga M., 2013, COGNITIVE NEUROSCIEN
Ghahramani Z, 2000, NEURAL COMPUT, V12, P831, DOI 10.1162/089976600300015619
GRILLNER S, 1985, SCIENCE, V228, P143, DOI 10.1126/science.3975635
Guger C, 2000, IEEE T REHABIL ENG, V8, P447, DOI 10.1109/86.895947
Hale JG, 2005, IEEE T SYST MAN CY C, V35, P512, DOI 10.1109/TSMCC.2004.840063
Hampton AN, 2006, J NEUROSCI, V26, P8360, DOI 10.1523/JNEUROSCI.1010-06.2006
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Haruno M, 2006, J NEUROPHYSIOL, V95, P948, DOI 10.1152/jn.00382.2005
Haruno M, 2004, J NEUROSCI, V24, P1660, DOI 10.1523/JNEUROSCI.3417-03.2004
Haruno M, 2006, NEURAL NETWORKS, V19, P1242, DOI 10.1016/j.neunet.2006.06.007
Hochberg LR, 2006, NATURE, V442, P164, DOI 10.1038/nature04970
Hoellinger T, 2013, FRONT COMPUT NEUROSC, V7, DOI 10.3389/fncom.2013.00070
Houk JC, 1995, MODELS INFORM PROCES
Hyon SH, 2011, IEEE INT C INT ROBOT, P3975, DOI 10.1109/IROS.2011.6048840
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Ijspeert AJ, 2007, SCIENCE, V315, P1416, DOI 10.1126/science.1138353
Kagawa T., 2009, 18 IEEE INT S ROB HU, P633
Kajita S, 2003, ADV ROBOTICS, V17, P131, DOI 10.1163/156855303321165097
Kajita S, 2014, INTRO HUMANIOD ROBOT
Kamatani D., 2011, NEUR 2010 OS JAP 2 4, pP2
Kamitani Y, 2005, NAT NEUROSCI, V8, P679, DOI 10.1038/nn1444
Kandel E. R., 2012, PRINCIPLES NEURAL SC
KAWATO M, 1992, BIOL CYBERN, V68, P95, DOI 10.1007/BF00201431
Kawato M, 2013, WORLD C BIOL PSYCH K
Kawato M, 2008, PHILOS T R SOC B, V363, P2201, DOI 10.1098/rstb.2008.2272
Kawato M, 2007, CURR OPIN NEUROBIOL, V17, P205, DOI 10.1016/j.conb.2007.03.004
Kennerley SW, 2006, NAT NEUROSCI, V9, P940, DOI 10.1038/nn1724
Kiguchi K, 2012, IEEE T SYST MAN CY B, V42, P1064, DOI
10.1109/TSMCB.2012.2185843
Kimura H, 1998, P 15 INT C MACH LEAR, P284
Kobayashi H., 2009, INT J AUTOMATION TEC, V3, P709
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Lebedev MA, 2006, TRENDS NEUROSCI, V29, P536, DOI 10.1016/j.tins.2006.07.004
Lisi G, 2014, FRONT SYST NEUROSCI, V8, DOI 10.3389/fnsys.2014.00085
Lungarella M, 2003, CONNECT SCI, V15, P151, DOI 10.1080/09540090310001655110
Marr D, 1982, VISION
Matsubara T, 2006, ROBOT AUTON SYST, V54, P911, DOI 10.1016/j.robot.2006.05.012
McFarland DJ, 2000, BRAIN TOPOGR, V12, P177, DOI 10.1023/A:1023437823106
MEXT Japan, STRAT RES PROGR BRAI
MIYAMOTO H, 1988, NEURAL NETWORKS, V1, P251, DOI 10.1016/0893-6080(88)90030-5
Miyamoto H, 1996, NEURAL NETWORKS, V9, P1281, DOI 10.1016/S0893-6080(96)00043-3
Morimoto J, 1999, ADV ROBOTICS, V13, P267, DOI 10.1163/156855399X01314
Morimoto J, 2003, ADV NEURAL INFORMATI, V15, P1563
Morimoto J, 2008, IEEE INT CONF ROBOT, P2711, DOI 10.1109/ROBOT.2008.4543621
Morimoto J, 2006, IEEE INT CONF ROBOT, P1579, DOI 10.1109/ROBOT.2006.1641932
Morimoto J, 2008, IEEE T ROBOT, V24, P185, DOI 10.1109/TRO.2008.915457
Morimoto J, 2007, IEEE ROBOT AUTOM MAG, V14, P41, DOI 10.1109/MRA.2007.380654
Muller-Putz GR, 2009, IEEE ENG MED BIO, P3353, DOI 10.1109/IEMBS.2009.5333185
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nicolelis MA, WALK AGAIN PROJECT
Nicolelis MAL, 2001, NATURE, V409, P403, DOI 10.1038/35053191
National Institute of Health (NIH), THE BRAIN IN
Noda T., 2012, P IEEE RAS INT C HUM, P21
Normile D, 1997, SCIENCE, V275, P1562, DOI 10.1126/science.275.5306.1562
O'Doherty J, 2004, SCIENCE, V304, P452, DOI 10.1126/science.1094285
Pfurtscheller G, 2003, NEUROSCI LETT, V351, P33, DOI 10.1016/S0304-
3940(03)00947-9
Pollard NS, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1390, DOI 10.1109/ROBOT.2002.1014737
RILEY M, 2000, P 2000 WORKSH INT RO, P35
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Rugh W. J., 1991, IEEE Control Systems Magazine, V11, P79, DOI 10.1109/37.103361
Samejima K, 2005, SCIENCE, V310, P1337, DOI 10.1126/science.1115270
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 1998, NEURAL COMPUT, V10, P2047, DOI 10.1162/089976698300016963
Scheinost D, 2013, TRANSL PSYCHIAT, V3, DOI 10.1038/tp.2013.24
Shibata K, 2013, SOC NEUR M SAN DIEG
Shibata K, 2011, SCIENCE, V334, P1413, DOI 10.1126/science.1212003
Shibata T, 2005, NEURAL NETWORKS, V18, P213, DOI 10.1016/j.neunet.2005.01.001
Shibata T, 2001, NEURAL NETWORKS, V14, P201, DOI 10.1016/S0893-6080(00)00084-8
Shindo K, 2011, J REHABIL MED, V43, P951, DOI 10.2340/16501977-0859
SINGH SP, 1992, MACH LEARN, V8, P323, DOI 10.1023/A:1022680823223
Slotine J., 1990, APPL NONLINEAR CONTR
Strogatz S.H., 1994, NONLINEAR DYNAMICS C
Subramanian L, 2011, J NEUROSCI, V31, P16309, DOI 10.1523/JNEUROSCI.3498-11.2011
Sugimoto N, 2012, NEURAL NETWORKS, V29-30, P8, DOI 10.1016/j.neunet.2012.01.002
Sugimoto N, 2012, NEURAL COMPUT, V24, P577, DOI 10.1162/NECO_a_00246
Sutton RS, 1999, ARTIF INTELL, V112, P181, DOI 10.1016/S0004-3702(99)00052-1
Sutton RS, 1998, REINFORCEMENT LEARNI
Suzuki K, 2007, ADV ROBOTICS, V21, P1441, DOI 10.1163/156855307781746061
Tanaka SC, 2006, NEURAL NETWORKS, V19, P1233, DOI 10.1016/j.neunet.2006.05.039
Tomioka R, 2010, NEUROIMAGE, V49, P415, DOI 10.1016/j.neuroimage.2009.07.045
Ude A, 2004, ROBOT AUTON SYST, V47, P93, DOI 10.1016/j.robot.2004.03.004
Ude A, 2003, ADV ROBOTICS, V17, P165, DOI 10.1163/156855303321165114
Ude A, 2010, IEEE T ROBOT, V26, P800, DOI 10.1109/TRO.2010.2065430
Velliste M, 2008, NATURE, V453, P1098, DOI 10.1038/nature06996
Wagner J, 2012, NEUROIMAGE, V63, P1203, DOI 10.1016/j.neuroimage.2012.08.019
Wang YJ, 2007, P ANN INT IEEE EMBS, P5059, DOI 10.1109/IEMBS.2007.4353477
Williams ZM, 2006, NAT NEUROSCI, V9, P562, DOI 10.1038/nn1662
Wolpaw JR, 2007, J PHYSIOL-LONDON, V579, P613, DOI 10.1113/jphysiol.2006.125948
Wolpaw JR, 2004, P NATL ACAD SCI USA, V101, P17849, DOI 10.1073/pnas.0403504101
WOLPERT DM, 1995, SCIENCE, V269, P1880, DOI 10.1126/science.7569931
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yamamoto G., 2009, IEEE RSJ INT C INT R, P5801
Yamashita O, 2008, NEUROIMAGE, V42, P1414, DOI 10.1016/j.neuroimage.2008.05.050
NR 118
TC 5
Z9 5
U1 2
U2 18
PU ROYAL SOC
PI LONDON
PA 6-9 CARLTON HOUSE TERRACE, LONDON SW1Y 5AG, ENGLAND
SN 1742-5689
EI 1742-5662
J9 J R SOC INTERFACE
JI J. R. Soc. Interface
PD MAR 6
PY 2015
VL 12
IS 104
AR 20141250
DI 10.1098/rsif.2014.1250
PG 15
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA CD6VC
UT WOS:000351227000026
PM 25589568
OA gold
DA 2018-01-22
ER

PT J
AU Takano, W
Asfour, T
Kormushev, P
AF Takano, Wataru
Asfour, Tamim
Kormushev, Petar
TI Special Issue on Humanoid Robotics PREFACE
SO ADVANCED ROBOTICS
LA English
DT Editorial Material
C1 [Takano, Wataru] Univ Tokyo, Tokyo 1138654, Japan.
[Asfour, Tamim] Karlsruhe Inst Technol, D-76021 Karlsruhe, Germany.
[Kormushev, Petar] Italian Inst Technol, Genoa, Italy.
RP Takano, W (reprint author), Univ Tokyo, Tokyo 1138654, Japan.
NR 0
TC 0
Z9 0
U1 0
U2 1
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD MAR 4
PY 2015
VL 29
IS 5
SI SI
BP 301
EP 301
DI 10.1080/01691864.2015.1015216
PG 1
WC Robotics
SC Robotics
GA CD9UY
UT WOS:000351445500001
OA gold
DA 2018-01-22
ER

PT J
AU Hashimoto, K
Takanishi, A
AF Hashimoto, Kenji
Takanishi, Atsuo
TI Biped Robot Research at Waseda University
SO JOURNAL OF ROBOTICS NETWORKING AND ARTIFICIAL LIFE
LA English
DT Article
DE Biped walking; Running; Humanoid robot; Legged robot; Walking chair
ID STIFFNESS
AB Waseda University has been conducting research on biped robots since 1967. This paper
describes our latest biped robots: (i) WABIAN-2, (ii) a biped running robot, and
(iii) WL-16. WABIAN-2 is a biped humanoid robot and has realized a human-like walk
with the knees stretched by utilizing a 2-DOF waist mimicking a human's pelvic
motion. We are developing a new biped humanoid robot which can jump and run by
utilizing a pelvic movement and leg elasticity. WL-16 is a human-carrying biped
vehicle.
C1 [Hashimoto, Kenji] Waseda Univ, Res Inst Sci & Engn, Shinjuku Ku, Tokyo 1620044,
Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Shinjuku Ku, Tokyo
1628480, Japan.
RP Hashimoto, K (reprint author), Waseda Univ, Res Inst Sci & Engn, Shinjuku Ku,
41-304,17 Kikui Cho, Tokyo 1620044, Japan.
EM contact@takanishi.mech.waseda.ac.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766
CR Gunther M, 2002, J BIOMECH, V35, P1459, DOI 10.1016/S0021-9290(02)00183-5
Hashimoto K., 2010, P 2010 IEEE RSJ INT, P2206
Hashimoto K, 2011, ADV ROBOTICS, V25, P407, DOI 10.1163/016918610X552178
MCMAHON TA, 1990, J BIOMECH, V23, P65, DOI 10.1016/0021-9290(90)90042-2
Ogura Y., 2006, P 1 IEEE RAS EMBS IN, P835
Ogura Y, 2006, IEEE INT CONF ROBOT, P76
OTANI T, 2014, P IEEE INT C ROB AUT, P2313
NR 7
TC 0
Z9 0
U1 0
U2 3
PU ATLANTIS PRESS
PI PARIS
PA 29 AVENUE LAUMIERE, PARIS, 75019, FRANCE
SN 2352-6386
J9 J ROBOT NETW ARTIF L
JI J. Robot Netw. Artif. Life
PD MAR
PY 2015
VL 1
IS 4
BP 261
EP 264
DI 10.2991/jrnal.2015.1.4.4
PG 4
WC Robotics
SC Robotics
GA DA1XV
UT WOS:000367590200004
OA gold
DA 2018-01-22
ER

PT J
AU Ma, JX
Zhang, Y
Cichocki, A
Matsuno, F
AF Ma, Jiaxin
Zhang, Yu
Cichocki, Andrzej
Matsuno, Fumitoshi
TI A Novel EOG/EEG Hybrid Human-Machine Interface Adopting Eye Movements
and ERPs: Application to Robot Control
SO IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING
LA English
DT Article
DE Electroencephalogram (EEG); electrooculogram (EOG); event-related
potential (ERP); human-machine interface (HMI); robot control
ID BRAIN-COMPUTER-INTERFACE; NEAR-INFRARED SPECTROSCOPY; ERROR-RELATED
POTENTIALS; BCI SYSTEM; COMBINING P300; CLASSIFICATION; SIGNALS; SSVEP;
PERFORMANCE; SPELLER
AB This study presents a novel human-machine interface (HMI) based on both
electrooculography (EOG) and electroencephalography (EEG). This hybrid interface
works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG
mode detects event related potentials (ERPs) like P300. While both eye movements
and ERPs have been separately used for implementing assistive interfaces, which
help patients with motor disabilities in performing daily tasks, the proposed
hybrid interface integrates them together. In this way, both the eye movements and
ERPs complement each other. Therefore, it can provide a better efficiency and a
wider scope of application. In this study, we design a threshold algorithm that can
recognize four kinds of eye movements including blink, wink, gaze, and frown. In
addition, an oddball paradigm with stimuli of inverted faces is used to evoke
multiple ERP components including P300, N170, and VPP. To verify the effectiveness
of the proposed system, two different online experiments are carried out. One is to
control a multifunctional humanoid robot, and the other is to control four mobile
robots. In both experiments, the subjects can complete tasks effectively by using
the proposed interface, whereas the best completion time is relatively short and
very close to the one operated by hand.
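
The following minimal sketch illustrates what a threshold-based eye-movement detector of
the kind mentioned in this abstract can look like. It is not the authors' algorithm: the
channel layout, the thresholds (in microvolts), and the amplitude patterns assigned to each
class are illustrative assumptions, and distinguishing a wink would require per-eye
electrode information that is not modeled here.

import numpy as np

# Sketch: threshold-based classification of one two-channel EOG epoch.
def classify_eog_epoch(vertical, horizontal, v_th=150.0, h_th=150.0):
    """Classify an EOG epoch as 'blink', 'gaze', 'frown', or 'none' (amplitudes in uV)."""
    v_amp = np.max(np.abs(vertical))
    h_amp = np.max(np.abs(horizontal))
    if v_amp > v_th and h_amp > h_th:
        return 'frown'      # assumed: large deflection on both channels
    if v_amp > v_th:
        return 'blink'      # assumed: transient on the vertical channel only
    if h_amp > h_th:
        return 'gaze'       # assumed: sustained deflection on the horizontal channel
    return 'none'

# Example with synthetic data: a 200-sample epoch containing a vertical blink-like peak.
t = np.arange(200)
vertical = 300.0 * np.exp(-((t - 100) / 15.0) ** 2)
horizontal = 10.0 * np.random.randn(200)               # noise only
print(classify_eog_epoch(vertical, horizontal))         # -> 'blink'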
C1 [Ma, Jiaxin; Matsuno, Fumitoshi] Kyoto Univ, Dept Mech Engn & Sci, Sch Engn,
Kyoto 6158530, Japan.
[Zhang, Yu] E China Univ Sci & Technol, Key Lab Adv Control & Optimizat Chem
Proc, Minist Educ, Shanghai 200237, Peoples R China.
[Cichocki, Andrzej] RIKEN Brain Sci Inst, Lab Adv Brain Signal Proc, Wako,
Saitama 3510198, Japan.
[Cichocki, Andrzej] Polish Acad Sci, Syst Res Inst, PL-00901 Warsaw, Poland.
RP Ma, JX (reprint author), Kyoto Univ, Dept Mech Engn & Sci, Sch Engn, Kyoto
6158530, Japan.
EM ma.jiaxin.62c@st.kyoto-u.ac.jp; yuzhang@ecust.edu.cn;
a.cichocki@riken.jp; matsuno@me.kyoto-u.ac.jp
OI Cichocki, Andrzej/0000-0002-8364-7226
FU National Natural Science Foundation of China [61305028]; Fundamental
Research Funds for the Central Universities [WH1314023]
FX This work was supported in part by the National Natural Science Foundation
of China under Grant 61305028 and by the Fundamental Research Funds for the
Central Universities under Grant WH1314023.
CR Abootalebi V, 2009, COMPUT METH PROG BIO, V94, P48, DOI
10.1016/j.cmpb.2008.10.001
Allison BZ, 2010, J NEURAL ENG, V7, DOI 10.1088/1741-2560/7/2/026007
Allison B, 2010, IEEE T NEUR SYS REH, V18, P107, DOI 10.1109/TNSRE.2009.2039495
Allison BZ, 2003, IEEE T NEUR SYS REH, V11, P110, DOI 10.1109/TNSRE.2003.814448
Amiri S, 2013, BRAIN-COMPUTER INTERFACE SYSTEMS - RECENT PROGRESS AND FUTURE
PROSPECTS, P195, DOI 10.5772/56135
Barea R, 2002, IEEE T NEUR SYS REH, V10, P209, DOI 10.1109/TNSRE.2002.806829
Barea R, 2002, J INTELL ROBOT SYST, V34, P279, DOI 10.1023/A:1016359503796
Bin GY, 2009, J NEURAL ENG, V6, DOI 10.1088/1741-2560/6/4/046002
Birbaumer N, 1999, NATURE, V398, P297, DOI 10.1038/18581
Blankertz B, 2011, NEUROIMAGE, V56, P814, DOI 10.1016/j.neuroimage.2010.06.048
Brunner C, 2011, J NEURAL ENG, V8, DOI 10.1088/1741-2560/8/2/025010
Bulling A, 2011, IEEE T PATTERN ANAL, V33, P741, DOI 10.1109/TPAMI.2010.86
Combaz A, 2012, NEUROCOMPUTING, V80, P73, DOI 10.1016/j.neucom.2011.09.013
Deng LY, 2010, EXPERT SYST APPL, V37, P3337, DOI 10.1016/j.eswa.2009.10.017
Edlinger G, 2011, LECT NOTES COMPUT SC, V6762, P417
FARWELL LA, 1988, ELECTROEN CLIN NEURO, V70, P510, DOI 10.1016/0013-
4694(88)90149-6
Fazli S, 2012, NEUROIMAGE, V59, P519, DOI 10.1016/j.neuroimage.2011.07.084
GRATTON G, 1983, ELECTROEN CLIN NEURO, V55, P468, DOI 10.1016/0013-
4694(83)90135-9
Hoffmann U, 2008, J NEUROSCI METH, V167, P115, DOI
10.1016/j.jneumeth.2007.03.005
Itier RJ, 2006, NEUROIMAGE, V29, P667, DOI 10.1016/j.neuroimage.2005.07.041
Jammes B., 2008, SOMNOLOGIE, V12, P227
Kaufmann T, 2011, J NEURAL ENG, V8, DOI 10.1088/1741-2560/8/5/056016
Hong M. J., 2014, FRONTIERS HUMAN NEUR, V8
Krusienski DJ, 2006, J NEURAL ENG, V3, P299, DOI 10.1088/1741-2560/3/4/007
Lee JH, 2009, NEUROSCI LETT, V450, P1, DOI 10.1016/j.neulet.2008.11.024
Leeb R, 2011, J NEURAL ENG, V8, DOI 10.1088/1741-2560/8/2/025011
Li J, 2014, INT J NEURAL SYST, V24, DOI 10.1142/S0129065714500142
Li YQ, 2013, IEEE T BIO-MED ENG, V60, P3156, DOI 10.1109/TBME.2013.2270283
Li YQ, 2010, IEEE T BIO-MED ENG, V57, P2495, DOI 10.1109/TBME.2010.2055564
Ma JX, 2013, IEEE INT C INT ROBOT, P859, DOI 10.1109/IROS.2013.6696451
Mak JN, 2012, J NEURAL ENG, V9, DOI 10.1088/1741-2560/9/2/026014
Mellinger J, 2007, NEUROIMAGE, V36, P581, DOI 10.1016/j.neuroimage.2007.03.019
Mussa-Ivaldi FA, 2003, TRENDS NEUROSCI, V26, P329, DOI 10.1016/S0166-
2236(03)00121-8
Naseer N, 2014, EXP BRAIN RES, V232, P555, DOI 10.1007/s00221-013-3764-1
Naseer N, 2013, NEUROSCI LETT, V553, P84, DOI 10.1016/j.neulet.2013.08.021
Panicker RC, 2011, IEEE T BIO-MED ENG, V58, P1781, DOI 10.1109/TBME.2011.2116018
Pfurtscheller G, 1999, CLIN NEUROPHYSIOL, V110, P1842, DOI 10.1016/S1388-
2457(99)00141-8
Pfurtscheller G, 2010, IEEE T NEUR SYS REH, V18, P409, DOI
10.1109/TNSRE.2010.2040837
Punsawad Y, 2010, IEEE ENG MED BIO, P1360, DOI 10.1109/IEMBS.2010.5626745
Rebsamen B., 2008, P 2 INT CONV REH ENG, P51
Riechmann H, 2011, I IEEE EMBS C NEUR E, P412, DOI 10.1109/NER.2011.5910574
Savic A, 2012, IFMBE PROC, V37, P806
SEMLITSCH HV, 1986, PSYCHOPHYSIOLOGY, V23, P695, DOI 10.1111/j.1469-
8986.1986.tb00696.x
Spuler M, 2012, CLIN NEUROPHYSIOL, V123, P1328, DOI 10.1016/j.clinph.2011.11.082
Su Y, 2011, J ZHEJIANG U-SCI C, V12, P351, DOI 10.1631/jzus.C1000208
SUTTON S, 1965, SCIENCE, V150, P1187, DOI 10.1126/science.150.3700.1187
Turnip A, 2012, INT J INNOV COMPUT I, V8, P6429
Turnip A, 2011, BIOMED ENG ONLINE, V10, DOI 10.1186/1475-925X-10-83
Wolpaw JR, 2006, IEEE T NEUR SYS REH, V14, P138, DOI 10.1109/TNSRE.2006.875583
Wolpaw JR, 2002, CLIN NEUROPHYSIOL, V113, P767, DOI 10.1016/S1388-2457(02)00057-
3
Yin EW, 2014, IEEE T BIO-MED ENG, V61, P473, DOI 10.1109/TBME.2013.2281976
Zhang Y, 2013, IEEE T NEUR SYS REH, V21, P233, DOI 10.1109/TNSRE.2013.2243471
Zhang Y, 2012, J NEURAL ENG, V9, DOI 10.1088/1741-2560/9/2/026018
NR 53
TC 23
Z9 24
U1 5
U2 44
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0018-9294
EI 1558-2531
J9 IEEE T BIO-MED ENG
JI IEEE Trans. Biomed. Eng.
PD MAR
PY 2015
VL 62
IS 3
BP 876
EP 889
DI 10.1109/TBME.2014.2369483
PG 14
WC Engineering, Biomedical
SC Engineering
GA CE3ZI
UT WOS:000351768500009
PM 25398172
DA 2018-01-22
ER

PT J
AU Cooney, MD
Nishio, S
Ishiguro, H
AF Cooney, Martin D.
Nishio, Shuichi
Ishiguro, Hiroshi
TI Importance of Touch for Conveying Affection in a Multimodal Interaction
with a Small Humanoid Robot
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Affection; humanoid robot; touch; multimodal; human-robot interaction
ID TECHNOLOGY ACCEPTANCE MODEL; SPONTANEOUS EXPRESSION; BEHAVIOR;
COMMUNICATION; ORIENTATION; MOTIVATION; EMOTION
AB To be accepted as a part of our everyday lives, companion robots will require
the capability to communicate socially, recognizing people's behavior and
responding appropriately. In particular, we hypothesized that a humanoid robot
should be able to recognize affectionate touches conveying liking or dislike
because (a) a humanoid form elicits expectations of a high degree of social
intelligence, (b) touch behavior plays a fundamental and crucial role in human
bonding, and (c) robotic responses providing affection could contribute to people's
quality of life. The hypothesis that people will seek to affectionately touch a
robot needed to be verified because robots are typically not soft or warm like
humans, and people can communicate through various other modalities such as vision
and sound. The main challenge faced was that people's social norms are highly
complex, involving behavior in multiple channels. To deal with this challenge, we
adopted an approach in which we analyzed free interactions and also asked
participants to rate short video-clips depicting human-robot interaction. As a
result, we verified that touch plays an important part in the communication of
affection from a person to a humanoid robot considered capable of recognizing cues
in touch, vision, and sound. Our results suggest that designers of affectionate
interactions with a humanoid robot should not ignore the fundamental modality of
touch.
C1 [Cooney, Martin D.; Nishio, Shuichi; Ishiguro, Hiroshi] Adv Telecommun Res Inst
Int ATR, Hiroshi Ishiguro Lab, Kyoto 6190288, Japan.
[Cooney, Martin D.; Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn Sci, Dept Syst
Innovat, Toyonaka, Osaka 5608531, Japan.
RP Cooney, MD (reprint author), Adv Telecommun Res Inst Int ATR, Hiroshi Ishiguro
Lab, 2-2-2 Hikaridai, Kyoto 6190288, Japan.
EM mcooney@atr.jp; nishio@botransfer.org; ishiguro@atr.jp
FU JST, CREST
FX We would like to thank everyone who helped. This research was supported
by JST, CREST.
CR AMBADY N, 1992, PSYCHOL BULL, V111, P256, DOI 10.1037/0033-2909.111.2.256
Barrett LF, 2006, J RES PERS, V40, P35, DOI 10.1016/j.jrp.2005.08.006
BOWLBY J, 1958, INT J PSYCHOANAL, V39, P350
Breazeal C, 2002, AUTON ROBOT, V12, P83, DOI 10.1023/A:1013215010749
Broekens J, 2007, LECT NOTES COMPUT SC, V4451, P113
Brooks AG, 2007, AUTON ROBOT, V22, P55, DOI 10.1007/s10514-006-9005-8
Bull P. E., 1987, POSTURE AND ACTION
Cacioppo J. T., 2008, LONELINESS HUMAN NAT
Chartrand TL, 1999, J PERS SOC PSYCHOL, V76, P893, DOI 10.1037/0022-
3514.76.6.893
Cooney MD, 2012, IEEE INT C INT ROBOT, P1420, DOI 10.1109/IROS.2012.6385956
Cooney MD, 2010, IEEE RSJ INT C INT R, P2276, DOI DOI 10.1109/IROS.2010.5650081
Csikszentmihalyi M., 1993, DEV PERSPECTIVES MOT
Cushman W. H., 1991, HUMAN FACTORS PRODUC
Dautenhahn Kerstin, 2009, Applied Bionics and Biomechanics, V6, P369, DOI
10.1080/11762320903123567
Dore J., 1978, CHILDRENS LANGUAGE, V1, P397
Ekman P., 1981, NONVERBAL COMMUNICAT
EKMAN PAUL, 1969, SEMIOTICA, V1, P49
Ellis M. J., 1973, WHY PEOPLE PLAY
Francois D., 2007, ACM IEEE INT C HUM R, P295
Gada N. M., 1997, LOVE OBSERVERS PERCE
Glaser B.G., 1967, DISCOVERY GROUNDED T
Grammer K, 2000, EVOL HUM BEHAV, V21, P371, DOI 10.1016/S1090-5138(00)00053-2
HALL Edward T., 1966, HIDDEN DIMENSION
HARLOW HF, 1958, AM PSYCHOL, V13, P673, DOI 10.1037/h0047884
Heerink M., 2008, 2008 3rd ACM/IEEE International Conference on Human-Robot
Interaction (HRI 2008), P113
Hertenstein MJ, 2006, EMOTION, V6, P528, DOI 10.1037/1528-3542.6.3.528
JONES SE, 1985, COMMUN MONOGR, V52, P19, DOI 10.1080/03637758509376094
Kanamori M, 2003, 2003 IEEE INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL
INTELLIGENCE IN ROBOTICS AND AUTOMATION, VOLS I-III, PROCEEDINGS, P107
KIRCH MS, 1979, MOD LANG J, V63, P416, DOI 10.2307/326027
Knapp M. L., 2006, NONVERBAL COMMUNICAT
Knight H., 2009, INT C INT ROB SYST, P3715
Laeng B, 2007, HORM BEHAV, V52, P520, DOI 10.1016/j.yhbeh.2007.07.013
LAFRANCE M, 1995, PERS SOC PSYCHOL B, V21, P207, DOI 10.1177/0146167295213002
Liu CR, 2013, INT J HUM ROBOT, V10, DOI 10.1142/S0219843613500096
Manning C. D., 2008, INTRO INFORM RETRIEV, DOI [10.1017/CBO9780511809071.007,
DOI 10.1017/CBO9780511809071.007]
Maslow AH, 1943, PSYCHOL REV, V50, P370, DOI 10.1037/h0054346
McCowan I, 2005, IEEE T PATTERN ANAL, V27, P305, DOI 10.1109/TPAMI.2005.49
MEHRABIAN A, 1967, J COMMUN, V17, P324
MEHRABIAN A, 1967, J PERS SOC PSYCHOL, V6, P109, DOI 10.1037/h0024532
Mehrabian A., 1967, J CONSULT PSYCHOL, V31, P48
Minato T, 2006, ADV ROBOTICS, V20, P1147, DOI 10.1163/156855306778522505
Noda Tomoyuki, 2007, 2007 IEEE/RSJ International Conference on Intelligent
Robots and Systems, P1099
Ogawa K., 2011, J ADV COMPUT INTELL, V15, P592, DOI DOI
10.20965/JACIII2011.P0592
Oosterhout T. V., 2008, ACM IEEE INT C HUM R
Oztop E., 2005, INT J HUM ROBOT, V2, P537, DOI DOI 10.1142/S0219843605000582
Pantic M., 2011, VISUAL ANAL HUMANS, P511, DOI DOI 10.1007/978-0-85729-997-0_26
Poggi I, 2010, LREC 2010 - SEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE
RESOURCES AND EVALUATION, P2570
PRESCOTT JW, 1975, B ATOM SCI, V31, P10, DOI 10.1080/00963402.1975.11458292
Robertson R., 1995, GLOBAL MODERNITIES
RUSSELL JA, 1980, J PERS SOC PSYCHOL, V39, P1161, DOI 10.1037/h0077714
Sayette MA, 2001, J NONVERBAL BEHAV, V25, P167, DOI 10.1023/A:1010671109788
Stiehl WD, 2005, LECT NOTES COMPUT SC, V3784, P747
Tracy JL, 2008, P NATL ACAD SCI USA, V105, P11655, DOI 10.1073/pnas.0802686105
Turkle S., 2004, IEEE RAS INT C HUM R
Venkatesh V, 2000, INFORM SYST RES, V11, P342, DOI 10.1287/isre.11.4.342.11872
Verderber K. S., 2009, INTERACT INTERPERSON
Vinciarelli A., 2009, IEEE INT C COMP VIS
Wada K, 2007, IEEE T ROBOT, V23, P972, DOI 10.1109/TRO.2007.906261
Williams LE, 2008, SCIENCE, V322, P606, DOI 10.1126/science.1162548
Wilson RS, 2007, ARCH GEN PSYCHIAT, V64, P234, DOI 10.1001/archpsyc.64.2.234
Yi MY, 2003, INT J HUM-COMPUT ST, V59, P431, DOI [10.1016/S1071-5819(03)00114-9,
10.1016/S0171-5819(03)00114-9]
Yohanan S, 2012, INT J SOC ROBOT, V4, P163, DOI 10.1007/s12369-011-0126-7
NR 62
TC 3
Z9 3
U1 2
U2 19
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2015
VL 12
IS 1
AR 1550002
DI 10.1142/S0219843615500024
PG 22
WC Robotics
SC Robotics
GA CD6UP
UT WOS:000351225400002
DA 2018-01-22
ER

PT J
AU Takano, W
Jodan, T
Nakamura, Y
AF Takano, Wataru
Jodan, Tatsuhiro
Nakamura, Yoshihiko
TI Recursive process of motion recognition and generation for action-based
interaction
SO ADVANCED ROBOTICS
LA English
DT Article
DE motion recognition/synthesis; whole-body motion planning; symbolization
of motions
ID ROBOT; MODEL; MIND
AB To integrate humanoid robots into daily life, it is necessary to develop robots
that can understand human behaviors and generate human-like behaviors adaptive to
their human partners. Previous robotics research has developed frameworks for
encoding human behaviors into motion primitive models, recognizing observed human
actions as motion primitives, and synthesizing motions from the motion primitive
models. This bidirectional process of motion recognition and motion synthesis can
help establish intuitive behavioral interaction between humans and robots. This
paper proposes a novel approach to realizing robot intelligence for human-robot
interaction. The approach symbolizes perception and actions using hidden Markov
models. The recursive process of motion recognition and synthesis based on the
hidden Markov models enables a robot to understand that the human partner
recognizes the robot's actions and to perform actions accordingly. The proposed
framework was applied to an example of human-robot interaction in which a robot
walks through a group of several human partners. This application was tested on
captured human behaviors, and the validity of the framework is demonstrated
through experiments.
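
A minimal sketch of the recognition/synthesis loop over motion-primitive HMMs ("motion
symbols") described above is given below. It assumes the third-party hmmlearn library as
the HMM implementation (a choice made here for illustration, not the authors' code): each
primitive is a Gaussian HMM trained on joint-angle sequences, recognition picks the symbol
with the highest log-likelihood for an observed segment, and synthesis samples a new
trajectory from the chosen symbol's model.

import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed library choice

def train_symbols(segments_per_symbol, n_states=5):
    """segments_per_symbol: dict {label: list of (T_i, n_joints) arrays}."""
    symbols = {}
    for label, segments in segments_per_symbol.items():
        X = np.vstack(segments)
        lengths = [len(s) for s in segments]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        symbols[label] = m
    return symbols

def recognize(symbols, observed):
    """Return the motion symbol whose HMM best explains the observed segment."""
    return max(symbols, key=lambda label: symbols[label].score(observed))

def synthesize(symbols, label, length=100):
    """Sample a joint-angle trajectory from the chosen motion symbol."""
    traj, _ = symbols[label].sample(length)
    return traj

In use, train_symbols would be called once with a few labeled joint-angle segments per
primitive; recognize and synthesize then alternate to realize the recursive
recognition-and-generation process that the abstract describes.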
C1 [Takano, Wataru; Jodan, Tatsuhiro; Nakamura, Yoshihiko] Univ Tokyo, Dept
Mechanoinformat, Tokyo, Japan.
RP Takano, W (reprint author), Univ Tokyo, Dept Mechanoinformat, 7-3-1 Bunkyoku,
Tokyo, Japan.
EM takano@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
CR Billard A, 2004, ROBOT AUTON SYST, V47, P65, DOI 10.1016/j.robot.2004.03.001
Breazeal C, 2000, ADAPT BEHAV, V8, P49, DOI 10.1177/105971230000800104
BUTTERWORTH G, 1991, BRIT J DEV PSYCHOL, V9, P55, DOI 10.1111/j.2044-
835X.1991.tb00862.x
Donald M., 2011, ORIGIN MODERN MIND
Gallese V, 1998, TRENDS COGN SCI, V2, P493, DOI 10.1016/S1364-6613(98)01262-5
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Mataric MJ, 2000, IEEE INTELL SYST APP, V15, P18, DOI 10.1109/5254.867908
Morimoto J, 1999, ADV ROBOTICS, V13, P267, DOI 10.1163/156855399X01314
Nagai S, 2004, JPN SOC ARTIF INTELL, V19, P10
Pedica C, 2010, APPL ARTIF INTELL, V24, P575, DOI 10.1080/08839514.2010.492165
PREMACK D, 1978, BEHAV BRAIN SCI, V1, P515, DOI 10.1017/S0140525X00076512
Ratsamee P., 2013, INT J HUM ROBOT, V10
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Shiomi M, 2014, INT J SOC ROBOT, V6, P443, DOI 10.1007/s12369-014-0238-y
Snape J, 2011, IEEE T ROBOT, V27, P696, DOI 10.1109/TRO.2011.2120810
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
NR 18
TC 0
Z9 0
U1 0
U2 8
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD FEB 16
PY 2015
VL 29
IS 4
BP 287
EP 299
DI 10.1080/01691864.2014.977946
PG 13
WC Robotics
SC Robotics
GA CB3IY
UT WOS:000349522600002
OA gold
DA 2018-01-22
ER

PT J
AU Rosendo, A
Liu, XX
Shimizu, M
Hosoda, K
AF Rosendo, Andre
Liu, Xiangxiao
Shimizu, Masahiro
Hosoda, Koh
TI Stretch reflex improves rolling stability during hopping of a
decerebrate biped system
SO BIOINSPIRATION & BIOMIMETICS
LA English
DT Article
DE stretch reflex; hopping stability; decerebrate; frontal plane
ID UNRESTRAINED LOCOMOTION; FEEDBACK; CAT; VERTEBRATES; MODULATION;
MUSCLES; HUMANS; ROBOT; MODEL; GAIT
AB When humans hop, attitude recovery can be observed in both the sagittal and
frontal planes. While it is agreed that the brain plays an important role in leg
placement, the role of low-level feedback (the stretch reflex) on frontal plane
stabilization remains unclear. Seeking to better understand the contribution of the
soleus stretch reflex to rolling stability, we performed experiments on a
biomimetic humanoid hopping robot. Various reflex responses to touching the floor,
ranging from no response to long muscle activations, were examined, as was the
effect of a delay upon touching the floor. We found that the stretch
reflex brought the system closer to stable, straight hopping. The presence of a
delay did not affect the results; both the cases with and without a delay
outperformed the case without a reflex response. The results of this study
highlight the importance of low-level control in locomotion for which body
stabilization does not require higher-level signals.
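
As a rough illustration of the kind of low-level reflex rule studied here, the sketch
below computes a soleus-like activation that is proportional to positive muscle stretch
velocity, gated by foot contact and optionally delayed. It is not the authors' controller;
the gain, the delay, and the gating rule are assumed values chosen only for illustration.

import numpy as np

def reflex_activation(stretch_velocity, contact, gain=0.8, delay_steps=3):
    """stretch_velocity, contact: arrays over time; returns activation in [0, 1]."""
    n = len(stretch_velocity)
    act = np.zeros(n)
    for t in range(n):
        src = t - delay_steps                 # delayed sensing of the contact event
        if src >= 0 and contact[src]:
            act[t] = np.clip(gain * max(stretch_velocity[src], 0.0), 0.0, 1.0)
    return act

# Toy usage: a stretch ramp that begins at ground contact.
vel = np.concatenate([np.zeros(5), np.linspace(0.0, 1.0, 10)])
contact = vel > 0
print(reflex_activation(vel, contact))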
C1 [Rosendo, Andre; Liu, Xiangxiao; Shimizu, Masahiro; Hosoda, Koh] Osaka Univ,
Grad Sch Engn Sci, Osaka, Japan.
RP Rosendo, A (reprint author), Osaka Univ, Grad Sch Engn Sci, Osaka, Japan.
EM andre.rosendo@arl.sys.es.osaka-u.ac.jp
CR ALEXANDER RM, 1995, PHILOS T ROY SOC B, V347, P235, DOI 10.1098/rstb.1995.0024
Boblan I, 2004, EMBODIED ARTIFICIAL
Calisti M, 2011, BIOINSPIR BIOMIM, V6, DOI 10.1088/1748-3182/6/3/036002
Carpenter MG, 1999, EXP BRAIN RES, V129, P93, DOI 10.1007/s002210050940
Chou C-P, 1994, P IEEE INT C ROB AUT
Daley M, 2008, CURR BIOL, V18, P1064
DYHREPOULSEN P, 1991, J PHYSIOL-LONDON, V437, P287, DOI
10.1113/jphysiol.1991.sp018596
ENGBERG I, 1969, ACTA PHYSIOL SCAND, V75, P614, DOI 10.1111/j.1748-
1716.1969.tb04415.x
Fallon JB, 2005, J NEUROPHYSIOL, V94, P3795, DOI 10.1152/jn.00359.2005
Fibley RA, 1969, PSYCHON SCI, V17, P335
Geyer H, 2003, P ROY SOC B-BIOL SCI, V270, P2173, DOI 10.1098/rspb.2003.2454
Geyer H, 2010, IEEE T NEUR SYS REH, V18, P263, DOI 10.1109/TNSRE.2010.2047592
GOSLOW GE, 1973, J MORPHOL, V141, P1, DOI 10.1002/jmor.1051410102
GRILLNER S, 1975, PHYSIOL REV, V55, P247
Haeufle DFB, 2012, J R SOC INTERFACE, V9, P1458, DOI 10.1098/rsif.2011.0694
Haeufle DFB, 2010, BIOINSPIR BIOMIM, V5, DOI 10.1088/1748-3182/5/1/016004
Hiebert GW, 1999, J NEUROPHYSIOL, V81, P758
Hosoda K, 2010, AUTON ROBOT, V28, P307, DOI 10.1007/s10514-009-9171-6
JONES GM, 1971, J PHYSIOL-LONDON, V219, P709, DOI 10.1113/jphysiol.1971.sp009684
Kalveram KT, 2009, J PHYSIOLOGY-PARIS, V103, P232, DOI
10.1016/j.jphysparis.2009.08.006
Klute GK, 2002, INT J ROBOT RES, V21, P295, DOI 10.1177/027836402320556331
Kukillaya R, 2009, CHAOS, V19, DOI 10.1063/1.3141306
Marques HG, 2014, PLOS COMPUT BIOL, V10, DOI 10.1371/journal.pcbi.1003653
Muybridge R, 1888, ANIMAL LOCOMOTION MU
Paul C, 2005, BIOL CYBERN, V93, P153, DOI 10.1007/s00422-005-0559-x
Perreault EJ, 2003, J BIOMECH, V36, P211, DOI 10.1016/S0021-9290(02)00332-9
Rosendo A, 2014, ADV ROBOTICS, V28, P351, DOI 10.1080/01691864.2013.870495
Rosendo A., 2013, P IEEE INT C INT ROB
Sinkjaer T, 1996, J NEUROPHYSIOL, V76, P1112
van Soest J, 1993, BIOL CYBERN, V69, P195
Voigt M, 1998, EUR J APPL PHYSIOL O, V78, P522, DOI 10.1007/s004210050455
Yakovenko S, 2004, BIOL CYBERN, V90, P146, DOI 10.1007/s00422-003-0449-z
NR 32
TC 5
Z9 5
U1 3
U2 8
PU IOP PUBLISHING LTD
PI BRISTOL
PA TEMPLE CIRCUS, TEMPLE WAY, BRISTOL BS1 6BE, ENGLAND
SN 1748-3182
EI 1748-3190
J9 BIOINSPIR BIOMIM
JI Bioinspir. Biomim.
PD FEB
PY 2015
VL 10
IS 1
AR 016008
DI 10.1088/1748-3190/10/1/016008
PG 12
WC Engineering, Multidisciplinary; Materials Science, Biomaterials;
Robotics
SC Engineering; Materials Science; Robotics
GA CB3VT
UT WOS:000349558000008
PM 25599138
DA 2018-01-22
ER

PT J
AU Takano, W
Nakamura, Y
AF Takano, Wataru
Nakamura, Yoshihiko
TI Construction of a space of motion labels from their mapping to full-body
motion symbols
SO ADVANCED ROBOTICS
LA English
DT Article
DE motion-label space; motion recognition/synthesis; symbolization of
motions
ID IMITATION; RECOGNITION
AB Language is indispensable for humanoid robots to be integrated into daily life.
This paper proposes a novel approach to constructing a space of motion labels from
their mapping to human whole-body motions. The motions are abstracted by hidden
Markov models, which are referred to as motion symbols. The human motions are
automatically partitioned into motion segments and recognized as sequences of the
motion symbols. Sequences of motion labels are also assigned to these motions. The
referential relationship between the motion symbols and the motion labels is
extracted by a stochastic translation model, and distances among the labels are
calculated from the association probabilities of the motion symbols being generated
by the labels. The labels are placed in a multidimensional space so that these
distances are preserved, which results in a label space. The label space
encapsulates relations among the motion labels, such as their similarities, and it
also allows motion recognition. The validity of the constructed label space is
demonstrated on a motion capture dataset.
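
The sketch below illustrates one way to place labels in a low-dimensional space so that
pairwise distances are preserved, using classical multidimensional scaling. It is only an
illustration under stated assumptions: here each label is described by its association
probabilities over motion symbols, the label-to-label distance is taken as the Euclidean
distance between those probability vectors, and the embedding dimension is fixed to two;
the paper's specific translation model and distance definition are not reproduced.

import numpy as np

def label_space(assoc_prob, dim=2):
    """assoc_prob: (n_labels, n_symbols) rows of P(symbol | label)."""
    n = assoc_prob.shape[0]
    D = np.linalg.norm(assoc_prob[:, None, :] - assoc_prob[None, :, :], axis=-1)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy association probabilities for three labels over three motion symbols.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.1, 0.2, 0.7]])
print(label_space(probs))                        # similar labels land close together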
C1 [Takano, Wataru; Nakamura, Yoshihiko] Univ Tokyo, Mechanoinformat Dept, Sch
Informat Sci & Technol, Tokyo, Japan.
RP Takano, W (reprint author), Univ Tokyo, Mechanoinformat Dept, Sch Informat Sci &
Technol, 7-3-1 Bunkyoku, Tokyo, Japan.
EM takano@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
CR Argall BD, 2009, ROBOT AUTON SYST, V57, P469, DOI 10.1016/j.robot.2008.10.024
Asfour T., 2006, P IEEE RAS INT C HUM, P40
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Blei DM, 2003, J MACH LEARN RES, V3, P993, DOI 10.1162/jmlr.2003.3.4-5.993
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Brown P. F., 1993, Computational Linguistics, V19, P263
Calinon S, 2010, IEEE ROBOT AUTOM MAG, V17, P44, DOI 10.1109/MRA.2010.936947
Calinon S, 2006, IEEE INT CONF ROBOT, P2978, DOI 10.1109/ROBOT.2006.1642154
DEERWESTER S, 1990, J AM SOC INFORM SCI, V41, P391, DOI 10.1002/(SICI)1097-
4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9
Donald M., 2011, ORIGIN MODERN MIND
Guerra-Filbo G, 2006, P 6 IEEE RAS INT C H, P69
Guerra-Filbo G., 2007, IEEE COMPUTER MAGAZI, V40, P60
Hemeren P., 2012, ARTIFICIAL GEN INTEL, P99
Herzog D., 2008, P BRIT MACH VIS C, P163
HOFMANN T, 1999, P 15 C UNC ART INT, P289
Ijspeert A. J., 2003, NEURAL INFORM PROCES, V15, P1547
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Kadone H., 2005, P 2005 IEEE RSJ INT, P2900
Kulic D, 2007, P IEEE INT C INT ROB, P2388
Ogata T., 2007, P IEEE RSJ INT C INT
Okada M, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1410, DOI 10.1109/ROBOT.2002.1014741
Rabiner L., 1993, FUNDAMENTALS SPEECH
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Saussure Ferdinand, 1966, COURSE GEN LINGUISTI
Sugita Y, 2005, ADAPT BEHAV, V13, P33, DOI 10.1177/105971230501300102
Sugiura K., 2008, P 2008 IEEE RSJ INT, P852
TAKANE Y, 1977, PSYCHOMETRIKA, V42, P7, DOI 10.1007/BF02293745
TAKANO W, 2006, P IEEE RAS INT C HUM, P425
Takano W, 2007, IEEE INT CONF ROBOT, P3092, DOI 10.1109/ROBOT.2007.363942
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
NR 31
TC 6
Z9 6
U1 4
U2 10
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD JAN 17
PY 2015
VL 29
IS 2
BP 115
EP 126
DI 10.1080/01691864.2014.985611
PG 12
WC Robotics
SC Robotics
GA AZ9II
UT WOS:000348526700001
OA gold
DA 2018-01-22
ER

PT J
AU Mittendorfer, P
Yoshida, E
Cheng, G
AF Mittendorfer, P.
Yoshida, E.
Cheng, G.
TI Realizing whole-body tactile interactions with a self-organizing,
multi-modal artificial skin on a humanoid robot
SO ADVANCED ROBOTICS
LA English
DT Article
DE whole-body tactile interaction; artificial skin; self-organization;
humanoid robots
AB In this paper, we present a new approach to realize whole-body tactile
interactions with a self-organizing, multi-modal artificial skin on a humanoid
robot. We, therefore, equipped the whole upper body of the humanoid HRP-2 with
various patches of CellulARSkin - a modular artificial skin. In order to
automatically handle a potentially high number of tactile sensor cells and motor
units, the robot uses open-loop exploration motions and distributed accelerometers
in the artificial skin cells to acquire its self-centered sensory-motor knowledge.
This body self-knowledge is then utilized to transfer multi-modal tactile
stimulations into reactive body motions. Tactile events provide feedback on changes
of contact over the whole-body surface. We demonstrate the feasibility of our
approach on a humanoid, here HRP-2, grasping large, unknown objects via tactile
feedback alone. Kinesthetically taught grasping trajectories are reactively
adapted to the size and stiffness of different test objects. Our paper contributes
the first realization of a self-organizing tactile sensor-behavior mapping on a
full-sized humanoid robot, enabling a position-controlled robot to compliantly
handle objects.
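
A small sketch of the kind of sensor-to-motor mapping suggested by this abstract follows.
It is illustrative only, not the authors' system: per-cell linear maps from joint
velocities to cell accelerations are fitted by least squares from open-loop exploration
data, and a tactile event at a cell is then turned into a joint-velocity command that
moves that cell along the contact normal. The gain and the toy data layout are assumptions.

import numpy as np

def learn_cell_jacobians(joint_vels, cell_accels):
    """joint_vels: (n_samples, n_joints); cell_accels: (n_samples, n_cells, 3).
    Returns per-cell least-squares maps J_c such that accel ~= joint_vel @ J_c."""
    n_cells = cell_accels.shape[1]
    return [np.linalg.lstsq(joint_vels, cell_accels[:, c, :], rcond=None)[0]
            for c in range(n_cells)]

def reactive_joint_vel(jacobians, touched_cell, contact_normal, gain=0.1):
    """Move the touched cell along the contact normal (Jacobian-transpose-style law)."""
    J = jacobians[touched_cell]                  # (n_joints, 3)
    return gain * J @ contact_normal             # (n_joints,)

# Toy usage: 7 joints, 30 skin cells, linear synthetic responses.
rng = np.random.default_rng(0)
joint_vels = rng.normal(size=(500, 7))
cell_accels = np.einsum('ij,cjk->ick', joint_vels, rng.normal(size=(30, 7, 3)))
jacs = learn_cell_jacobians(joint_vels, cell_accels)
print(reactive_joint_vel(jacs, touched_cell=5, contact_normal=np.array([0.0, 0.0, 1.0])))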
C1 [Mittendorfer, P.; Cheng, G.] Tech Univ Munich, Inst Cognit Syst, Tactile
Sensing Grp, D-80290 Munich, Germany.
[Yoshida, E.] CNRS AIST, Joint Robot Lab, Tsukuba, Ibaraki, Japan.
RP Mittendorfer, P (reprint author), Tech Univ Munich, Inst Cognit Syst, Tactile
Sensing Grp, D-80290 Munich, Germany.
EM philipp.mittendorfer@tum.de
RI Yoshida, Eiichi/M-3756-2016
OI Yoshida, Eiichi/0000-0002-3077-6964; Cheng, Gordon/0000-0003-0770-8717
FU DFG cluster of excellence Cognition for Technical systems - CoTeSys;
Joint Robotics Laboratory, Tsukuba, Japan [UMI3218/CRT]
FX This work was in parts supported by a three-month research visit to
CNRS-AIST JRL (Joint Robotics Laboratory), UMI3218/CRT, Tsukuba, Japan
and by the DFG cluster of excellence Cognition for Technical systems -
CoTeSys. Many thanks to Thomas Moulard and Pierre Gergondet, for helping
with the robot HRP-2, and the whole CNRS/AIST team.
CR Dahiya RS, 2013, IEEE SENS J, V13, P4121, DOI 10.1109/JSEN.2013.2279056
Dahiya RS, 2010, IEEE T ROBOT, V26, P1, DOI 10.1109/TRO.2009.2033627
Fuke S, 2007, INT J HUM ROBOT, V4, P347, DOI 10.1142/S0219843607001096
Harmon L. D., 1982, INT J ROBOT RES, V1, P3, DOI [10.1177/027836498200100201,
DOI 10.1177/027836498200100201]
Hoffmann M, 2010, IEEE T AUTON MENT DE, V2, P304, DOI 10.1109/TAMD.2010.2086454
Kuniyoshi Y, 2004, LECT NOTES ARTIF INT, V3139, P202
Mittendorfer P, 2011, IEEE T ROBOT, V27, P401, DOI 10.1109/TRO.2011.2106330
Mukai T, 2008, IEEE T ROBOT, V24, P505, DOI 10.1109/TRO.2008.917006
Olsson LA, 2006, CONNECT SCI, V18, P121, DOI 10.1080/09540090600768542
Romano JM, 2011, IEEE T ROBOT, V27, P1067, DOI 10.1109/TRO.2011.2162271
Someya T, 2004, P NATL ACAD SCI USA, V101, P9966, DOI 10.1073/pnas.0401918101
Strohmayr M., 2012, THESIS KARLSRUHER I
NR 12
TC 14
Z9 15
U1 0
U2 8
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD JAN 2
PY 2015
VL 29
IS 1
SI SI
BP 51
EP 67
DI 10.1080/01691864.2014.952493
PG 17
WC Robotics
SC Robotics
GA CA3GU
UT WOS:000348795600001
DA 2018-01-22
ER

PT J
AU Yonekura, K
Kim, CH
Nakadai, K
Tsujino, H
Yokoi, K
AF Yonekura, Kenta
Kim, Chyon Hae
Nakadai, Kazuhiro
Tsujino, Hiroshi
Yokoi, Kazuhito
TI Prevention of accomplishing synchronous multi-modal human-robot
cooperation by using visual rhythms
SO ADVANCED ROBOTICS
LA English
DT Article
DE multimodal interaction; physical cooperation; HRI; HCI; humanoid
ID IMPEDANCE CONTROL; COLLABORATION; MANIPULATION; SYSTEM; OBJECT
AB Effective rhythm expression (ERE) from a robot to a human performing a multi-
modal physical-cooperation (MMPC) task was accomplished. During MMPC, the
participant and the robot can exchange various rhythms between themselves. To
accomplish ERE, it is important to appropriately select the modalities in which the
robot expresses its desired rhythm. Accordingly, a hypothesis that selecting the
visual rhythm for RE prevents MMPC was tested and confirmed. This hypothesis was
preliminarily confirmed in regard to MMPC between two humans. Moreover, to confirm
the hypothesis in regard to human-robot cooperation, an experimental system
(including a rope-turning robot) was developed. This experimental system allowed
experimenters to evaluate the synchronization during MMPC between a human and a
robot by measuring the norm of angular velocities. The results of several
experiments (namely, a comparison of norms), in which several combinations of
visual and auditory rhythmic stimuli were presented to the subjects, strongly
supported the hypothesis.
C1 [Yonekura, Kenta; Yokoi, Kazuhito] Univ Tsukuba, Dept Intelligent Interact
Technol, 1-1-1 Tennodai, Tsukuba, Ibaraki 3050006, Japan.
[Yonekura, Kenta; Yokoi, Kazuhito] Natl Inst Adv Ind Sci & Technol, Intelligent
Syst Res Inst IS, Tsukuba, Ibaraki 3058568, Japan.
[Kim, Chyon Hae] Iwate Univ, Dept Elect Engn & Comp Sci, Morioka, Iwate 0200066,
Japan.
[Nakadai, Kazuhiro; Tsujino, Hiroshi] Honda Res Inst Japan Co Ltd, Wako, Saitama
3510188, Japan.
RP Yonekura, K (reprint author), Univ Tsukuba, Dept Intelligent Interact Technol,
1-1-1 Tennodai, Tsukuba, Ibaraki 3050006, Japan.
EM yoneken@ieee.org
RI Yokoi, Kazuhito/K-2046-2012
OI Yokoi, Kazuhito/0000-0003-3942-2027; Kim, Chyon Hae/0000-0002-7360-4986
FU [24-1136]
FX This study was partially supported by a Grant-in-Aid for JSPS Fellows
[24-1136].
CR Bakar Shahriman Abu, 2010, Journal of Biomechanical Science and Engineering, V5,
P104, DOI 10.1299/jbse.5.104
Dannenberg RB, 1985, INT COMP MUS C, P193
Deliege I, 1997, PERCEPTION COGNITION, P202
Evrard P, 2009, WORLD HAPTICS 2009: THIRD JOINT EUROHAPTICS CONFERENCE AND
SYMPOSIUM ON HAPTIC INTERFACES FOR VIRTUAL ENVIRONMENT AND TELEOPERATOR SYSTEMS,
PROCEEDINGS, P45, DOI 10.1109/WHC.2009.4810879
Fitzpatrick P, 2006, INTERACT STUD, V7, P171, DOI 10.1075/is.7.2.05fit
FLASH T, 1985, J NEUROSCI, V5, P1688
Gillespie RB, 2001, IEEE T ROBOTIC AUTOM, V17, P391, DOI 10.1109/70.954752
Hirata Y., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P458, DOI 10.1109/ROBOT.2000.844097
Hoffman G, 2010, IEEE INT CONF ROBOT, P582, DOI 10.1109/ROBOT.2010.5509182
HOGAN N, 1985, J DYN SYST-T ASME, V107, P1, DOI 10.1115/1.3140702
Ikeura R., 1994, Proceedings. 3rd IEEE International Workshop on Robot and Human
Communication. RO-MAN '94 Nagoya (Cat. No.94TH0679-1), P112, DOI
10.1109/ROMAN.1994.365946
Kheddar A, 2011, C HUM SYST INTERACT, P268, DOI 10.1109/HSI.2011.5937377
Kim CH, 2011, ADV ROBOTICS, V25, P491, DOI 10.1163/016918610X551791
Kosuge K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P3459
KOSUGE K, 1994, IEEE IND ELEC, P713, DOI 10.1109/IECON.1994.397872
Kotosaka S., 2001, J ROBOTICS SOC JAPAN, V19, P116
KRUSKAL WH, 1952, J AM STAT ASSOC, V47, P583
Lotze M, 1999, CORTEX, V35, P89, DOI 10.1016/S0010-9452(08)70787-1
Maeda Y, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P2240, DOI
10.1109/IROS.2001.976403
Maeda Y, 2001, IEEE INT CONF ROBOT, P3477, DOI 10.1109/ROBOT.2001.933156
Michalowski M. P., 2007, P ACM IEEE INT C HUM, P89, DOI DOI
10.1145/1228716.1228729
Mortl A, 2012, INT J ROBOT RES, V31, P1656, DOI 10.1177/0278364912455366
MURATA K, 2008, HUM 2008 8 IEEE RAS, P79
NELSON LS, 1989, J QUAL TECHNOL, V21, P140
Paradiso JA, 1997, OPTICAL 3D MEASUREME, VIV, P11
Reed KB, 2008, IEEE T HAPTICS, V1, P108, DOI 10.1109/ToH.2008.13
SCHNEIDER SA, 1992, IEEE T ROBOTIC AUTOM, V8, P383, DOI 10.1109/70.143355
Solis J, 2009, ADV ROBOTICS, V23, P1849, DOI 10.1163/016918609X12518783330207
Takahashi A, 1996, PROCEEDINGS OF THE ASP-DAC '97 - ASIA AND SOUTH PACIFIC
DESIGN AUTOMATION CONFERENCE 1997, P37, DOI 10.1109/IROS.1996.570624
Vercoe B, 1985, INT COMP MUS C, P275
Yonekura K, 2012, EURASIP J AUDIO SPEE, P1, DOI 10.1186/1687-4722-2012-12
Yoshii Kazuyoshi, 2007, 2007 IEEE/RSJ International Conference on Intelligent
Robots and Systems, P1743, DOI 10.1109/IROS.2007.4399244
NR 32
TC 0
Z9 0
U1 2
U2 8
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2015
VL 29
IS 14
BP 901
EP 912
DI 10.1080/01691864.2015.1031280
PG 12
WC Robotics
SC Robotics
GA DG4TO
UT WOS:000372065800001
DA 2018-01-22
ER

PT J
AU Kamide, H
Kawabe, K
Shigemi, S
Arai, T
AF Kamide, Hiroko
Kawabe, Koji
Shigemi, Satoshi
Arai, Tatsuo
TI Anshin as a concept of subjective well-being between humans and robots
in Japan
SO ADVANCED ROBOTICS
LA English
DT Article
DE Anshin; humanoid robots; psychological measurement
AB In this article, we aim to discover the basic factors for determining Anshin of
humanoids from the viewpoint of potential users and to develop a new psychological
scale to measure the degree of Anshin quantitatively. Anshin is a prevailing
concept of subjective well-being that Japanese people feel toward their lives with
artificial products, including service robots. To examine the factors that determine
Anshin of humanoids from a lay person's perspective, we studied the responses of
919 Japanese participants who watched videos of 11 humanoids and then freely
described their impressions of what Anshin of each humanoid meant to them. The descriptions were
classified into several categories to develop the items of a new scale.
Subsequently, 2624 different Japanese participants evaluated the same 11 humanoids
using the new scale. Factor analysis revealed five factors of Anshin: Comfort,
Performance, Peace of mind, Controllability, and Robot-likeness. The usability and
implications of the new scale are discussed.
C1 [Kamide, Hiroko; Arai, Tatsuo] Osaka Univ, Grad Sch Engn Sci, 1-3
Machikaneyamacho, Toyonaka, Osaka 5608531, Japan.
[Kawabe, Koji; Shigemi, Satoshi] Honda Res & Dev Co Ltd, Wako, Saitama 3510188,
Japan.
RP Kamide, H (reprint author), Osaka Univ, Grad Sch Engn Sci, 1-3 Machikaneyamacho,
Toyonaka, Osaka 5608531, Japan.
EM kamide@arai-lab.sys.es.osaka-u.ac.jp
FU JSPS KAKENHI [15K13114, 15K01582, 26705008]
FX This work was supported by JSPS KAKENHI [grant number 15K13114], [grant
number 15K01582], and [grant number 26705008].
CR Bartneck C, 2009, INT J SOC ROBOT, V1, P71, DOI 10.1007/s12369-008-0001-3
Bartneck C, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P553, DOI 10.1109/ROMAN.2008.4600724
Chikaraishi T, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P326, DOI
10.1109/IROS.2008.4650899
COVELLO VT, 1987, ADV RISK ANAL, V4, P221
Hattori T, 2012, IEEE INT C INT ROBOT, P5400, DOI 10.1109/IROS.2012.6385769
Inoue K., 2007, 13 INT C ADV ROB ICA, P1005
Kamide Hiroko, 2010, P 2010 IEEE RSJ INT
Kamide H, 2014, ADV ROBOTICS, V28, P821, DOI 10.1080/01691864.2014.893837
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T, 2001, IEEE INT CONF ROBOT, P4166, DOI 10.1109/ROBOT.2001.933269
Kaneko F., 2004, P IEEE C ROB AUT, P1083
Kaneko K, 2009, P IEEE RAS INT C HUM, P7, DOI DOI 10.1109/ICHR.2009.5379537
Kawauchi Naoto, 2003, MITSUBISHI JUKO GIHO, V40, P270
KOIZUMI S, 2006, IEEE INT WORKSH ROB
Minato T., 2009, P WORKSH SYN INT APP
Minato T, 2007, IEEE-RAS INT C HUMAN, P557, DOI 10.1109/ICHR.2007.4813926
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Muto Y., 2009, P IEEE INT S ROB HUM
Nakaoka S., 2009, IEEE RAS INT C HUM R
Negi S, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P719, DOI 10.1109/ROMAN.2008.4600752
Ogawa K, 2009, 18 INT S ROB HUM INT, P553
Schmidt M, 2004, LOSS AGROBIODIVERSIT
Shigemi S, HONDA TECH REV, V18, P38
Shiomi M, 2014, INT J SOC ROBOT, V6, P443, DOI 10.1007/s12369-014-0238-y
Takei T, 2010, Proceedings of the 2010 IEEE/SICE International Symposium on
System Integration (SII 2010), P43, DOI 10.1109/SII.2010.5708299
Yamagishi Toshio, 1998, STRUCTURE TRUST EVOL
NR 26
TC 2
Z9 2
U1 2
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2015
VL 29
IS 24
BP 1624
EP 1636
DI 10.1080/01691864.2015.1079503
PG 13
WC Robotics
SC Robotics
GA DF4FG
UT WOS:000371302700005
DA 2018-01-22
ER

PT J
AU Takahashi, K
Ogata, T
Tjandra, H
Yamaguchi, Y
Sugano, S
AF Takahashi, Kuniyuki
Ogata, Tetsuya
Tjandra, Hadi
Yamaguchi, Yuki
Sugano, Shigeki
TI Tool-Body Assimilation Model Based on Body Babbling and Neurodynamical
System
SO MATHEMATICAL PROBLEMS IN ENGINEERING
LA English
DT Article
ID NEURAL-NETWORKS; AFFORDANCES; PERCEPTION; ROBOTICS; VISION
AB We propose a new method of tool use based on a tool-body assimilation model that
combines body babbling with a neurodynamical system, enabling robots to use tools.
Almost all existing studies of robot tool use require predetermined motions and tool
features; the motion patterns are limited and the robots cannot use novel tools.
Other studies exhaustively search all available parameters for novel tools, but this
leads to massive amounts of computation. To solve these problems, we took the
following approach: we used a humanoid robot model to generate random motions based
on human body babbling. These rich motion experiences were used to train recurrent
and deep neural networks for modeling a body image. Tool features were self-
organized in the parametric bias, which modulates the body image according to the
tool in use. Finally, we designed a neural network that lets the robot generate
motion from the target image alone. Experiments were conducted with multiple tools
for manipulating a cylindrical target object. The results show that the tool-body
assimilation model is capable of motion generation.
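
The following minimal sketch shows one forward step of a recurrent network whose hidden
dynamics are modulated by a low-dimensional parametric-bias (PB) vector, the mechanism by
which a per-tool setting can shift a learned body image. It is not the authors' network:
the layer sizes, the tanh nonlinearity, and the random weights are assumptions, and
training (backpropagation through time and PB adaptation) is omitted.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_pb = 10, 32, 2
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_pb = rng.normal(scale=0.1, size=(n_hidden, n_pb))
W_out = rng.normal(scale=0.1, size=(n_in, n_hidden))

def step(x_t, h_prev, pb):
    """Predict the next sensorimotor state given the current one and the PB vector."""
    h = np.tanh(W_in @ x_t + W_rec @ h_prev + W_pb @ pb)
    return W_out @ h, h

x = rng.normal(size=n_in)          # toy joint angles + image features
h = np.zeros(n_hidden)
pb = np.array([0.3, -0.5])         # a per-tool parametric-bias setting (toy)
x_next, h = step(x, h, pb)
print(x_next)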
C1 [Takahashi, Kuniyuki; Tjandra, Hadi; Sugano, Shigeki] Waseda Univ, Grad Sch
Creat Sci & Engn, Tokyo 1698555, Japan.
[Ogata, Tetsuya] Waseda Univ, Grad Sch Fundamental Sci & Engn, Tokyo 1698555,
Japan.
[Yamaguchi, Yuki] Kyoto Univ, Grad Sch Informat, Kyoto 6068501, Japan.
RP Takahashi, K (reprint author), Waseda Univ, Grad Sch Creat Sci & Engn, Tokyo
1698555, Japan.
EM takahashi@sugano.mech.waseda.ac.jp
OI Takahashi, Kuniyuki/0000-0001-7912-4134; Ogata,
Tetsuya/0000-0001-7015-0379
FU JST PRESTO "Information Environment and Humans," MEXT [24119003]; JSPS
"Fundamental Study for Intelligent Machine to Coexist with Nature"
Research Institute for Science and Engineering, Waseda University
[2522005]; Grants for Excellent Graduate Schools, MEXT, Japan
FX The work has been supported by JST PRESTO "Information Environment and
Humans," MEXT Grant-in-Aid for Scientific Research on Innovative Areas
"Constructive Developmental Science" (24119003), JSPS Grant-in-Aid for
Scientific Research (S)(2522005), "Fundamental Study for Intelligent
Machine to Coexist with Nature" Research Institute for Science and
Engineering, Waseda University, and Grants for Excellent Graduate
Schools, MEXT, Japan. The authors would like to thank H. Arie, K. Noda,
and S. Murata for their help in conducting the experiments.
CR Arie H, 2009, NEW MATH NAT COMPUT, V5, P307, DOI 10.1142/S1793005709001283
Asada M, 2009, IEEE T AUTON MENT DE, V1, P12, DOI 10.1109/TAMD.2009.2021702
Caligiore D., 2008, P INT C COGN SYST CO
Detry R., 2009, P IEEE INT C DEV LEA, P1
Gibson JJ, 1966, SENSES CONSIDERED PE
Hikita M., 2008, P IEEE RSJ INT C IRO, P2041
Hinton GE, 2006, SCIENCE, V313, P504, DOI 10.1126/science.1127647
Hinton G, 2012, IEEE SIGNAL PROC MAG, V29, P82, DOI 10.1109/MSP.2012.2205597
Jordan M., 1986, P 8 ANN C COGN SCI S, P513
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KAPANDJI IA, 1980, PHYSL ARTICULAIRE
Kawai Y, 2012, IEEE INT C INT ROBOT, P5159, DOI 10.1109/IROS.2012.6386102
Kohonen T., 1988, SELF ORG ASS MEMORY, V8
Krizhevsky A., 2012, ADV NEURAL INFORM PR, V24, P1
Maravita A, 2004, TRENDS COGN SCI, V8, P79, DOI 10.1016/j.tics.2003.12.008
Martens J., 2010, P 27 INT C MACH LEAR, V951, P2010
Michaels CF, 2007, PERCEPTION, V36, P750, DOI 10.1068/p5593
Mochizuki K., 2013, P INT C SYST MAN CYB, P2337
Montesano L, 2008, IEEE T ROBOT, V24, P15, DOI 10.1109/TRO.2007.914848
Nabeshima C., 2007, P 6 IEEE INT C DEV L, P288
Namikawa J, 2011, PLOS COMPUT BIOL, V7, DOI 10.1371/journal.pcbi.1002221
Nishide S, 2012, IEEE T AUTON MENT DE, V4, P139, DOI 10.1109/TAMD.2011.2177660
Noda K., 2013, P IEEE RSJ INT C INT
Noda K., 2013, P IEEE INT C ONS MAN
PEARLMUTTER BA, 1994, NEURAL COMPUT, V6, P147, DOI 10.1162/neco.1994.6.1.147
Pfeifer R, 2007, BODY SHAPES WAY WE T
Rumelhart D. E., 1986, PARALLEL DISTRIBUTED
Saegusa R, 2009, 2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS,
VOLS 1-4, P794, DOI 10.1109/ROBIO.2009.4913101
Sasaoka T, 2011, ADVANCES IN COGNITIVE NEURODYNAMICS (II), P345, DOI
10.1007/978-90-481-9695-1_55
Saxena A, 2008, INT J ROBOT RES, V27, P157, DOI 10.1177/0278364907087172
Schraudolph NN, 2002, NEURAL COMPUT, V14, P1723, DOI 10.1162/08997660260028683
Song D., 2010, P IEEE RSJ INT C INT, P1579
Stoytchev A, 2005, IEEE INT CONF ROBOT, P3060
Streri A, 2005, INFANT BEHAV DEV, V28, P290, DOI 10.1016/j.infbeh.2005.05.004
Sturm J, 2008, IEEE INT CONF ROBOT, P3328, DOI 10.1109/ROBOT.2008.4543718
Takahashi K., 2014, IEEE ASME INT C ADV
Takahashi K., 2014, LECT NOTES COMPUTER
Tikhanoff V., 2013, P IEEE RAS INT C HUM
Yamashita Y, 2008, PLOS COMPUT BIOL, V4, DOI 10.1371/journal.pcbi.1000220
NR 39
TC 1
Z9 1
U1 0
U2 3
PU HINDAWI PUBLISHING CORPORATION
PI NEW YORK
PA 410 PARK AVENUE, 15TH FLOOR, #287 PMB, NEW YORK, NY 10022 USA
SN 1024-123X
EI 1563-5147
J9 MATH PROBL ENG
JI Math. Probl. Eng.
PY 2015
AR 837540
DI 10.1155/2015/837540
PG 15
WC Engineering, Multidisciplinary; Mathematics, Interdisciplinary
Applications
SC Engineering; Mathematics
GA CE1AW
UT WOS:000351544700001
OA gold
DA 2018-01-22
ER

PT J
AU Pransky, J
AF Pransky, Joanne
TI The Pransky interview: Dr Rodney Brooks, Robotics Entrepreneur, Founder
and CTO of Rethink Robotics
SO INDUSTRIAL ROBOT-AN INTERNATIONAL JOURNAL
LA English
DT Editorial Material
DE Control; Mobile robots; Humanoid robots; Machine intelligence;
Artificial Intelligence; Cooperative robots
AB Purpose - This article, a "Q&A interview" conducted by Joanne Pransky of
Industrial Robot Journal, aims to impart the combined technological, business, and
personal experience of a prominent robotic-industry engineer-turned-entrepreneur
regarding the evolution, commercialization, and challenges of bringing a
technological invention to market.
Design/methodology/approach - The interviewee is Dr Rodney Brooks, the Panasonic
Professor of Robotics (emeritus), Massachusetts Institute of Technology (MIT),
Computer Science and Artificial Intelligence Lab; Founder, Chief Technical Officer
(CTO) and Chairman of Rethink Robotics. Dr Brooks shares some of his underlying
principles in technology, academia and business, as well as past and future
challenges.
Findings - Dr Brooks received degrees in pure mathematics from the Flinders
University of South Australia and a PhD in computer science from Stanford
University in 1981. He held research positions at Carnegie Mellon University and
MIT, and a faculty position at Stanford before joining the faculty of MIT in 1984.
He is also a Founder, Board Member and former CTO (1991-2008) of iRobot Corp
(Nasdaq: IRBT). Dr Brooks is the former Director (1997-2007) of the MIT Artificial
Intelligence Laboratory and then the MIT Computer Science & Artificial Intelligence
Laboratory. He founded Rethink Robotics (formerly Heartland Robotics) in 2008.
Originality/value - While at MIT, in 1988, Dr Brooks built Genghis, a hexapodal
walker, designed for space exploration (which was on display for ten years in the
Smithsonian National Air and Space Museum in Washington, D.C.). Genghis was one of
the first robots that utilized Brooks' pioneering subsumption architecture. Dr
Brooks' revolutionary behavior-based approach underlies the autonomous robots of
iRobot, which has sold more than 12 million home robots worldwide, and has deployed
more than 5,000 defense and security robots; and Rethink Robotics' Baxter, the
world's first interactive production robot. Dr Brooks has won the Computers and
Thought Award at the 1991 International Joint Conference on Artificial
Intelligence, the 2008 IEEE Inaba Technical Award for Innovation Leading to
Production, the 2014 Robotics Industry Association's Engelberger Robotics Award for
Leadership and the 2015 IEEE Robotics and Automation Award.
C1 [Pransky, Joanne] Sankyo Robot, Tokyo, Japan.
EM drjoanne@robot.md
NR 0
TC 0
Z9 0
U1 2
U2 21
PU EMERALD GROUP PUBLISHING LIMITED
PI BINGLEY
PA HOWARD HOUSE, WAGON LANE, BINGLEY BD16 1WA, W YORKSHIRE, ENGLAND
SN 0143-991X
EI 1758-5791
J9 IND ROBOT
JI Ind. Robot
PY 2015
VL 42
IS 1
BP 1
EP 4
DI 10.1108/IR-10-2014-0406
PG 4
WC Engineering, Industrial; Robotics
SC Engineering; Robotics
GA CB5CM
UT WOS:000349645100001
DA 2018-01-22
ER

PT J
AU Wang, J
Jiu, JT
Nogi, M
Sugahara, T
Nagao, S
Koga, H
He, P
Suganuma, K
AF Wang, Jun
Jiu, Jinting
Nogi, Masaya
Sugahara, Tohru
Nagao, Shijo
Koga, Hirotaka
He, Peng
Suganuma, Katsuaki
TI A highly sensitive and flexible pressure sensor with electrodes and
elastomeric interlayer containing silver nanowires
SO NANOSCALE
LA English
DT Article
ID PERCOLATION-THRESHOLD; CARBON NANOTUBES; DIELECTRIC-PROPERTIES; STRAIN
SENSORS; TRANSPARENT; COMPOSITE; TOUCH; SKIN; POLYURETHANE; CONSTANT
AB The next-generation applications of pressure sensors are gradually being extended
to include electronic artificial skin (e-skin), wearable devices, humanoid robotics
and smart prosthetics. In these advanced applications, high sensing capability is
an essential feature for high performance. Although surface patterning treatments
and some special elastomeric interlayers have been applied to improve sensitivity,
the process is complex and this inevitably raises the cost and is an obstacle to
large-scale production. In the present study a simple printing process without
complex patterning has been used for constructing the sensor, and an interlayer is
employed comprising elastomeric composites filled with silver nanowires. By
increasing the relative permittivity, ε_r, of the composite interlayer induced by
compression at high nanowire concentration, it has been possible to achieve a
maximum sensitivity of 5.54 kPa(-1). The improvement in sensitivity did not
sacrifice or undermine the other features of the sensor. Thanks to the silver
nanowire electrodes, the sensor is flexible and stable after 200 cycles at a
bending radius of 2 mm, and exhibits outstanding reproducibility without hysteresis
under similar pressure pulses. The sensor has been readily integrated onto an
adhesive bandage and has been successful in detecting human movements. In addition
to measuring pressure in direct contact, non-contact pressures such as air flow can
also be detected.
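A minimal sketch of how a sensitivity in kPa^-1 can be derived from a parallel-plate capacitance model, assuming made-up permittivity, gap, and pressure values; it is not the device model or the data reported in the record above.

import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r, area, gap):
    # Parallel-plate capacitance of the elastomeric interlayer.
    return EPS0 * eps_r * area / gap

# Illustrative (made-up) calibration points: applied pressure, permittivity, and gap under compression.
pressure_kpa = np.array([0.0, 0.5, 1.0, 2.0])
eps_r        = np.array([4.0, 6.5, 9.5, 16.0])   # assumed increase with compression
gap_m        = np.array([100e-6, 97e-6, 94e-6, 90e-6])
area_m2      = 1e-4                               # 1 cm^2 electrode overlap (assumed)

c = capacitance(eps_r, area_m2, gap_m)
rel_change = (c - c[0]) / c[0]                    # ΔC / C0

# Sensitivity S = d(ΔC/C0)/dp estimated per pressure interval (units: kPa^-1).
s = np.diff(rel_change) / np.diff(pressure_kpa)
print("relative capacitance change:", np.round(rel_change, 3))
print("sensitivity per interval (kPa^-1):", np.round(s, 2))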
C1 [Wang, Jun; He, Peng] Harbin Inst Technol, State Key Lab Adv Welding & Joining,
Harbin 150001, Peoples R China.
[Wang, Jun; Jiu, Jinting; Nogi, Masaya; Sugahara, Tohru; Nagao, Shijo; Koga,
Hirotaka; Suganuma, Katsuaki] Osaka Univ, Inst Sci & Ind Res, Osaka, Ibaraki
5670047, Japan.
RP Jiu, JT (reprint author), Osaka Univ, Inst Sci & Ind Res, Osaka, Ibaraki
5670047, Japan.
EM jiu@eco.sanken.osaka-u.ac.jp; hithepeng@hit.edu.cn
RI Nagao, Shijo/E-4870-2012; Koga, Hirotaka/F-6308-2016
OI Nagao, Shijo/0000-0002-2764-3458; Koga, Hirotaka/0000-0001-6295-1731
FU Showa Denko Co. Ltd; COI Stream Project; China Scholarship Council;
[24226017]
FX This study was partly supported by Showa Denko Co. Ltd, Grant-in-Aid for
Scientific Research (Kaken S, 24226017) and the COI Stream Project. J.
Wang wishes to express his gratitude for the financial support provided
by the China Scholarship Council for PhD research at Osaka University.
CR Cai L, 2013, SCI REP-UK, V3, DOI 10.1038/srep03048
Choi W, 2014, APPL PHYS LETT, V104, DOI 10.1063/1.4869816
Gelves GA, 2006, ADV FUNCT MATER, V16, P2423, DOI 10.1002/adfm.200600336
GRANNAN DM, 1981, PHYS REV LETT, V46, P375, DOI 10.1103/PhysRevLett.46.375
Hou CY, 2014, ADV MATER, V26, P5018, DOI 10.1002/adma.201401367
Hu WL, 2013, APPL PHYS LETT, V102, DOI 10.1063/1.4794143
Inoue M., 2008, J JPN I ELECT PACKAG, V11, P136
Jiu J, 2014, J MATER CHEM A, V2, P6326, DOI 10.1039/c4ta00502c
Karabanova LV, 2005, J MATER CHEM, V15, P499, DOI 10.1039/b410178b
Li J, 2007, ADV FUNCT MATER, V17, P3207, DOI 10.1002/adfm.200700065
Li RZ, 2014, ACS APPL MATER INTER, V6, P21721, DOI 10.1021/am506987w
Liang XW, 2014, J NANOPART RES, V16, DOI 10.1007/s11051-014-2578-9
Lin CA, 2010, J APPL PHYS, V108, DOI 10.1063/1.3457351
Lipomi DJ, 2011, NAT NANOTECHNOL, V6, P788, DOI [10.1038/NNANO.2011.184,
10.1038/nnano.2011.184]
Lucarotti C, 2013, SENSORS-BASEL, V13, P1435, DOI 10.3390/s130201435
Mannsfeld SCB, 2010, NAT MATER, V9, P859, DOI [10.1038/nmat2834,
10.1038/NMAT2834]
Mayousse C, 2013, NANOTECHNOLOGY, V24, DOI 10.1088/0957-4484/24/21/215501
Mi HY, 2014, MATER DESIGN, V56, P398, DOI 10.1016/j.matdes.2013.11.029
Qi L, 2005, ADV MATER, V17, P1777, DOI 10.1002/adma.200401816
Rogers JA, 2010, SCIENCE, V327, P1603, DOI 10.1126/science.1182383
Tsangaris GM, 1998, J MATER SCI, V33, P2027, DOI 10.1023/A:1004398514901
Wang C, 2013, NAT MATER, V12, P899, DOI [10.1038/nmat3711, 10.1038/NMAT3711]
Wang J., 2014, NANO-MICRO LETT, DOI [10.1007/s40820-014-0018-0, DOI
10.1007/S40820-014-0018-0]
Wang L, 2005, APPL PHYS LETT, V87, DOI 10.1063/1.1996842
Wang XL, 2013, J MATER CHEM A, V1, P3580, DOI 10.1039/c3ta00079f
Wongtimnoi K, 2011, COMPOS SCI TECHNOL, V71, P885, DOI
10.1016/j.compscitech.2011.02.003
Yao S., 2013, NANOSCALE, V6, P2345
Zhao XD, 2014, MATER DESIGN, V56, P807, DOI 10.1016/j.matdes.2013.11.073
NR 28
TC 54
Z9 55
U1 15
U2 156
PU ROYAL SOC CHEMISTRY
PI CAMBRIDGE
PA THOMAS GRAHAM HOUSE, SCIENCE PARK, MILTON RD, CAMBRIDGE CB4 0WF, CAMBS,
ENGLAND
SN 2040-3364
EI 2040-3372
J9 NANOSCALE
JI Nanoscale
PY 2015
VL 7
IS 7
BP 2926
EP 2932
DI 10.1039/c4nr06494a
PG 7
WC Chemistry, Multidisciplinary; Nanoscience & Nanotechnology; Materials
Science, Multidisciplinary; Physics, Applied
SC Chemistry; Science & Technology - Other Topics; Materials Science;
Physics
GA CB2QQ
UT WOS:000349473200014
PM 25588044
DA 2018-01-22
ER

PT J
AU Tangkaratt, V
Xie, N
Sugiyama, M
AF Tangkaratt, Voot
Xie, Ning
Sugiyama, Masashi
TI Conditional Density Estimation with Dimensionality Reduction via
Squared-Loss Conditional Entropy Minimization
SO NEURAL COMPUTATION
LA English
DT Article
ID INVERSE REGRESSION; DIVERGENCE
AB Regression aims at estimating the conditional mean of output given input.
However, regression is not informative enough if the conditional density is
multimodal, heteroskedastic, and asymmetric. In such a case, estimating the
conditional density itself is preferable, but conditional density estimation (CDE)
is challenging in high-dimensional space. A naive approach to coping with high
dimensionality is to first perform dimensionality reduction (DR) and then execute
CDE. However, a two-step process does not perform well in practice because the
error incurred in the first DR step can be magnified in the second CDE step. In
this letter, we propose a novel single-shot procedure that performs CDE and DR
simultaneously in an integrated way. Our key idea is to formulate DR as the problem
of minimizing a squared-loss variant of conditional entropy, and this is solved
using CDE. Thus, an additional CDE step is not needed after DR. We demonstrate the
usefulness of the proposed method through extensive experiments on various data
sets, including humanoid robot transition and computer art.
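A minimal sketch of the naive two-step baseline the abstract argues against (dimensionality reduction followed by conditional density estimation), assuming PCA plus a Gaussian-kernel estimator on synthetic data; the proposed single-shot squared-loss conditional entropy method is not reproduced here.

import numpy as np

def pca_reduce(X, k):
    # Step 1 of the naive two-step baseline: linear dimensionality reduction.
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T, vt[:k]

def kernel_cde(Z, y, z_query, y_grid, hz=0.5, hy=0.3):
    # Step 2: kernel-weighted conditional density p(y | z) on the reduced inputs.
    wz = np.exp(-np.sum((Z - z_query) ** 2, axis=1) / (2 * hz ** 2))
    dens = np.array([np.sum(wz * np.exp(-(y - yg) ** 2 / (2 * hy ** 2))) for yg in y_grid])
    dens /= dens.sum() * (y_grid[1] - y_grid[0]) + 1e-12   # normalize over the y grid
    return dens

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                      # 10-dimensional input
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)    # output depends on one direction only

Z, components = pca_reduce(X, k=2)                  # DR step (may already discard the relevant direction)
x_query = np.zeros(10); x_query[0] = 1.0
z_query = (x_query - X.mean(axis=0)) @ components.T
y_grid = np.linspace(-2.0, 2.0, 201)
p = kernel_cde(Z, y, z_query, y_grid)               # CDE step executed on the reduced input
print("estimated conditional mode of p(y | x):", y_grid[int(np.argmax(p))])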
C1 [Tangkaratt, Voot; Xie, Ning; Sugiyama, Masashi] Tokyo Inst Technol, Dept Comp
Sci, Meguro Ku, Tokyo 1528552, Japan.
RP Tangkaratt, V (reprint author), Tokyo Inst Technol, Dept Comp Sci, Meguro Ku,
Tokyo 1528552, Japan.
EM voot@sg.cs.titech.ac.jp; xie@sg.cs.titech.ac.jp; sugi@cs.titech.ac.jp
CR ALI SM, 1966, J ROY STAT SOC B, V28, P131
Amari S, 1998, NEURAL COMPUT, V10, P251, DOI 10.1162/089976698300017746
Bache K., 2013, UCI MACHINE LEARNING
Basu A, 1998, BIOMETRIKA, V85, P549, DOI 10.1093/biomet/85.3.549
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Cook RD, 2005, J AM STAT ASSOC, V100, P410, DOI 10.1198/016214504000001501
Csiszar I., 1967, STUD SCI MATH HUNG, V2, P229
Edelman A, 1998, SIAM J MATRIX ANAL A, V20, P303, DOI 10.1137/S0895479895290954
Fukumizu K, 2009, ANN STAT, V37, P1871, DOI 10.1214/08-AOS637
Huber P. J., 1981, ROBUST STAT
Keziou A, 2003, CR MATH, V336, P857, DOI 10.1016/S1631-073X(03)00215-2
KULLBACK S, 1951, ANN MATH STAT, V22, P79, DOI 10.1214/aoms/1177729694
LI KC, 1991, J AM STAT ASSOC, V86, P316, DOI 10.2307/2290563
Nguyen XL, 2010, IEEE T INFORM THEORY, V56, P5847, DOI 10.1109/TIT.2010.2068870
Pearson K, 1900, PHILOS MAG, V50, P157, DOI 10.1080/14786440009463897
Reich BJ, 2011, BIOMETRICS, V67, P886, DOI 10.1111/j.1541-0420.2010.01501.x
Sugiyama M., 2010, P 13 INT C ART INT S, P781
Sugiyama M, 2012, ANN I STAT MATH, V64, P1009, DOI 10.1007/s10463-011-0343-8
Sutton RS, 1998, REINFORCEMENT LEARNI
Suzuki T, 2013, NEURAL COMPUT, V25, P725, DOI 10.1162/NECO_a_00407
Wang H, 2008, J AM STAT ASSOC, V103, P811, DOI 10.1198/016214508000000418
Xia YC, 2007, ANN STAT, V35, P2654, DOI 10.1214/009053607000000352
XIE N., 2012, P INT C MACH LEARN, P153
NR 23
TC 3
Z9 3
U1 0
U2 3
PU MIT PRESS
PI CAMBRIDGE
PA ONE ROGERS ST, CAMBRIDGE, MA 02142-1209 USA
SN 0899-7667
EI 1530-888X
J9 NEURAL COMPUT
JI Neural Comput.
PD JAN
PY 2015
VL 27
IS 1
BP 228
EP 254
DI 10.1162/NECO_a_00683
PG 27
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA AX2GY
UT WOS:000346762700010
PM 25380340
DA 2018-01-22
ER

PT J
AU Otani, T
Hashimoto, K
Yahara, M
Miyamae, S
Isomichi, T
Hanawa, S
Sakaguchi, M
Kawakami, Y
Lim, HO
Takanishi, A
AF Otani, Takuya
Hashimoto, Kenji
Yahara, Masaaki
Miyamae, Shunsuke
Isomichi, Takaya
Hanawa, Shintaro
Sakaguchi, Masanori
Kawakami, Yasuo
Lim, Hun-Ok
Takanishi, Atsuo
TI Utilization of human-like pelvic rotation for running robot
SO FRONTIERS IN ROBOTICS AND AI
LA English
DT Article
DE humanoid; human motion analysis; running; pelvis; joint elasticity
AB The spring loaded inverted pendulum is used to model human running. It is based
on a characteristic feature of human running, in which the linear-spring-like
motion of the standing leg is produced by the joint stiffness of the knee and
ankle. Although this model is widely used in robotics, it does not include human-
like pelvic motion. In this study, we show that the pelvis actually contributes to
the increase in jumping force and absorption of landing impact. On the basis of
this finding, we propose a new model, spring loaded inverted pendulum with pelvis,
to improve running in humanoid robots. The model is composed of a body mass, a
pelvis, and leg springs, and it can control its springs while running by using
pelvic movement in the frontal plane. To achieve running motions, we developed a
running control system that includes a pelvic oscillation controller to attain
control over jumping power and a landing placement controller to adjust the running
speed. We also developed a new running robot by using the SLIP2 model and performed
hopping and running experiments to evaluate the model. The developed robot could
accomplish hopping motions only by pelvic movement. The results also established
that the difference between the pelvic rotational phase and the oscillation phase
of the vertical mass displacement affects the jumping force. In addition, the robot
demonstrated the ability to run with a foot placement controller depending on the
reference running speed.
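A minimal sketch of the basic planar spring loaded inverted pendulum stance phase that the abstract above builds on, assuming illustrative mass, stiffness, and touch-down state; the pelvis extension (SLIP2) is not modeled.

import numpy as np

m, k, l0, g = 60.0, 15000.0, 1.0, 9.81   # mass, leg stiffness, rest leg length, gravity (assumed values)

def slip_stance(x, z, vx, vz, foot_x=0.0, dt=1e-4, t_max=1.0):
    # Integrate planar SLIP stance dynamics until the leg spring returns to its rest length.
    t = 0.0
    while t < t_max:
        dx, dz = x - foot_x, z
        l = np.hypot(dx, dz)
        f = k * (l0 - l)                  # spring force along the leg (positive = pushing)
        ax = f * dx / l / m
        az = f * dz / l / m - g
        vx += ax * dt; vz += az * dt
        x += vx * dt;  z += vz * dt
        t += dt
        if l >= l0 and vz > 0:            # leg back to rest length while moving upward: take-off
            break
    return x, z, vx, vz

# Touch-down state: center of mass slightly behind the foot, moving forward and downward.
print(slip_stance(x=-0.15, z=0.97, vx=3.0, vz=-0.5))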
C1 [Otani, Takuya] Waseda Univ, Grad Sch Adv Sci & Engn, Tokyo, Japan.
[Otani, Takuya] Japan Soc Promot Sci, Tokyo, Japan.
[Hashimoto, Kenji] Waseda Inst Adv Study, Tokyo, Japan.
[Hashimoto, Kenji; Lim, Hun-Ok; Takanishi, Atsuo] Waseda Univ, Humanoid Robot
Inst, Tokyo, Japan.
[Yahara, Masaaki; Isomichi, Takaya] Waseda Univ, Grad Sch Creat Sci & Engn,
Tokyo, Japan.
[Miyamae, Shunsuke; Hanawa, Shintaro; Sakaguchi, Masanori; Kawakami, Yasuo]
Waseda Univ, Fac Sport Sci, Tokyo, Japan.
[Sakaguchi, Masanori] Univ Calgary, Fac Kinesiol, Calgary, AB, Canada.
[Lim, Hun-Ok] Kanagawa Univ, Fac Engn, Yokohama, Kanagawa, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Tokyo, Japan.
RP Otani, T (reprint author), Waseda Univ, Grad Sch Adv Sci & Engn, Shinjuku Ku,
41-304 17 Kikui Cho, Tokyo 1620044, Japan.
EM t-otan@takanishi.mech.waseda.ac.jp
FU MEXT/JSPS KAKENHI [25220005, 25709019]; Mizuho Foundation for the
Promotion of Sciences; SolidWorks Japan K.K; DYDEN Corporation; Cybernet
Systems Co., Ltd.; Research Institute for Science and Engineering,
Waseda University; Institute of Advanced Active Aging Research, Waseda
University; Humanoid Robotics Institute, Waseda University
FX This study was conducted with the support of the Research Institute for
Science and Engineering, Waseda University; Institute of Advanced Active
Aging Research, Waseda University, and as part of the humanoid project
at the Humanoid Robotics Institute, Waseda University. It was also
supported in part by the MEXT/JSPS KAKENHI Grant No. 25220005 and
25709019; Mizuho Foundation for the Promotion of Sciences; SolidWorks
Japan K.K.; DYDEN Corporation and Cybernet Systems Co., Ltd.; we thank
all of them for the financial and technical support provided.
CR BLICKHAN R, 1989, J BIOMECH, V22, P1217, DOI 10.1016/0021-9290(89)90224-8
CAVAGNA GA, 1988, J PHYSIOL-LONDON, V399, P81
CHAPMAN AE, 1983, J BIOMECH, V16, P69, DOI 10.1016/0021-9290(83)90047-7
Collins SH, 2009, P R SOC B, V276, P3679, DOI 10.1098/rspb.2009.0664
Dalleau G, 1998, EUR J APPL PHYSIOL O, V77, P257, DOI 10.1007/s004210050330
Farley CT, 1996, J BIOMECH, V29, P181, DOI 10.1016/0021-9290(95)00029-1
Ferber R, 2003, CLIN BIOMECH, V18, P350, DOI 10.1016/S0268-0033(03)00025-1
Grizzle JW, 2009, P AMER CONTR CONF, P2030, DOI 10.1109/ACC.2009.5160550
Gunther M, 2002, J BIOMECH, V35, P1459, DOI 10.1016/S0021-9290(02)00183-5
Hashimoto K, 2014, SCI WORLD J, DOI 10.1155/2014/259570
Hashimoto K, 2013, ADV ROBOTICS, V27, P541, DOI 10.1080/01691864.2013.777015
Hyon SH, 2003, P I MECH ENG I-J SYS, V217, P83, DOI 10.1243/095965103321512800
Kajita S., 2008, SPRINGER HDB ROBOTIC, P361, DOI DOI 10.1007/978-3-540-30301-
5_17
Lee SH, 2012, AUTON ROBOT, V33, P399, DOI 10.1007/s10514-012-9294-z
Lin ZH, 2011, IEEE ENG MED BIO, P6927, DOI 10.1109/IEMBS.2011.6091751
MCMAHON TA, 1990, J BIOMECH, V23, P65, DOI 10.1016/0021-9290(90)90042-2
Niiyama R, 2012, ADV ROBOTICS, V26, P383, DOI 10.1163/156855311X614635
Ogura Y, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-12, P3976, DOI 10.1109/IROS.2006.281834
POZZO T, 1990, EXP BRAIN RES, V82, P97
Raibert M.H., 1986, LEGGED ROBOTS BALANC
Rose J., 2005, HUMAN WALKING
Schache AG, 2002, HUM MOVEMENT SCI, V21, P273, DOI 10.1016/S0167-9457(02)00080-5
Tajima R., 2009, P IEEE INT C ROB AUT, P1571
Takenaka T., 2011, JRSJ, V29, P93
Wensing PM, 2013, IEEE INT C INT ROBOT, P5134, DOI 10.1109/IROS.2013.6697099
World Medical Association, 1964, DECL HELS
Zhao Y, 2012, IEEE-RAS INT C HUMAN, P726, DOI 10.1109/HUMANOIDS.2012.6651600
NR 27
TC 0
Z9 0
U1 0
U2 0
PU FRONTIERS MEDIA SA
PI LAUSANNE
PA PO BOX 110, EPFL INNOVATION PARK, BUILDING I, LAUSANNE, 1015,
SWITZERLAND
SN 2296-9144
J9 FRONT ROBOT AI
JI Front. Robot. AI
PY 2015
AR 17
DI 10.3389/frobt.2015.00017
PG 9
WC Robotics
SC Robotics
GA V7W7Z
UT WOS:000421362200010
OA gold
DA 2018-01-22
ER

PT J
AU Solis, J
Ozawa, K
Takeuchi, M
Kusano, T
Ishikawa, S
Petersen, K
Takanishi, A
AF Solis, Jorge
Ozawa, Kenichiro
Takeuchi, Maasaki
Kusano, Takafumi
Ishikawa, Shimpei
Petersen, Klaus
Takanishi, Atsuo
TI Biologically-inspired Control Architecture for Musical Performance
Robots
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Biologically-inspired Robotics; Feedback Error Learning Control; Music
ID ANTHROPOMORPHIC FLUTIST; IMPLEMENTATION; SYSTEM; MODEL
AB At Waseda University, since 1990, the authors have been developing
anthropomorphic musical performance robots as a means for understanding human
control, introducing novel ways of interaction between musical partners and robots,
and proposing applications for humanoid robots. In this paper, the design of a
biologically-inspired control architecture for both an anthropomorphic flutist
robot and a saxophone-playing robot is described. As for the flutist robot, the
authors have focused on implementing an auditory feedback system to improve the
calibration procedure for the robot in order to play all the notes correctly during
a performance. In particular, the proposed auditory feedback system is composed of
three main modules: an Expressive Music Generator, a Feed Forward Air Pressure
Control System and a Pitch Evaluation System. As for the saxophone-playing robot, a
pressure-pitch controller based on feedback error learning was proposed and
implemented to improve the sound produced by the robot during a musical
performance. In both cases studied, a set of experiments is described to verify
the improvements achieved while considering biologically-inspired control
approaches.
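A minimal sketch of the feedback error learning scheme named in the abstract above, assuming a toy first-order plant and a linear feedforward model; it is not the robot's pressure-pitch controller.

import numpy as np

def plant(p, u, a=0.9, b=0.2):
    # Toy first-order "air pressure -> pitch" plant (illustrative only).
    return a * p + b * u

# Feedback error learning: a simple feedback controller stabilizes tracking while its
# output serves as the error signal that trains a linear feedforward inverse model.
w = np.zeros(2)                      # inverse-model weights on [target, previous output]
kp = 2.0                             # feedback gain (assumed)
rng = np.random.default_rng(0)

for episode in range(200):
    y, target = 0.0, rng.uniform(0.5, 1.5)
    for _ in range(50):
        phi = np.array([target, y])              # inverse-model features
        u_ff = w @ phi                           # feedforward command
        u_fb = kp * (target - y)                 # feedback command
        y = plant(y, u_ff + u_fb)
        w += 0.01 * u_fb * phi                   # FEL update: feedback output used as training error

print("learned inverse-model weights:", np.round(w, 3))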
C1 [Solis, Jorge] Karlstad Univ, Dept Engn & Phys, Karlstad, Sweden.
[Solis, Jorge; Petersen, Klaus] Waseda Univ, Res Inst Sci & Engn, Tokyo, Japan.
[Solis, Jorge; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Tokyo, Japan.
[Ozawa, Kenichiro; Takeuchi, Maasaki; Kusano, Takafumi; Ishikawa, Shimpei]
Waseda Univ, Grad Sch Adv Sci & Engn, Tokyo, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Mech Engn, Tokyo, Japan.
RP Solis, J (reprint author), Karlstad Univ, Dept Engn & Phys, Karlstad, Sweden.
EM solis@ieee.org
FU Japanese Ministry of Education, Culture, Sports, Science and Technology
[23700238]
FX Part of the research on Musical Instrument-Playing Robot was done at the
Humanoid Robotics Institute (HRI), Waseda University and at the Center
for Advanced Biomedical Sciences (TWINs). This study is supported (in
part) by a Grant-in-Aid for Young Scientists (B) provided by the
Japanese Ministry of Education, Culture, Sports, Science and Technology,
No. 23700238 (Jorge Solis, PI).
CR Ando Y., 1970, J ACOUSTICAL SOC JAP, P297
Ando Y., 1970, J ACOUST SOC JAPAN, V26, P297
Asfour T., 2000, P INT C HUM ROB
Bishop CM, 2004, NEURAL NETWORKS PATT
Brisan C, 2006, 2006 IEEE-TTTC INTERNATIONAL CONFERENCE ON AUTOMATION, QUALITY
AND TESTING, ROBOTICS, VOL 2, PROCEEDINGS, P272
Carbone G, 2007, P I MECH ENG C-J MEC, V221, P829, DOI 10.1243/0954406JMES367
Chen-Gang C., 2014, INT J PRECIS ENG MAN, V15, P1759
Dannenberg R. B., 2005, P 2005 INT C NEW INT, P80
Degallier S., 2006, P IEEE RAS INT C HUM, P512
Doyon Andre, 1966, J VAUCANSON MECANICI
Fletcher N., 2001, P INT S MUS AC PER, P87
Guillemain P, 2010, ACTA ACUST UNITED AC, V96, P622, DOI 10.3813/AAA.918317
ISODA S, 2003, P IEEE INT C ROB AUT, P3582
Karayiannidis Y., 2012, P IEEE RSJ INT C INT, P4040
KATAYOSE H, 1993, COMPUT HUMANITIES, V27, P31, DOI 10.1007/BF01830715
Kato I., 1987, Robotics, V3, P143, DOI 10.1016/0167-8493(87)90002-7
KAWATO M, 1987, BIOL CYBERN, V57, P169, DOI 10.1007/BF00364149
KAWATO M, 1992, BIOL CYBERN, V68, P95, DOI 10.1007/BF00201431
Kim HS, 2003, IEEE T CONTR SYST T, V11, P279, DOI 10.1109/TCST.2003.809251
Klaedefabrik K.B., 2005, M RICHES MASKINERNE, P10
Mukai S., 1992, P INT S MUS AC, P239
Patel R., 2014, P 14 MECH FOR INT C, P427
Petersen K., 2010, AUTONOMOUS ROBOTS J, V28, P439
Ruther M, 2010, I C CONT AUTOMAT ROB, P169, DOI 10.1109/ICARCV.2010.5707268
Solis J., 2008, MECH MACH THEORY, V44, P527
Solis J., 2011, P INT C ROB AUT, P3976
Solis J., 2013, IEEE WORKSH ADV ROB, P200
Solis J., 2014, P 14 MECH FOR INT C, P377
Solis J., 2012, P 2 IFTOMM AS C MECH
Solis J, 2006, INT J HUM ROBOT, V3, P127, DOI 10.1142/S0219843606000709
Solis J, 2010, IEEE INT C INT ROBOT, P1943, DOI 10.1109/IROS.2010.5649257
Solis J, 2009, ADV ROBOTICS, V23, P1849, DOI 10.1163/016918609X12518783330207
Spiers A., 2010, AM CONTR C
Takashima S., 2006, P IEEE RSJ INT C INT, P30
Vaucanson J. de., 1979, FLUTE LIB, V5
WANG DS, 2005, P INT C MACH LEARN C, V7, P4064
Yao B, 2001, AUTOMATICA, V37, P1305, DOI 10.1016/S0005-1098(01)00082-6
Zhuang H., 2008, P IEEE INT C ROB AUT, V2, P981
NR 38
TC 0
Z9 0
U1 1
U2 13
PU INTECH EUROPE
PI RIJEKA
PA JANEZA TRDINE 9, RIJEKA, 51000, CROATIA
SN 1729-8806
EI 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD OCT 22
PY 2014
VL 11
AR 11
DI 10.5772/59232
PG 11
WC Robotics
SC Robotics
GA AR9MW
UT WOS:000343900300001
OA gold
DA 2018-01-22
ER

PT J
AU Chau, TD
Li, JF
Akagi, M
AF Thanh-Duc Chau
Li, Junfeng
Akagi, Masato
TI Binaural Sound Source Localization in Noisy Reverberant Environments
Based on Equalization-Cancellation Theory
SO IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND
COMPUTER SCIENCES
LA English
DT Article
DE binaural sound localization; equalization-cancellation model; noisy
reverberant environments; humanoid robot
ID LEVEL DIFFERENCES; DISTANCE; DELAY; ROOMS; TIME
AB Sound source localization (SSL), with a binaural input in practical
environments, is a challenging task due to the effects of noise and reverberation.
In psychoacoustic research field, one of the theories to explain the mechanism of
human perception in such environments is the well-known equalization-cancellation
(EC) model. Motivated by the EC theory, this paper investigates a binaural SSL
method by integrating EC procedures into a beamforming technique. The principle
idea is that the EC procedures are first utilized to eliminate the sound signal
component at each candidate direction respectively; direction of sound source is
then determined as the direction at which the residual energy is minimal. The EC
procedures applied in the proposed method differ from those in traditional EC
models in that the interference signals present in rooms are accounted for in the
E and C operations using limited prior information. Experimental results
demonstrate that our proposed method outperforms the traditional SSL algorithms in
the presence of noise and reverberation simultaneously.
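A minimal sketch of the equalization-cancellation search described above, assuming a two-microphone far-field delay model with illustrative geometry and noise; the paper's integration with beamforming and its treatment of reverberation are not reproduced.

import numpy as np

fs = 44100                                  # sample rate (Hz), assumed
c, d = 343.0, 0.18                          # speed of sound (m/s), microphone spacing (m), assumed

def simulate(angle_deg, n=8192):
    # Two-microphone signal with a pure inter-channel delay for a far-field source.
    rng = np.random.default_rng(2)
    s = rng.normal(size=n)
    shift = int(round(d * np.sin(np.radians(angle_deg)) / c * fs))
    left = s + 0.05 * rng.normal(size=n)
    right = np.roll(s, shift) + 0.05 * rng.normal(size=n)
    return left, right

def ec_localize(left, right, candidates=range(-90, 91, 5)):
    # For each candidate direction: equalize (time-align), cancel (subtract),
    # and keep the direction with the minimum residual energy.
    best, best_angle = np.inf, None
    for ang in candidates:
        shift = int(round(d * np.sin(np.radians(ang)) / c * fs))
        residual = left - np.roll(right, -shift)
        e = np.sum(residual ** 2)
        if e < best:
            best, best_angle = e, ang
    return best_angle

l, r = simulate(30)
print("estimated direction (deg):", ec_localize(l, r))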
C1 [Thanh-Duc Chau; Akagi, Masato] Japan Adv Inst Sci & Technol JAIST, Nomi
9231292, Japan.
[Li, Junfeng] Chinese Acad Sci, Inst Acoust, Beijing 100864, Peoples R China.
RP Chau, TD (reprint author), Japan Adv Inst Sci & Technol JAIST, Nomi 9231292,
Japan.
EM duc.chau@jaist.ac.jp; lijunfeng@hccl.ioa.ac.cn; akagi@jaist.ac.jp
FU A3 Foresight Program made available by the Japan Society for the
Promotion of Science (JSPS); Ministry of Internal Affairs and
Communications (MIC), Japan [131205001]
FX This study was supported by:; The A3 Foresight Program made available by
the Japan Society for the Promotion of Science (JSPS).; The Strategic
Information and Communications R & D Promotion Programme (SCOPE;
131205001) of the Ministry of Internal Affairs and Communications (MIC),
Japan.
CR ALLEN JB, 1979, J ACOUST SOC AM, V65, P943, DOI 10.1121/1.382599
Andersson SB, 2004, IEEE INT CONF ROBOT, P4833, DOI 10.1109/ROBOT.2004.1302483
Berglund E, 2008, AUTON ROBOT, V24, P401, DOI 10.1007/s10514-008-9084-9
Bronkhorst AW, 1999, NATURE, V397, P517, DOI 10.1038/17374
Calmes L., 2009, THESIS AWTH AACHEN U
Campbell D. R., 2004, ROOMSIM USER GUIDE V
Chau D.T., 2010, INTERSPEECH, P2770
CHERRY EC, 1953, J ACOUST SOC AM, V25, P975, DOI 10.1121/1.1907229
Colbum H.S., 2005, SOUND SOURCE LOCALIZ, P276
CULLING JF, 1995, J ACOUST SOC AM, V98, P785, DOI 10.1121/1.413571
Culling JF, 2004, J ACOUST SOC AM, V116, P1057, DOI 10.1121/1.1772396
DiBiase JH, 2001, DIGITAL SIGNAL PROC, P157
DiBiase J.H., 2000, THESIS BROWN U USA
Durlach N, 2005, SPEECH SEPARATION BY HUMANS AND MACHINES, P221, DOI 10.1007/0-
387-22794-6_15
DURLACH NI, 1963, J ACOUST SOC AM, V35, P1206, DOI 10.1121/1.1918675
Durlach N. I., 1972, F MODERN AUDITORY TH, VII, P371
GARDNER WG, 1995, J ACOUST SOC AM, V97, P3907, DOI 10.1121/1.412407
JEFFRESS LA, 1948, J COMP PHYSIOL PSYCH, V41, P35, DOI 10.1037/h0061495
Keyrouz F, 2006, INT CONF ACOUST SPEE, P341
KNAPP CH, 1976, IEEE T ACOUST SPEECH, V24, P320, DOI 10.1109/TASSP.1976.1162830
KUREMATSU A, 1990, SPEECH COMMUN, V9, P357, DOI 10.1016/0167-6393(90)90011-W
LI D, 2002, P 2002 IEEE INT C RO, P19
Li JF, 2011, SPEECH COMMUN, V53, P677, DOI 10.1016/j.specom.2010.04.009
Lu YC, 2010, IEEE T AUDIO SPEECH, V18, P1793, DOI 10.1109/TASL.2010.2050687
Lv XL, 2008, PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND
INFORMATION TECHNOLOGY, P942, DOI 10.1109/ICCSIT.2008.26
MacDonald JA, 2008, J ACOUST SOC AM, V123, P4290, DOI 10.1121/1.2909566
Murray J.C., 2009, NEURAL NETWORKS, V22, P172
Raspaud M, 2010, IEEE T AUDIO SPEECH, V18, P68, DOI 10.1109/TASL.2009.2023644
Roman N, 2006, J ACOUST SOC AM, V120, P4040, DOI 10.1121/1.2355480
Shinn-Cunningham B.G., 2000, INT C AUD DISPL, P126
Stern R. M., 2006, COMPUTATIONAL AUDITO, P147
Wan R, 2010, J ACOUST SOC AM, V128, P3678, DOI 10.1121/1.3502458
Woodruff J, 2012, IEEE T AUDIO SPEECH, V20, P1503, DOI 10.1109/TASL.2012.2183869
Zhang C, 2008, INT CONF ACOUST SPEE, P2565
NR 34
TC 0
Z9 0
U1 0
U2 15
PU IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG
PI TOKYO
PA KIKAI-SHINKO-KAIKAN BLDG, 3-5-8, SHIBA-KOEN, MINATO-KU, TOKYO, 105-0011,
JAPAN
SN 1745-1337
J9 IEICE T FUND ELECTR
JI IEICE Trans. Fundam. Electron. Commun. Comput. Sci.
PD OCT
PY 2014
VL E97A
IS 10
BP 2011
EP 2020
DI 10.1587/transfun.E97.A.2011
PG 10
WC Computer Science, Hardware & Architecture; Computer Science, Information
Systems; Engineering, Electrical & Electronic
SC Computer Science; Engineering
GA AQ4QU
UT WOS:000342784000001
DA 2018-01-22
ER

PT J
AU Ugurlu, B
Saglia, JA
Tsagarakis, NG
Morfey, S
Caldwell, DG
AF Ugurlu, Barkan
Saglia, Jody A.
Tsagarakis, Nikos G.
Morfey, Stephen
Caldwell, Darwin G.
TI Bipedal Hopping Pattern Generation for Passively Compliant Humanoids:
Exploiting the Resonance
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Humanoid; legged hopping; passive compliance; resonance frequency;
trajectory generation
ID LEGGED LOCOMOTION; WALKING; ROBOT; ACTUATORS; DESIGN; MOTION; FORCE;
MODEL
AB This paper describes a novel technique to analytically generate feasible hopping
trajectories that can be applied to passively compliant bipedal humanoid robots.
The proposed method is based on exploiting the inherent passive compliance through
a reduced model, while the dynamic balance is maintained via the zero moment point
(ZMP) criterion. To begin with, a computational resonance frequency analysis and a
system identification routine are performed for the actual robot so as to determine
the base resonance frequency, which is of special interest. Subsequently, the
vertical component of the center of mass trajectory is generated by using a
periodic function in which the aforementioned resonance frequency is utilized. The
horizontal component of the center of mass trajectory is generated via the ZMP
concept to ensure the dynamic balance. Having analytically generated both vertical
and horizontal elements of the center of mass trajectory, joint motions are
computed through the utilization of translational and angular momentum constraints.
Applying the proposed method, a series of periodic forward hopping experiments is
conducted on our actual compliant bipedal robot. As a result, we observe
repetitive, continuous, and dynamically equilibrated forward hopping cycles with
successful landing phases.
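A minimal sketch of the trajectory-generation idea summarized above, assuming an illustrative resonance frequency and oscillation amplitudes: the vertical center-of-mass motion is a periodic function at the resonance frequency, and the zero moment point follows the standard point-mass relation p_x = x - z*xdd/(zdd + g); the values below are not the robot's parameters.

import numpy as np

g = 9.81
f_res = 2.0                # identified base resonance frequency (Hz), assumed value
z0, az = 0.60, 0.03        # nominal CoM height and vertical oscillation amplitude (m), illustrative
x_amp = 0.05               # horizontal CoM oscillation amplitude (m), illustrative

t = np.linspace(0.0, 1.0, 1001)
w = 2 * np.pi * f_res

# Vertical CoM trajectory: a periodic function at the resonance frequency.
z   = z0 + az * np.sin(w * t)
zdd = -az * w ** 2 * np.sin(w * t)

# Horizontal CoM trajectory (same period here, for illustration only).
x   = x_amp * np.sin(w * t)
xdd = -x_amp * w ** 2 * np.sin(w * t)

# Zero moment point under the point-mass model.
zmp = x - z * xdd / (zdd + g)
print("max |ZMP| excursion (m): %.3f" % np.max(np.abs(zmp)))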
C1 [Ugurlu, Barkan] Adv Telecommun Res Inst Int ATR, Dept Brain Robot Interface,
Computat Neurosci Labs, Kyoto 6190288, Japan.
[Saglia, Jody A.; Tsagarakis, Nikos G.; Caldwell, Darwin G.] Ist Italiano
Tecnol, Dept Adv Robot, I-16163 Genoa, Italy.
[Morfey, Stephen] SRI Int, Menlo Pk, CA 94025 USA.
RP Ugurlu, B (reprint author), Adv Telecommun Res Inst Int ATR, Dept Brain Robot
Interface, Computat Neurosci Labs, Kyoto 6190288, Japan.
EM barkanu@ieee.org; jody.saglia@iit.it; nikos.tsagarakis@iit.it;
stephen.morfey@sri.com; darwin.caldwell@iit.it
RI Ugurlu, Barkan/A-4793-2015
OI Ugurlu, Barkan/0000-0002-9124-7441
FU European Commission [ICT-248311]
FX Manuscript received October 25, 2012; revised September 20, 2013;
accepted November 30, 2013. Date of publication January 14, 2014; date
of current version May 2, 2014. This work was supported by the European
Commission FP7, "AMARSi" project ICT-248311.
CR Adhikari S., 2000, THESIS CAMBRIDGE U C
Ahmadi M, 2006, IEEE T ROBOT, V22, P974, DOI 10.1109/TRO.2006.878935
BLICKHAN R, 1989, J BIOMECH, V22, P1217, DOI 10.1016/0021-9290(89)90224-8
Buchli J, 2008, AUTON ROBOT, V25, P331, DOI 10.1007/s10514-008-9099-2
Ghorbel F, 1997, J DYN SYST-T ASME, V119, P110, DOI 10.1115/1.2801200
Hansen AH, 2004, J BIOMECH, V37, P1467, DOI 10.1016/j.jbiomech.2004.01.017
Hong YD, 2014, IEEE T IND ELECTRON, V61, P2346, DOI 10.1109/TIE.2013.2267691
Hosoda K, 2008, ROBOT AUTON SYST, V56, P46, DOI 10.1016/j.robot.2007.09.010
Hyon SH, 2009, IEEE T ROBOT, V25, P171, DOI 10.1109/TRO.2008.2006870
Ishikawa M, 2005, J APPL PHYSIOL, V99, P603, DOI 10.1152/japplphysiol.00189.2005
Jin J, 2013, IEEE T IND ELECTRON, V60, P3796, DOI 10.1109/TIE.2012.2205349
Laffranchi M, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P5678, DOI 10.1109/IROS.2009.5354349
Moro FL, 2014, AUTON ROBOT, V36, P331, DOI 10.1007/s10514-013-9357-9
MURAKAMI T, 1993, IEEE T IND ELECTRON, V40, P259, DOI 10.1109/41.222648
Ott C., 2011, P IEEE RAS INT C HUM, P26
RAIBERT MH, 1984, INT J ROBOT RES, V3, P2, DOI 10.1177/027836498400300201
Roberts TJ, 1997, SCIENCE, V275, P1113, DOI 10.1126/science.275.5303.1113
Seyfarth A, 2002, J BIOMECH, V35, P649, DOI 10.1016/S0021-9290(01)00245-7
Shimmyo S, 2013, IEEE T IND ELECTRON, V60, P5137, DOI 10.1109/TIE.2012.2221111
Soltoggio A, 2012, KUNSTL INTELL, V26, P407, DOI 10.1007/s13218-012-0192-5
SPONG MW, 1987, J DYN SYST-T ASME, V109, P310, DOI 10.1115/1.3143860
SUGIHARA T, 2007, P IEEE C INT ROB SYS, P444
Tsagarakis N. G., 2009, P IEEE INT C ROB AUT, P4356
Ugurlu B., 2011, Proceedings of the 2011 IEEE International Conference on
Mechatronics (ICM), P833, DOI 10.1109/ICMECH.2011.5971230
Ugurlu B, 2014, IEEE T IND ELECTRON, V61, P1632, DOI 10.1109/TIE.2013.2259776
Ugurlu B, 2012, IEEE INT C INT ROBOT, P1846, DOI 10.1109/IROS.2012.6385819
Ugurlu B, 2012, IEEE INT CONF ROBOT, P1436, DOI 10.1109/ICRA.2012.6224909
Ugurlu B, 2010, IEEE T IND ELECTRON, V57, P1701, DOI 10.1109/TIE.2009.2032439
Van Ham R, 2009, IEEE ROBOT AUTOM MAG, V16, P81, DOI 10.1109/MRA.2009.933629
Vanderborght B, 2008, ADV ROBOTICS, V22, P1027, DOI 10.1163/156855308X324749
XU F, 2008, P JPN I EL ENG TECH, P7
Yu XX, 2014, IEEE T IND ELECTRON, V61, P1053, DOI 10.1109/TIE.2013.2266080
Zinn M, 2004, IEEE ROBOT AUTOM MAG, V11, P12, DOI 10.1109/MRA.2004.1310938
NR 33
TC 7
Z9 7
U1 0
U2 23
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD OCT
PY 2014
VL 61
IS 10
BP 5431
EP 5443
DI 10.1109/TIE.2014.2300060
PG 13
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA AH5YT
UT WOS:000336208200030
DA 2018-01-22
ER

PT J
AU Omer, A
Hashimoto, K
Lim, HO
Takanishi, A
AF Omer, Aiman
Hashimoto, Kenji
Lim, Hun-ok
Takanishi, Atsuo
TI Study of Bipedal Robot Walking Motion in Low Gravity: Investigation and
Analysis Regular Paper
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Bipedal Locomotion; Planetary Exploration; Froude Number; Humanoid Robot
ID LOCOMOTION
AB Humanoid robots are expected to play a major role in the future of space and
planetary exploration. Humanoid robots could offer many advantages, such as the
ability to interact with astronauts and to perform human tasks. However,
developing such a robot is highly challenging due to many difficulties. One of
the main difficulties is the difference in gravity. Most researchers in the field
of bipedal locomotion have not paid much attention to the effect of gravity.
Gravity is an important parameter in generating a bipedal locomotion trajectory.
This research investigates the effect of gravity on bipedal walking motion. It
focuses on low gravity, since most of the known planets and moons have lower
gravity than Earth. Further study is conducted on a full humanoid robot model
walking subject to the Moon's gravity, and an approach for dealing with lunar
gravity is proposed in this paper.
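A minimal worked example of the Froude-number scaling implied by the keywords above, assuming illustrative leg length and walking speed: matching Fr = v^2/(g*L) between Earth and the Moon gives the dynamically similar walking speed in low gravity.

# Dynamic similarity via the Froude number Fr = v**2 / (g * L).
g_earth, g_moon = 9.81, 1.62       # m/s^2
leg_length = 0.30                  # hip height of a small humanoid (m), illustrative
v_earth = 0.25                     # walking speed on Earth (m/s), illustrative

fr = v_earth ** 2 / (g_earth * leg_length)
v_moon = (fr * g_moon * leg_length) ** 0.5
print("Froude number: %.3f" % fr)
print("dynamically similar speed under lunar gravity: %.3f m/s" % v_moon)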
C1 [Omer, Aiman] Waseda Univ, Sch Creat Sci & Engn, Tokyo, Japan.
[Hashimoto, Kenji] Waseda Univ, Res Inst Sci & Engn, Tokyo, Japan.
[Lim, Hun-ok] Kanagawa Univ, Dept Mech Engn, Yokohama, Kanagawa, Japan.
[Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Dept Mech Engn, Tokyo,
Japan.
RP Omer, A (reprint author), Waseda Univ, Sch Creat Sci & Engn, Tokyo, Japan.
EM aimano@aoni.waseda.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766
FU JSPS KAKENHI [24360099, 25220005]; SolidWorks Japan K.K.
FX This study was conducted as part of the Research Institute for Science
and Engineering, Waseda University, and as part of the humanoid project
at the Humanoid Robotics Institute, Waseda University. It was also
supported in part by JSPS KAKENHI (Grant Number: 24360099 and 25220005)
and SolidWorks Japan K.K., whom we thank for their financial and
technical support. The High-performance Physical Modelling and
Simulation software MapleSim used in this research was provided by
Cybernet Systems Co., Ltd. (Vendor: Waterloo Maple Inc.).
CR Alexander R. M., 1996, MATH GAZ, V80, P262
de Santos P., 2006, QUADRUPEDAL LOCOMOTI
Diftler MA, 2005, IEEE INT CONF ROBOT, P1425
Duindam V, 2009, SPRINGER TRAC ADV RO, V53, P1
Geyer H, 2005, J THEOR BIOL, V232, P315, DOI 10.1016/j.jtbi.2004.08.015
Kajita S., HDB ROBOTICS B, P361
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
Mehling JS, 2007, IEEE INT CONF ROBOT, P2928, DOI 10.1109/ROBOT.2007.363916
Minetti AE, 2001, NATURE, V409, P467, DOI 10.1038/35054166
Aiman Musa M., 2010, 18 CISM IFTOMM S ROB, P241
Aiman Musa M, 2008, IEEE INT C ROB BIOM, P137
NIXON MS, 2006, HUMAN IDENTIFICATION
Oda M., 2008, P INT S ART INT ROB
Omer A., 2011, 2011 IEEE/SICE International Symposium on System Integration (SII
2011), P802, DOI 10.1109/SII.2011.6147551
Omer A., 2005, IEEE RAS INT C HUM R, P50
Saibene F, 2003, EUR J APPL PHYSIOL, V88, P297, DOI 10.1007/s00421-002-0654-9
Tunstel E, 2009, SPRINGER HANDBOOK OF AUTOMATION, P1241, DOI 10.1007/978-3-540-
78831-7_69
Ulivi P., 2007, ROBOTIC EXPLORATIO 1
Vaughan CL, 2005, GAIT POSTURE, V21, P350, DOI 10.1016/j.gaitpost.2004.01.011
Yoshida K., 2008, HDB ROBOTICS F, P1031
NR 20
TC 0
Z9 0
U1 0
U2 4
PU INTECH EUROPE
PI RIJEKA
PA JANEZA TRDINE 9, RIJEKA, 51000, CROATIA
SN 1729-8806
EI 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD SEP 2
PY 2014
VL 11
AR 139
DI 10.5772/58731
PG 14
WC Robotics
SC Robotics
GA AO5GX
UT WOS:000341373100001
OA gold
DA 2018-01-22
ER

PT J
AU Sorbello, R
Chella, A
Cali, C
Giardina, M
Nishio, S
Ishiguro, H
AF Sorbello, Rosario
Chella, Antonio
Cali, Carmelo
Giardina, Marcello
Nishio, Shuichi
Ishiguro, Hiroshi
TI Telenoid android robot as an embodied perceptual social regulation
medium engaging natural human-humanoid interaction
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Telenoid; Geminoid; Social robot; Human-humanoid robot interaction
ID SCIENCE
AB The present paper aims to validate our research on human-humanoid interaction
(HHI) using the minimalist humanoid robot Telenoid. We conducted the human-robot
interaction test with 142 young people who had no prior interaction experience with
this robot. The main goal is the analysis of the two social dimensions
("Perception" and "Believability") useful for increasing the natural behaviour
between users and Telenoid. We administered our custom questionnaire to human
subjects in association with a well-defined experimental setting ("ordinary and
goal-guided task"). A thorough analysis of the questionnaires has been carried out
and the reliability and internal consistency of the correlations between the
multiple items have been calculated. Our experimental results show that the perceptual behaviour
and believability, as implicit social competences, could improve the meaningfulness
and the natural-like sense of human-humanoid interaction in everyday life task-
driven activities. Telenoid is perceived by human beings as an autonomous,
cooperative agent for a shared environment. (C) 2014 Elsevier B.V. All rights reserved.
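A minimal sketch of an internal-consistency check of the kind reported above, assuming made-up Likert-scale responses; it computes the standard Cronbach's alpha coefficient cited in the reference list, not the study's actual analysis.

import numpy as np

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_respondents x n_items) score matrix.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Made-up Likert-scale responses (5 respondents x 4 items), for illustration only.
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4]]
print("Cronbach's alpha: %.2f" % cronbach_alpha(scores))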
C1 [Sorbello, Rosario; Chella, Antonio; Giardina, Marcello] Univ Palermo, Robot
Lab, Dept DICGIM, Palermo, Italy.
[Cali, Carmelo] Univ Palermo, Dept Sci Umanist, Palermo, Italy.
[Nishio, Shuichi; Ishiguro, Hiroshi] ATR, Hiroshi Ishiguro Lab, Keihanna Sci
City, Kyoto, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn Sci, Dept Syst Innovat, Toyonaka,
Osaka 560, Japan.
RP Sorbello, R (reprint author), Univ Palermo, Robot Lab, Dept DICGIM, Palermo,
Italy.
EM rosario.sorbello@unipa.it
RI Sorbello, Rosario/N-1622-2016
OI Sorbello, Rosario/0000-0002-9906-7074; Chella,
Antonio/0000-0002-8625-708X
CR ARGYLE M, 1965, SOCIOMETRY, V28, P289, DOI 10.2307/2786027
Argyle M, 1973, SEMIOTICA, V7, P19, DOI [10.1515/semi.1973.7.1.19, DOI
10.1515/SEMI.1973.7.1.19]
Balistreri G, 2011, FRONT ARTIF INTEL AP, V233, P26
Balistreri S., 2011, LECT NOTES COMPUTER, V6934, P432
Breazeal C, 2003, ROBOT AUTON SYST, V42, P167, DOI 10.1016/S0921-8890(02)00373-1
Call C., 2007, GESTALT THEORY INT M, V29, P168
Chella A, 2011, FRONT ARTIF INTEL AP, V233, P453, DOI 10.3233/978-1-60750-959-2-
453
CRONBACH LJ, 1951, PSYCHOMETRIKA, V16, P297
Dautenhahn K, 1998, APPL ARTIF INTELL, V12, P573, DOI 10.1080/088395198117550
Dautenhahn K, 2007, PHILOS T R SOC B, V362, P679, DOI 10.1098/rstb.2006.2004
Durlach N, 2000, PRESENCE-TELEOP VIRT, V9, P214, DOI 10.1162/105474600566736
Gallese V, 1998, TRENDS COGN SCI, V2, P493, DOI 10.1016/S1364-6613(98)01262-5
Gallese V., 2005, PHENOMENOLOGY COGNIT, V4, P23, DOI DOI 10.1007/S11097-005-
4737-Z
Goffman Erving, 1966, BEHAV PUBLIC PLACES
Graziano MSA, 1998, CURR OPIN NEUROBIOL, V8, P195, DOI 10.1016/S0959-
4388(98)80140-2
Ishiguro H., 2012, 12 INT C INT AUT SYS
Ishiguro H, 2007, SPR TRA ADV ROBOT, V28, P118
Kanda T, 2002, IEEE INT C ROB AUT I, V2, P1848, DOI 10.1109/ROBOT.2002.1014810
Kanda T, 2008, IEEE T ROBOT, V24, P725, DOI 10.1109/TRO.2008.921566
Metta G., 2008, P 8 WORKSH PERF METR, P50, DOI DOI 10.1145/1774674.1774683
Nunnally J. C., 2010, PSYCHOMETRIC THEOR E
Oztop E., 2005, INT J HUM ROBOT, V2, P537, DOI DOI 10.1142/S0219843605000582
Poel M, 2009, AI SOC, V24, P61, DOI 10.1007/s00146-009-0198-1
Shimada M, 2006, IEEE-RAS INT C HUMAN, P157, DOI 10.1109/ICHR.2006.321378
Slater M, 2000, PRESENCE-TELEOP VIRT, V9, P37, DOI 10.1162/105474600566600
Walters M. L., 2009, P NEW FRONT HUM ROB
Zhao SY, 2003, PRESENCE-TELEOP VIRT, V12, P445, DOI 10.1162/105474603322761261
NR 27
TC 4
Z9 4
U1 0
U2 17
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD SEP
PY 2014
VL 62
IS 9
SI SI
BP 1329
EP 1341
DI 10.1016/j.robot.2014.03.017
PG 13
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA AN1HX
UT WOS:000340334400010
DA 2018-01-22
ER

PT J
AU Mohamed, Z
Kitani, M
Kaneko, S
Capi, G
AF Mohamed, Zulkifli
Kitani, Mitsuki
Kaneko, Shin-ichiro
Capi, Genci
TI Humanoid robot arm performance optimization using multi objective
evolutionary algorithm
SO INTERNATIONAL JOURNAL OF CONTROL AUTOMATION AND SYSTEMS
LA English
DT Article
DE Genetic algorithm; mobile humanoid robot; multi-objective evolutionary
algorithm; robot arm motion generation
ID MULTIOBJECTIVE GENETIC ALGORITHM; NEURAL CONTROLLERS; MOTION
AB As humanoid robots are expected to operate in human environments, they must
perform a wide range of tasks. Therefore, the robot arm motion must be generated
based on the specific task. In this paper we propose an optimal arm motion
generation method satisfying multiple criteria. In our method, we evolved neural
controllers that generate the humanoid robot arm motion satisfying three different
criteria: minimum time, minimum distance, and minimum acceleration. The robot hand
is required to move from the initial to the final goal position. In order to
compare the performance, a single-objective GA is also considered as an
optimization tool. Selected neural controllers from the Pareto set are implemented
and their performance is evaluated. Experimental investigation shows that the
evolved neural controllers performed well on the real hardware of the mobile
humanoid robot platform.
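A minimal sketch of extracting the Pareto set under the three criteria named above (time, distance, acceleration), assuming made-up candidate scores and a plain dominance check; the evolutionary optimization itself is not reproduced here.

import numpy as np

def dominates(a, b):
    # True if candidate a is at least as good as b on all (minimized) objectives
    # and strictly better on at least one.
    return np.all(a <= b) and np.any(a < b)

def pareto_set(objectives):
    # Indices of the non-dominated candidates.
    objectives = np.asarray(objectives, dtype=float)
    keep = []
    for i, oi in enumerate(objectives):
        if not any(dominates(oj, oi) for j, oj in enumerate(objectives) if j != i):
            keep.append(i)
    return keep

# Each row: (motion time [s], hand path length [m], peak acceleration [m/s^2]) of one
# candidate controller; made-up numbers for illustration.
cands = [[1.2, 0.45, 3.0],
         [1.0, 0.50, 4.5],
         [1.5, 0.40, 2.5],
         [1.3, 0.55, 3.5]]
print("Pareto-optimal controller indices:", pareto_set(cands))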
C1 [Mohamed, Zulkifli; Kitani, Mitsuki; Capi, Genci] Toyama Univ, Fac Engn, Dept
Elect & Elect Syst Engn, Toyama 9308555, Japan.
[Mohamed, Zulkifli] Univ Teknol MARA, Fac Mech Engn, Shah Alam 40450, Selangor,
Malaysia.
[Kaneko, Shin-ichiro] Toyama Natl Coll Technol, Dept Elect & Control Syst Engn,
Toyama, Japan.
RP Mohamed, Z (reprint author), Toyama Univ, Fac Engn, Dept Elect & Elect Syst
Engn, 3190 Gofuku, Toyama 9308555, Japan.
EM ilfikluzz@gmail.com; kitani@eng.u-toyama.ac.jp; skaneko@nc-toyama.ac.jp;
capi@eng.u-toyama.ac.jp
CR Belding T. C., 1995, P 6 INT C GEN ALG, P114
CANTUPAZ E, 1999, P GEN EV COMP C GECC, P91
Capi G, 2005, ROBOT AUTON SYST, V52, P148, DOI 10.1016/j.robot.2005.04.003
Capi G, 2007, IEEE T ROBOT, V23, P1225, DOI 10.1109/TRO.2007.910773
Chen MW, 1997, J ROBOTIC SYST, V14, P529, DOI 10.1002/(SICI)1097-
4563(199707)14:7<529::AID-ROB2>3.0.CO;2-P
Coello C. C., 2002, EVOLUTIONARY ALGORIT
Deb K, 2002, IEEE T EVOLUT COMPUT, V6, P182, DOI 10.1109/4235.996017
Dias AHF, 2002, IEEE T MAGN, V38, P1133, DOI 10.1109/20.996290
HOLLAND J.H, 1975, ADAPTATION NATURAL A
Kubota N, 1997, IEEE INT CONF ROBOT, P205, DOI 10.1109/ROBOT.1997.620039
Liu Y., 2011, P INT FOR STRAT TECH, V1, P410
Mohameda Z., 2012, PROCEDIA ENG, V41, P345
Montana DJ, 1989, P 11 INT JOINT C ART, V1, P762
Pires EJS, 2007, APPL SOFT COMPUT, V7, P659, DOI 10.1016/j.asoc.2005.06.009
Pires EJS, 2004, LECT NOTES COMPUT SC, V3102, P615
Truong QB, 2013, INT J CONTROL AUTOM, V11, P834, DOI 10.1007/s12555-011-0055-0
Ramabalan S, 2009, INT J ADV MANUF TECH, V41, P580, DOI 10.1007/s00170-008-1506-
5
Rana A., 1996, P UKACC INT C CONTR, V1, P29
Rehman R., 2010, MECH MACH THEORY, V45, P1125
Srinivas N., 1995, EVOL COMPUT, V2, P279
Van M., 2013, INT J CONTROL AUTOM, V11, P834
Wang Q, 1996, IEEE INT CONF ROBOT, P2592, DOI 10.1109/ROBOT.1996.506553
NR 22
TC 3
Z9 3
U1 1
U2 12
PU INST CONTROL ROBOTICS & SYSTEMS, KOREAN INST ELECTRICAL ENGINEERS
PI BUCHEON
PA BUCHEON TECHNO PARK 401-1506, 193 YAKDAE-DONG WONMI-GU, BUCHEON,
GYEONGGI-DO 420-734, SOUTH KOREA
SN 1598-6446
EI 2005-4092
J9 INT J CONTROL AUTOM
JI Int. J. Control Autom. Syst.
PD AUG
PY 2014
VL 12
IS 4
BP 870
EP 877
DI 10.1007/s12555-013-0275-6
PG 8
WC Automation & Control Systems
SC Automation & Control Systems
GA AL4LV
UT WOS:000339105400018
DA 2018-01-22
ER

PT J
AU Tidoni, E
Gergondet, P
Kheddar, A
Aglioti, SM
AF Tidoni, Emmanuele
Gergondet, Pierre
Kheddar, Abderrahmane
Aglioti, Salvatore M.
TI Audio-visual feedback improves the BCI performance in the navigational
control of a humanoid robot
SO FRONTIERS IN NEUROROBOTICS
LA English
DT Article
DE brain computer interface; SSVEPs; sense of agency; humanoid;
teleoperation; motor control
ID BRAIN-COMPUTER INTERFACE; PEOPLE; AGENCY; CONSEQUENCES; SENSE
AB Advances in brain-computer interface (BCI) technology allow people to actively
interact with the world through surrogates. Controlling real humanoid robots
using BCI as intuitively as we control our body represents a challenge for current
research in robotics and neuroscience. In order to successfully interact with the
environment the brain integrates multiple sensory cues to form a coherent
representation of the world. Cognitive neuroscience studies demonstrate that
multisensory integration may imply a gain with respect to a single modality and
ultimately improve the overall sensorimotor performance. For example, reactivity to
simultaneous visual and auditory stimuli may be higher than to the sum of the same
stimuli delivered in isolation or in temporal sequence. Yet, knowledge about
whether audio-visual integration may improve the control of a surrogate is meager.
To explore this issue, we provided human footstep sounds as audio feedback to BCI
users while controlling a humanoid robot. Participants were asked to steer their
robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that
audio-visual synchrony between the footstep sounds and the humanoid's actual walk
reduces the time required for steering the robot. Thus, auditory feedback congruent
with the humanoid's actions may improve the motor decisions of the BCI user and
support the feeling of control over it. Our results shed light on the possibility
of improving control of a robot by combining multisensory feedback for the BCI user.
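A minimal sketch of SSVEP command detection by comparing spectral power at candidate stimulation frequencies, assuming synthetic EEG and illustrative frequencies; it is not the classifier or the audio-visual feedback manipulation used in the study above.

import numpy as np

fs = 256                                   # EEG sample rate (Hz), assumed
stim_freqs = [6.0, 8.0, 13.0]              # candidate SSVEP stimulation frequencies (Hz), illustrative

def simulate_eeg(target_hz, seconds=4):
    # Synthetic single-channel EEG: sinusoid at the attended frequency plus noise.
    t = np.arange(0, seconds, 1 / fs)
    rng = np.random.default_rng(3)
    return np.sin(2 * np.pi * target_hz * t) + 0.8 * rng.normal(size=t.size)

def classify_ssvep(x):
    # Pick the stimulation frequency whose power (fundamental + 2nd harmonic) is largest.
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    scores = []
    for f0 in stim_freqs:
        idx = [int(np.argmin(np.abs(freqs - h * f0))) for h in (1, 2)]
        scores.append(spec[idx].sum())
    return stim_freqs[int(np.argmax(scores))]

eeg = simulate_eeg(8.0)
print("detected command frequency:", classify_ssvep(eeg), "Hz")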
C1 [Tidoni, Emmanuele; Aglioti, Salvatore M.] Univ Roma La Sapienza, Dept Psychol,
I-00185 Rome, Italy.
[Tidoni, Emmanuele; Aglioti, Salvatore M.] Fdn Santa Lucia, IRCCS, Rome, Italy.
[Gergondet, Pierre; Kheddar, Abderrahmane] UMI3218 CRT, CNRS AIST Joint Robot
Lab, Tsukuba, Ibaraki, Japan.
[Gergondet, Pierre; Kheddar, Abderrahmane] UM2 CNRS LIRMM UMR5506, Montpellier,
France.
RP Tidoni, E (reprint author), Univ Roma La Sapienza, Dept Psychol, Via dei Marsi
78, I-00185 Rome, Italy.
EM emmanuele.tidoni@uniroma1.it
RI tidoni, emmanuele/I-3189-2014
OI tidoni, emmanuele/0000-0001-9079-2862
FU EU Information and Communication Technologies Grant (VERB project,
FP7-ICT-5) [257695]; Italian Ministry of Health [RF-2010-2312912]
FX Financial support from the EU Information and Communication Technologies
Grant (VERB project, FP7-ICT-2009-5, Prot. Num. 257695), the Italian
Ministry of Health (and RF-2010-2312912).
CR Bell CJ, 2008, J NEURAL ENG, V5, P214, DOI 10.1088/1741-2560/5/2/012
Bresciani JP, 2005, EXP BRAIN RES, V162, P172, DOI 10.1007/s00221-004-2128-2
BRIGGS G G, 1975, Cortex, V11, P230
Choi BJ, 2013, PLOS ONE, V8, DOI 10.1371/journal.pone.0074583
Curtin A, 2012, IEEE ENG MED BIO, P3841, DOI 10.1109/EMBC.2012.6346805
David N, 2012, FRONT HUM NEUROSCI, V6, DOI 10.3389/fnhum.2012.00161
Diez PF, 2011, J NEUROENG REHABIL, V8, DOI 10.1186/1743-0003-8-39
Escolano C, 2012, IEEE T SYST MAN CY B, V42, P793, DOI
10.1109/TSMCB.2011.2177968
Escolano Carlos, 2010, Conf Proc IEEE Eng Med Biol Soc, V2010, P4476, DOI
10.1109/IEMBS.2010.5626045
Fisher RS, 2005, EPILEPSIA, V46, P1426, DOI 10.1111/j.1528-1167.2005.31405.x
Friedrich EVC, 2013, PLOS ONE, V8, DOI 10.1371/journal.pone.0076214
Friedrich EVC, 2011, CLIN NEUROPHYSIOL, V122, P2003, DOI
10.1016/j.clinph.2011.03.019
Friman O, 2007, IEEE T BIO-MED ENG, V54, P742, DOI 10.1109/TBME.2006.889160
Gallagher S, 2000, TRENDS COGN SCI, V4, P14, DOI 10.1016/S1364-6613(99)01417-5
Gergondet P., 2013, EXP ROBOT, V88, P215, DOI [10.1007/978-3-319-00065-7_16, DOI
10.1007/978-3-319-00065-7_16]
Guger C, 2003, IEEE T NEUR SYS REH, V11, P145, DOI 10.1109/TNSRE.2003.814481
Guger C, 2012, FRONT NEUROSCI-SWITZ, V6, DOI 10.3389/fnins.2012.00169
Guger C, 2009, NEUROSCI LETT, V462, P94, DOI 10.1016/j.neulet.2009.06.045
Haggard P, 2002, NAT NEUROSCI, V5, P382, DOI 10.1038/nn827
Leeb R, 2004, P ANN INT IEEE EMBS, V26, P4503
Leeb R, 2007, COMPUT INTEL NEUROSC, V2007, P79642, DOI DOI 10.1155/2007/79642
Leeb R, 2007, IEEE T NEUR SYS REH, V15, P473, DOI 10.1109/TNSRE.2007.906956
Menzer F, 2010, COGN NEUROSCI-UK, V1, P184, DOI 10.1080/17588921003743581
Millan J. D. R., 2010, FRONT NEUROSCI, V4, P161, DOI DOI
10.3389/FNINS.2010.00161
Moore MM, 2003, IEEE T NEUR SYS REH, V11, P162, DOI 10.1109/TNSRE.2003.814433
Naeem M, 2006, J NEURAL ENG, V3, P208, DOI 10.1088/1741-2560/3/3/003
Onose G, 2012, SPINAL CORD, V50, P599, DOI 10.1038/sc.2012.14
Pfurtscheller G, 2006, BRAIN RES, V1071, P145, DOI
10.1016/j.brainres.2005.11.083
Prueckl R., 2009, BIOINSPIRED SYST COM, P1
Sato A, 2005, COGNITION, V94, P241, DOI 10.1016/j.cognition.2004.04.003
Synofzik M, 2008, CONSCIOUS COGN, V17, P219, DOI 10.1016/j.concog.2007.03.010
Vinding MC, 2013, CONSCIOUS COGN, V22, P810, DOI 10.1016/j.concog.2013.05.003
Wolpert DM, 2012, CURR OPIN NEUROBIOL, V22, P996, DOI 10.1016/j.conb.2012.05.003
Zander TO, 2011, INT J HUM-COMPUT INT, V27, P38, DOI
10.1080/10447318.2011.535752
NR 34
TC 14
Z9 14
U1 1
U2 13
PU FRONTIERS MEDIA SA
PI LAUSANNE
PA PO BOX 110, EPFL INNOVATION PARK, BUILDING I, LAUSANNE, 1015,
SWITZERLAND
SN 1662-5218
J9 FRONT NEUROROBOTICS
JI Front. Neurorobotics
PD JUN 17
PY 2014
VL 8
DI 10.3389/fnbot.2014.00020
PG 8
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA CA3ND
UT WOS:000348812600001
PM 24987350
OA gold
DA 2018-01-22
ER

PT J
AU Cosentino, S
Petersen, K
Lin, ZH
Bartolomeo, L
Sessa, S
Zecca, M
Takanishi, A
AF Cosentino, Sarah
Petersen, Klaus
Lin, Zhuohua
Bartolomeo, Luca
Sessa, Salvatore
Zecca, Massimiliano
Takanishi, Atsuo
TI Natural human-robot musical interaction: understanding the music
conductor gestures by using the WB-4 inertial measurement system
SO ADVANCED ROBOTICS
LA English
DT Article
DE human-robot interaction; inertial measurement unit; musical robot;
humanoid robot
ID RECOGNITION
AB This paper presents an inertial measurement unit-based human gesture recognition
system for a robot instrument player to understand the instructions dictated by an
orchestra conductor and accordingly adapt its musical performance. It is an
extension of our previous publications on natural human-robot musical interaction.
With this system, the robot can understand the real-time variations in musical
parameters dictated by the conductor's movements, adding expression to its
performance while being synchronized with all the other human partner musicians.
The enhanced interaction ability would not only improve the overall live
performance, but also allow the partner musicians, as well as the conductor, to
better appreciate the joint musical performance, thanks to the complete
naturalness of the interaction.
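A minimal sketch of extracting one musical parameter, the beat tempo, from an accelerometer magnitude signal by simple threshold-based onset picking, assuming synthetic data and an illustrative sample rate; it is not the WB-4-based recognition pipeline described above.

import numpy as np

fs = 100                                    # IMU sample rate (Hz), assumed

def simulate_baton_accel(bpm=90, seconds=8):
    # Fake accelerometer magnitude with one acceleration burst per beat.
    t = np.arange(0, seconds, 1 / fs)
    rng = np.random.default_rng(4)
    beat_period = 60.0 / bpm
    bursts = np.exp(-((t % beat_period) / 0.05) ** 2)   # sharp burst at each beat onset
    return bursts + 0.05 * rng.normal(size=t.size)

def estimate_tempo(acc, threshold=0.5, min_gap=0.3):
    # Detect beat onsets as threshold crossings and convert the mean interval to BPM.
    onsets, last = [], -np.inf
    for i, a in enumerate(acc):
        t = i / fs
        if a > threshold and t - last > min_gap:
            onsets.append(t)
            last = t
    intervals = np.diff(onsets)
    return 60.0 / intervals.mean()

acc = simulate_baton_accel(bpm=90)
print("estimated tempo: %.1f BPM" % estimate_tempo(acc))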
C1 [Cosentino, Sarah] Waseda Univ, Grad Sch Adv Sci & Engn, Fac Sci & Engn, Tokyo,
Japan.
[Cosentino, Sarah; Petersen, Klaus; Bartolomeo, Luca; Takanishi, Atsuo] Waseda
Univ, Global Robot Acad, Tokyo, Japan.
[Petersen, Klaus; Bartolomeo, Luca; Sessa, Salvatore; Zecca, Massimiliano]
Waseda Univ, Grad Sch Creat Sci & Engn, Fac Sci & Engn, Tokyo, Japan.
[Lin, Zhuohua] S China Univ Technol, Sch Mech & Automot Engn, Guangzhou,
Guangdong, Peoples R China.
[Sessa, Salvatore] Dept Mech & Robot Engn E Just, Borg El Arab, Egypt.
[Zecca, Massimiliano; Takanishi, Atsuo] Waseda Univ, HRI, Tokyo, Japan.
[Zecca, Massimiliano; Takanishi, Atsuo] Waseda Univ, Italy Japan Joint Lab
Humanoid & Personal Robot R, Tokyo, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Tokyo, Japan.
RP Cosentino, S (reprint author), Waseda Univ, Grad Sch Adv Sci & Engn, Fac Sci &
Engn, Tokyo, Japan.
EM sarah.cosentino@fuji.waseda.jp
RI Sessa, Salvatore/H-4327-2013
OI Sessa, Salvatore/0000-0003-3540-8121; Zecca,
Massimiliano/0000-0003-4741-4334
FU Italian Ministry of Foreign Affairs, General Directorate for Cultural
Promotion and Cooperation; Life Performance Research, Okino Industries
LTD, Japan ROBOTECH LTD, SolidWorks Corp, Dyden
FX The authors would like to express their thanks to the Italian Ministry
of Foreign Affairs, General Directorate for Cultural Promotion and
Cooperation, for its support to RoboCasa. The authors would also like to
express their gratitude to Life Performance Research, Okino Industries
LTD, Japan ROBOTECH LTD, SolidWorks Corp, Dyden, for their support to
the research, and Professor Suzuki of the University of Tsukuba for the
meaningful discussions.
CR Bartolomeo L, 2012, INT J APPL ELECTROM, V39, P779, DOI 10.3233/JAE-2012-1542
Borja R, 2013, ROBOT AUTON SYST, V61, P153, DOI 10.1016/j.robot.2012.10.005
Chaudhary A., 2011, INT J COMPUT SCI ENG, V2, P122, DOI DOI
10.5121/IJCSES.2011.2109
Cosentino S, 2012, RSJ 2012, P4
COSENTINO S, 2012, 1 IEEE INT C INN ENG, P19
Cosentino S, 2012, 2012 IEEE INT C ROB, P30
Dalibard S., 2012, WORKSH AUT SOC ROB V
Gruebler A, 2012, ADV ROBOTICS, V26, P1143, DOI 10.1080/01691864.2012.686349
Harnum J, 2004, BASIC MUSIC THEORY R
Hoffman G., 2010, CHI 10 HUM FACT COMP, P3097
Hornyak T.N., 2006, LOVING MACHINE
Ide T, 1992, ADV ROBOTICS, V7, P189
Jean JH, 2008, SICE ANN C 2008 AUG, P1437
Jolliffe IT, 2002, SPRINGER SERIES STAT, VXXIX
Kato I., 1987, Robotics, V3, P143, DOI 10.1016/0167-8493(87)90002-7
Ko WR, 2012, LECT NOTES COMPUTER, V7429, P422
Kuo IH, 2012, SOCIAL ROBOTICS, V7621, P178
Lim A, 2012, ADV ROBOTICS, V26, P363, DOI 10.1163/156855311X614626
Lin C-Y, 2013, INT J ADV ROBOT SYST, V10, P1
Lin Z., 2010, 2010 IEEE SICE INT S, P420
Lin Z, 2012, IEEE INT C ROB BIOM, P2219
Lin ZH, 2013, IEEE T BIO-MED ENG, V60, P977, DOI 10.1109/TBME.2012.2230260
Mitra S, 2007, IEEE T SYST MAN CY C, V37, P311, DOI 10.1109/TSMCC.2007.893280
Petersen K, 2012, P IEEE RAS-EMBS INT, P937, DOI 10.1109/BioRob.2012.6290786
Petersen K, 2010, AUTON ROBOT, V28, P471, DOI 10.1007/s10514-010-9180-5
Prassler E, 2012, SERVICE ROBOTS EVERY
Randel Don Michael, 2003, HARVARD DICT MUSIC
Sessa S, 2013, J INTELL ROBOT SYST, V71, P143, DOI 10.1007/s10846-012-9772-8
Shibata T, 2011, LECT NOTES COMPUT SC, V6768, P437, DOI 10.1007/978-3-642-21657-
2_47
SOLIS J, 2009, IEEE RSJ INT C INT R, P2309
Solis J., 2007, INT ROB SYST 2007 IR, P2041
Solis J, 2013, GUIDE COMPUTING EXPR, P235
Stewart A, 2012, P IEEE, V100, P751, DOI 10.1109/JPROC.2011.2173815
Suykens JAK, 2003, ADV LEARNING THEORY
Troccaz J, 2013, MED ROBOTICS
Turaga P, 2008, IEEE T CIRC SYST VID, V18, P1473, DOI 10.1109/TCSVT.2008.2005594
Weinberg Gil, 2007, 16th IEEE International Conference on Robot and Human
Interactive Communication, P769
Weinberg G, 2009, NIME 2009 P PITTSB P
WEINBERG G, 2005, IEEE INT WORKSH ROB, P456
Zhou H, 2007, MED ENG PHYS
NR 40
TC 5
Z9 5
U1 1
U2 16
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD JUN 3
PY 2014
VL 28
IS 11
SI SI
BP 781
EP 792
DI 10.1080/01691864.2014.889577
PG 12
WC Robotics
SC Robotics
GA AF7PT
UT WOS:000334907700007
DA 2018-01-22
ER

PT J
AU Cohen, O
Druon, S
Lengagne, S
Mendelsohn, A
Malach, R
Kheddar, A
Friedman, D
AF Cohen, Ori
Druon, Sebastien
Lengagne, Sebastien
Mendelsohn, Avi
Malach, Rafael
Kheddar, Abderrahmane
Friedman, Doron
TI fMRI-Based Robotic Embodiment: Controlling a Humanoid Robot by Thought
Using Real-Time fMRI
SO PRESENCE-TELEOPERATORS AND VIRTUAL ENVIRONMENTS
LA English
DT Article
ID BRAIN-COMPUTER INTERFACE; RESONANCE-IMAGING FMRI; TETRAPLEGIA;
WHEELCHAIR; MOTIONS; WALKING
AB We present a robotic embodiment experiment based on real-time functional
magnetic resonance imaging (rt-fMRI). In this study fMRI is used as an input device
to identify a subject's intentions and convert them into actions performed by a
humanoid robot. The process, based on motor imagery, has allowed four subjects
located in Israel to control a HOAP3 humanoid robot in France, in a relatively
natural manner, experiencing the whole experiment through the eyes of the robot.
Motor imagery or movement of the left hand, the right hand, or the legs was used to
control the robot's motions of turning left, turning right, or walking forward,
respectively.
C1 [Cohen, Ori; Friedman, Doron] Interdisciplinary Ctr, IL-46150 Herzliyya, Israel.
[Cohen, Ori] Bar Ilan Univ, IL-52900 Ramat Gan, Israel.
[Druon, Sebastien; Lengagne, Sebastien; Kheddar, Abderrahmane] CNRS UM2 LIRMM
UMR 5506, F-34095 Montpellier 5, France.
[Mendelsohn, Avi; Malach, Rafael] Weizmann Inst Sci, Dept Neurobiol, IL-76100
Rehovot, Israel.
[Kheddar, Abderrahmane] CNRS AIST, Joint Robot Lab, UMI3218, Tsukuba, Ibaraki
3058568, Japan.
RP Cohen, O (reprint author), Interdisciplinary Ctr, IL-46150 Herzliyya, Israel.
EM orioric@gmail.com
FU European Union [657295]
FX This research is supported by the European Union FP7 Integrated Project
VERE (No. 657295), www.vereproject.eu. We would like to thank the
subjects for helping us. We would also like to thank Dan Drai, and the
Weizmann's Institute technicians, Edna Furman-Haran, Nachum Stern, and
Fanny Attar, for helping in this experiment.
CR Bell CJ, 2008, J NEURAL ENG, V5, P214, DOI 10.1088/1741-2560/5/2/012
COHEN O, 2012, BIOMEDICAL ROBOTICS, P314, DOI DOI 10.1109/BIOROB.2012.06290866
Cohen O, 2014, J NEURAL ENG, V11, DOI 10.1088/1741-2560/11/3/035006
Decharms RC, 2008, NAT REV NEUROSCI, V9, P720, DOI 10.1038/nrn2414
Donoghue JP, 2007, J PHYSIOL-LONDON, V579, P603, DOI
10.1113/jphysiol.2006.127209
Friedman D, 2010, HUM-COMPUT INTERACT, V25, P67, DOI 10.1080/07370020903586688
Gergondet P., 2011, P IEEE INT C ROB BIO, P192
Gergondet P., 2012, P 13 INT S EXP ROB I
Hochberg LR, 2012, NATURE, V485, P372, DOI 10.1038/nature11076
Hornyak T., 2006, SCI AM, V295, P30, DOI DOI 10.1038/SCIENTIFICAMERICAN0906-30
Iturrate I, 2009, IEEE T ROBOT, V25, P614, DOI 10.1109/TRO.2009.2020347
Kheddar A, 2001, IEEE T SYST MAN CY A, V31, P1, DOI 10.1109/3468.903862
Kim SP, 2011, IEEE T NEUR SYS REH, V19, P193, DOI 10.1109/TNSRE.2011.2107750
Leeb Robert, 2007, Comput Intell Neurosci, P79642, DOI 10.1155/2007/79642
Leeb R, 2006, PRESENCE-TELEOP VIRT, V15, P500, DOI 10.1162/pres.15.5.500
Lengagne S, 2013, INT J ROBOT RES, V32, P1104, DOI 10.1177/0278364913478990
Lengagne S, 2011, IEEE T ROBOT, V27, P1095, DOI 10.1109/TRO.2011.2162998
Lenhardt A, 2010, LECT NOTES COMPUT SC, V6444, P58, DOI 10.1007/978-3-642-17534-
3_8
Leuthardt EC, 2004, J NEURAL ENG, V1, P63, DOI 10.1088/1741-2560/1/2/001
Mak Joseph N, 2009, IEEE Rev Biomed Eng, V2, P187, DOI 10.1109/RBME.2009.2035356
Neuper C, 2001, INT J PSYCHOPHYSIOL, V43, P41, DOI 10.1016/S0167-8760(01)00178-7
Ortner R, 2010, LECT NOTES COMPUT SC, V6180, P85, DOI 10.1007/978-3-642-14100-
3_14
Pfurtscheller G, 2006, BRAIN RES, V1071, P145, DOI
10.1016/j.brainres.2005.11.083
Prueckl R, 2009, LECT NOTES COMPUT SC, V5517, P690, DOI 10.1007/978-3-642-02478-
8_86
Rebsamen B, 2006, P 1 IEEE RAS EMBS IN, P1101, DOI DOI
10.1109/BIOROB.2006.1639239
Rebsamen B, 2010, IEEE T NEUR SYS REH, V18, P590, DOI 10.1109/TNSRE.2010.2049862
Royer AS, 2010, IEEE T NEUR SYS REH, V18, P581, DOI 10.1109/TNSRE.2010.2077654
Sheridan TB, 1992, TELEROBOTICS AUTOMAT
Weiskopf N, 2003, NEUROIMAGE, V19, P577, DOI 10.1016/S1053-8119(03)00145-9
Weiskopf N, 2004, IEEE T BIO-MED ENG, V51, P966, DOI 10.1109/TBME.2004.827063
NR 30
TC 7
Z9 7
U1 0
U2 3
PU MIT PRESS
PI CAMBRIDGE
PA ONE ROGERS ST, CAMBRIDGE, MA 02142-1209 USA
SN 1054-7460
EI 1531-3263
J9 PRESENCE-TELEOP VIRT
JI Presence-Teleoper. Virtual Env.
PD SUM
PY 2014
VL 23
IS 3
BP 229
EP 241
DI 10.1162/PRES_a_00191
PG 13
WC Computer Science, Cybernetics; Computer Science, Software Engineering
SC Computer Science
GA AR4AC
UT WOS:000343529400003
DA 2018-01-22
ER

PT J
AU Nishi, T
Sugihara, T
AF Nishi, Toshiya
Sugihara, Tomomichi
TI Motion Planning of a Humanoid Robot in a Complex Environment Using RRT
and Spatiotemporal Post-Processing Techniques
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Humanoid robot; path planning; locomotion in complex environment; RRT
ID WALKING; GENERATION
AB This paper presents a motion planning method which provides a complete whole-body
trajectory for a humanoid robot traveling in a complex environment containing various
daily-life items, steps, slopes, gates, walls, etc. A two-stage planning scheme is
employed to simplify the problem of discontinuously changing dynamical constraints: a
sequence of double-support postures is planned in the first stage, and a continuous and
smooth trajectory interpolating it is planned in the second stage. RRT is utilized in
both stages in order to exploit the whole body with its large number of degrees of
freedom and to perform in such a three-dimensionally intricate field. An inevitable
issue of this random-sampling-based approach is that the searched trajectory is
necessarily jagged and circuitous, so the proposed method includes effective
post-processing techniques for each stage, which are mandatory in practice. In the
first stage, a necessary condition is that any pair of adjacent postures has to share
one supporting foot as the pivot. A key idea is to insert intermediate postures between
a pair of non-adjacent milestones with a cross-combination of the support feet, in
order to bypass and average out a sequence of double-support transitions; the optimal
sequence of possible transitions is found by applying Dijkstra's method. In the second
stage, a center-of-mass (COM) enforcement is proposed that artificially constrains the
COM trajectory to a manifold derived from the mass-concentrated approximation, with the
expectation that a solution can be found on this manifold. A continuous and smooth
trajectory which satisfies the geometric constraint is planned by RRT and NURBS
interpolation, and the temporal property of the trajectory is then adjusted by dynamic
programming to satisfy the dynamical constraint without breaking the geometric
constraint. The performance was examined in a mock field of a living room. Compared
with a naive method using only RRT-connect, the proposed method substantially shortened
the total travel distance in many cases, and it succeeded in planning dynamically
feasible walking trajectories on that complex terrain.
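As a point of reference for the planning scheme described in the abstract above, the
following is a minimal, illustrative sketch of the basic RRT primitive that both stages
build on; it is not the paper's two-stage planner. It assumes a two-dimensional
configuration space on the unit square and a user-supplied collision_free() predicate
(here a trivial stub); both are placeholders.

import math
import random

def collision_free(q):
    # Stand-in for the real environment check; always succeeds in this sketch.
    return True

def rrt(q_start, q_goal, max_iters=2000, step=0.05, goal_tol=0.05):
    nodes = [q_start]
    parent = {0: None}
    for _ in range(max_iters):
        q_rand = (random.random(), random.random())
        # Nearest neighbour in the current tree.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        d = math.dist(q_near, q_rand)
        if d == 0.0:
            continue
        # Steer a bounded step from q_near toward the random sample.
        scale = min(step, d) / d
        q_new = tuple(a + scale * (b - a) for a, b in zip(q_near, q_rand))
        if not collision_free(q_new):
            continue
        nodes.append(q_new)
        parent[len(nodes) - 1] = i_near
        if math.dist(q_new, q_goal) < goal_tol:
            # Back-track the path from the newest node to the start.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return list(reversed(path))
    return None

path = rrt((0.1, 0.1), (0.9, 0.9))

A post-processing pass such as the one the abstract describes would then shorten and
smooth the jagged path returned here.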
C1 [Nishi, Toshiya] DENSO CORP, Informat Strategy Planning Div, Kariya, Aichi
4488661, Japan.
[Sugihara, Tomomichi] Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst, Osaka
5650871, Japan.
RP Sugihara, T (reprint author), Osaka Univ, Grad Sch Engn, Dept Adapt Machine
Syst, 2-1 Suita, Osaka 5650871, Japan.
EM zhidao@ieee.org
FU Japan Society for the Promotion of Science [20760170]; Kyushu University
Research Superstar Program (SSP)
FX This work was supported in part by Grant-in-Aid for Young Scientists(B)
#20760170, Japan Society for the Promotion of Science and by "The Kyushu
University Research Superstar Program (SSP)" based on the budget of
Kyushu University allocated under President's initiative.
CR Bouyarmane K., 2009, P IEEE INT C ROB AUT, P1165
Dalibard S., 2011, P IEEE INT C HUM ROB, P739
DIJKSTRA EW, 1959, NUMER MATH, V1, P269, DOI DOI 10.1007/BF01386390
DONALD B, 1993, J ACM, V40, P1048, DOI 10.1145/174147.174150
Escande A., 2006, P IEEE RSJ INT C INT, P2974
GILBERT EG, 1988, IEEE T ROBOTIC AUTOM, V4, P193, DOI 10.1109/56.2083
Hachour O., 2009, INT J SYST APPL ENG, V3, P117
Harada K., 2006, P IEEE INT C ROB SYS, P833
Harada K., 2007, P IEEE RSJ INT C INT, P4227
Harada K., 2008, P 2008 JSME C ROB ME
Harada K., 2009, J ROBOTICS MECHATRON, V21, P311
Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
Harada K, 2010, IEEE-ASME T MECH, V15, P694, DOI 10.1109/TMECH.2009.2032180
Hauser K., 2005, P IEEE RAS INT C HUM, P7
Hirukawa H, 2006, IEEE INT CONF ROBOT, P1976, DOI 10.1109/ROBOT.2006.1641995
Kagami S, 2002, AUTON ROBOT, V12, P71, DOI 10.1023/A:1013210909840
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kanehara M., 2007, P IEEE INT C SYST MA, P991
Kanehiro F, 2005, IEEE INT CONF ROBOT, P1072
Kanehiro F, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P660, DOI
10.1109/IROS.2008.4650950
Karaman S, 2011, INT J ROBOT RES, V30, P846, DOI 10.1177/0278364911406761
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
Kuffner J, 2001, IEEE INT CONF ROBOT, P692, DOI 10.1109/ROBOT.2001.932631
Kuffner J. J. Jr., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P995, DOI 10.1109/ROBOT.2000.844730
Kuffner JJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P500, DOI
10.1109/IROS.2001.973406
Kurazume R, 2003, IEEE INT CONF ROBOT, P925
La Valle S. M., 1998, 9811 IOW STAT U COMP
Nagasaka K, 2004, IEEE INT CONF ROBOT, P3189, DOI 10.1109/ROBOT.2004.1308745
NAGASAKA K, 2000, THESIS U TOKYO
Nishi T., 2010, P 2010 IEEE RSJ INT, P1702
Pan J, 2012, INT J ROBOT RES, V31, P1155, DOI 10.1177/0278364912453186
Perrin N., 2012, P 2012 IEEE INT C RO
Perrin N, 2011, IEEE INT C INT ROBOT, P4408, DOI 10.1109/IROS.2011.6048120
SANADA H, 2007, P IEEE RSJ INT C INT, P4028
Shiller Z, 2001, IEEE INT CONF ROBOT, P1, DOI 10.1109/ROBOT.2001.932521
Shimizu Y., 2012, P 2012 IEEE RAS INT, P755
Shoemake K., 1985, ACM SIGGRAPH COMPUTE, V19, P245, DOI [DOI
10.1145/325165.325242, 10.1145/325334.325242]
Sugihara T., 2007, ROBOTICS AUTONOMOUS, V56, P82
Sugihara T, 2011, IEEE T ROBOT, V27, P984, DOI 10.1109/TRO.2011.2148230
Sugihara T, 2009, IEEE T ROBOT, V25, P658, DOI 10.1109/TRO.2008.2012336
Takanishi A., 1988, ROMANSY, V7, P68
Tazaki Y, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P1582, DOI 10.1109/IROS.2009.5354206
Terada K., 2007, P 2007 IEEE RAS INT
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Xia ZY, 2011, IEEE-ASME T MECH, V16, P716, DOI 10.1109/TMECH.2010.2051679
Yoshida E., 2005, P 2005 IEEE RAS INT, P1
Yoshida E., 2005, P 2005 IEEE RSJ INT, P25
NR 47
TC 3
Z9 3
U1 1
U2 13
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2014
VL 11
IS 2
AR 1441003
DI 10.1142/S0219843614410035
PG 35
WC Robotics
SC Robotics
GA AN2RE
UT WOS:000340432200004
DA 2018-01-22
ER

PT J
AU Yoshida, Y
Takeuchi, K
Miyamoto, Y
Sato, D
Nenchev, D
AF Yoshida, Yuki
Takeuchi, Kohei
Miyamoto, Yasuhiro
Sato, Daisuke
Nenchev, Dragomir
TI Postural Balance Strategies in Response to Disturbances in the Frontal
Plane and Their Implementation With a Humanoid Robot
SO IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS
LA English
DT Article
DE External disturbance; human balance analysis; humanoid robot; motion
pattern generation; postural balance control; reaction null-space method
ID BIPED WALKING; RECOVERY; PERTURBATIONS; FORCES; MOTION; IMPACT; BODY
AB We examine the postural reaction and balance recovery patterns that occur when an
upright-standing human is subjected to a sudden disturbance within the frontal plane,
with the aim of developing balance control strategies for humanoid robots.
Five patterns are identified and related to the magnitude of the disturbance. Three
of the patterns are modeled and implemented with a small humanoid robot HOAP-2. The
models are based on inverted-pendulum and double-pendulum equations, in combination
with variable stiffness/damping elements for ensuring appropriate reactions and
balance recovery patterns. Supporting foot reaction is minimized within the
reaction null space formulation. Special attention is paid to the transitions
between the reaction patterns. The experimental data show that the models and the
respective controllers can ensure smooth reaction control under both impact-force
and continuous-force disturbances.
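The balance models mentioned in the abstract above combine pendulum dynamics with
variable stiffness/damping elements. The following is a minimal, illustrative sketch of
an ankle-strategy inverted pendulum with a constant stiffness/damping element,
simulated by Euler integration; the parameters and the simple torque law are assumed
for illustration only and are not the identified models or controllers of the paper.

import math

m, l, g = 60.0, 1.0, 9.81          # point mass [kg], pendulum length [m], gravity
I = m * l * l                      # inertia about the ankle

def simulate(theta0=0.0, omega0=0.5, k=900.0, c=120.0, dt=0.001, t_end=3.0):
    """Ankle-strategy response to an initial angular-velocity disturbance omega0."""
    theta, omega = theta0, omega0
    history = []
    t = 0.0
    while t < t_end:
        tau_ankle = -k * theta - c * omega                  # stiffness/damping element
        alpha = (m * g * l * math.sin(theta) + tau_ankle) / I
        omega += alpha * dt
        theta += omega * dt
        history.append((t, theta))
        t += dt
    return history

trace = simulate()                 # theta decays back to upright for k > m*g*l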
C1 [Yoshida, Yuki; Takeuchi, Kohei; Miyamoto, Yasuhiro; Sato, Daisuke; Nenchev,
Dragomir] Tokyo City Univ, Grad Sch Engn, Setagaya Ku, Tokyo 1588557, Japan.
RP Yoshida, Y (reprint author), Tokyo City Univ, Grad Sch Engn, Setagaya Ku, Tokyo
1588557, Japan.
EM yyoshida@rls.mse.tcu.ac.jp; takeuchi@rls.mse.tcu.ac.jp;
miyamoto@rls.mse.tcu.ac.jp; dsato@tcu.ac.jp; nenchev@tcu.ac.jp
FU Japan Society for the Promotion of Science (JSPS) [20300072]
FX This work was supported in part by Grant-in-Aid for Scientific Research
Kiban (B) 20300072 of the Japan Society for the Promotion of Science
(JSPS). This paper was recommended by Associate Editor R. Roberts.
CR Abdallah M, 2005, IEEE INT CONF ROBOT, P1996
Carmona A., 2007, P ASME IMECE 2007 NO, P1
Choi YJ, 2007, IEEE T ROBOT, V23, P1285, DOI 10.1109/TRO.2007.904907
Featherstone R., 2008, RIGID BODY DYNAMICS
*FUJ AUT CO LTD, MIN HUM ROB HOAP 2 I
Fujimoto Y, 1998, IEEE INT CONF ROBOT, P2030, DOI 10.1109/ROBOT.1998.680613
Gorce P, 1999, IEEE T SYST MAN CY A, V29, P616, DOI 10.1109/3468.798065
Goswami A., 2010, P IEEE RSJ INT C INT, P3157
Goswami A., 2011, P IEEE RSJ INT C INT, P3943
Guihard M, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2587, DOI 10.1109/IRDS.2002.1041660
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hofmann A., 2009, P IEEE INT C ROB AUT, P4423
Horak FB, 1997, PHYS THER, V77, P517, DOI 10.1093/ptj/77.5.517
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Hyon SH, 2009, IEEE T ROBOT, V25, P171, DOI 10.1109/TRO.2008.2006870
IQBAL K, 1993, IEEE T BIO-MED ENG, V40, P1007, DOI 10.1109/10.247799
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S., 2003, P IEEE RSJ INT C INT, V2, P1644
Kanamiya Y, 2010, IEEE INT CONF ROBOT, P3446, DOI 10.1109/ROBOT.2010.5509785
Lee SH, 2012, AUTON ROBOT, V33, P399, DOI 10.1007/s10514-012-9294-z
Mille ML, 2005, CLIN BIOMECH, V20, P607, DOI 10.1016/j.clinbiomech.2005.03.004
Mitobe K, 2004, MECHATRONICS, V14, P163, DOI 10.1016/S0957-4158(03)00028-X
Morisawa M., 2009, P IEEE RAS INT C HUM, P528
Naksuk N., 2004, P IEEE RAS INT C HUM, P576
NASHNER LM, 1985, BEHAV BRAIN SCI, V8, P135, DOI 10.1017/S0140525X00020008
Nenchev DN, 1999, IEEE T ROBOTIC AUTOM, V15, P548, DOI 10.1109/70.768186
Nenchev DN, 1999, IEEE T ROBOTIC AUTOM, V15, P1011, DOI 10.1109/70.817666
Nenchev DN, 2008, ROBOTICA, V26, P643, DOI 10.1017/S0263574708004268
NISHIO A, 2006, P 2006 IEEE INT C IN, P1996
Ott C., 2011, P IEEE RAS INT C HUM, P26
Patton J. L., 2006, P IEEE EMBS ANN INT, P3305
Pratt J., 2006, P IEEE RAS INT C HUM, P200, DOI DOI 10.1109/ICHR.2006.321385
PRINCE F, 1994, GAIT POSTURE, V2, P19, DOI 10.1016/0966-6362(94)90013-2
Rietdyk S, 1999, J BIOMECH, V32, P1149, DOI 10.1016/S0021-9290(99)00116-5
Sano A., 1990, Proceedings 1990 IEEE International Conference on Robotics and
Automation (Cat. No.90CH2876-1), P1476, DOI 10.1109/ROBOT.1990.126214
Shumway-Cook A., 1989, SEMINARS HEARING, V10, P196
Stephens B, 2007, IEEE-RAS INT C HUMAN, P589, DOI 10.1109/ICHR.2007.4813931
Stephens B. J., 2010, P 2010 IEEE RAS INT, P52
SUGIHARA T, 2002, P IEEE RSJ INT C INT, V3, P2575
Tamegaya K., 2008, P 8 IEEE RAS INT C H, P151
Ugurlu B, 2010, IEEE INT CONF ROBOT, P4218, DOI 10.1109/ROBOT.2010.5509427
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Yoshida Y., 2011, P IEEE INT C ROB BIO, P1825
NR 44
TC 11
Z9 11
U1 2
U2 12
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2168-2216
J9 IEEE T SYST MAN CY-S
JI IEEE Trans. Syst. Man Cybern. -Syst.
PD JUN
PY 2014
VL 44
IS 6
BP 692
EP 704
DI 10.1109/TSMC.2013.2272612
PG 13
WC Automation & Control Systems; Computer Science, Cybernetics
SC Automation & Control Systems; Computer Science
GA AJ7VH
UT WOS:000337907100003
DA 2018-01-22
ER

PT J
AU Escande, A
Mansard, N
Wieber, PB
AF Escande, Adrien
Mansard, Nicolas
Wieber, Pierre-Brice
TI Hierarchical quadratic programming: Fast online humanoid-robot motion
generation
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE Inverse kinematics; redundancy; task hierarchy; humanoid robot
ID PRIORITY REDUNDANCY RESOLUTION; AVOIDING JOINT LIMITS; KINEMATIC
CONTROL; TASK; MANIPULATORS; AVOIDANCE; FRAMEWORK; CONTACTS;
OPTIMIZATION; SINGULARITY
AB Hierarchical least-squares optimization is often used in robotics to invert a
direct function when multiple incompatible objectives are involved. Typical examples
are inverse kinematics or dynamics. The objectives can be given as equalities to be
satisfied (e.g., a point-to-point task) or as areas of satisfaction (e.g., the joint
range). This paper proposes a complete solution for solving multiple least-squares
quadratic problems with both equality and inequality constraints ordered into a strict
hierarchy. Our method is able to solve a hierarchy of only equalities 10 times faster
than the iterative-projection hierarchical solvers and can consider inequalities at any
level while running at the typical control frequency on whole-body-sized problems. This
generic solver is used to resolve the redundancy of humanoid robots while generating
complex movements in constrained environments.
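For context on the equality-only case discussed in the abstract above, the following is
a minimal sketch of the classical iterative null-space projection approach to a
prioritized least-squares hierarchy (the kind of solver the paper reports
outperforming); it is not the proposed hierarchical QP and does not handle inequality
levels. The two-level example at the end is a made-up toy problem.

import numpy as np

def hierarchical_least_squares(tasks, n):
    """tasks: list of (A_k, b_k) ordered by priority; solve A_k x ~= b_k level by level.
    Each level only corrects x within the null space left over by the levels above."""
    x = np.zeros(n)
    P = np.eye(n)                               # projector onto the remaining null space
    for A, b in tasks:
        A_bar = A @ P
        x = x + np.linalg.pinv(A_bar) @ (b - A @ x)   # least-squares step in that null space
        P = P - np.linalg.pinv(A_bar) @ A_bar         # shrink the null space accordingly
    return x

# Two-level toy example: 1st priority fixes x0 + x1 = 2, 2nd priority pulls x2 toward 1.
x = hierarchical_least_squares([(np.array([[1.0, 1.0, 0.0]]), np.array([2.0])),
                                (np.array([[0.0, 0.0, 1.0]]), np.array([1.0]))], 3)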
C1 [Escande, Adrien] JRL CNRS AIST, Tsukuba, Ibaraki, Japan.
[Mansard, Nicolas] Univ Toulouse, LAAS CNRS, F-31000 Toulouse, France.
[Wieber, Pierre-Brice] INRIA Grenoble, Grenoble, France.
RP Mansard, N (reprint author), Univ Toulouse, LAAS CNRS, 7 Av Col Roche, F-31000
Toulouse, France.
EM nicolas.mansard@laas.fr
FU RobotHow.cog EU CEC project [288533]; French PSPC Romeo-2
FX This work was supported by the RobotHow.cog EU CEC project (grant number
288533) under the 7th Research program (www.robohow.eu), and French PSPC
Romeo-2.
CR Antonelli G, 1998, IEEE INT CONF ROBOT, P768, DOI 10.1109/ROBOT.1998.677070
Antonelli G, 2006, IEEE T ROBOT, V22, P1285, DOI 10.1109/TRO.2006.886272
Baerlocher P, 2004, VISUAL COMPUT, V20, P402, DOI 10.1007/s00371-004-0244-4
Behringer F. A., 1977, Z OPERATIONS RES, V21, P103
Ben-Israel A., 2003, CMS BOOKS MATH
Berenson D, 2011, INT J ROBOT RES, V30, P1435, DOI 10.1177/0278364910396389
Bjorck A, 1996, NUMERICAL METHODS LE
Bouyarmane K., 2011, P IEEE INT C ROB AUT, P5246
Boyd S., 2004, CONVEX OPTIMIZATION
CHAN TF, 1995, IEEE T ROBOTIC AUTOM, V11, P286, DOI 10.1109/70.370511
Chaumette F, 2001, IEEE T ROBOTIC AUTOM, V17, P719, DOI 10.1109/70.964671
Chiaverini S, 1997, IEEE T ROBOTIC AUTOM, V13, P398, DOI 10.1109/70.585902
Chiaverini S., 2008, HDB ROBOTICS, P245
Collette C, 2007, IEEE-RAS INT C HUMAN, P81, DOI 10.1109/ICHR.2007.4813852
Decre W., 2009, P IEEE INT C ROB AUT, P964
De Lasa M, 2010, T GRAPHICS, V29, P131, DOI DOI 10.1145/1778765.1781157
DESCHUTTER J, 1988, INT J ROBOT RES, V7, P3, DOI 10.1177/027836498800700401
Escande A, 2013, 00970816 LAASCNRS HA
Escande A., 2010, P IEEE INT C ROB AUT, P3733
Faverjon B, 1987, P IEEE INT C ROB AUT, V4, P1152
Garcia-Aracil N, 2005, IEEE T ROBOT, V21, P415
GIENGER M, 2006, P IROS BEIJ CHIN OCT, P2484
GOLUB GH, 1996, MATRIX COMPUTATIONS, P256
Hanafusa H, 1981, P IFAC 8 TRIENN WORL, V4
Herdt A, 2010, ADV ROBOTICS, V24, P719, DOI 10.1163/016918610X493552
Hofmann A., 2009, P IEEE INT C ROB AUT, P4423
Isermann H, 1982, OPERATIONS RES SPEKT, V4, P223
Kanoun O, 2009, THESIS U TOULOUSE FR
Kanoun O, 2011, IEEE T ROBOT, V27, P785, DOI 10.1109/TRO.2011.2142450
KHATIB O, 1986, INT J ROBOT RES, V5, P90, DOI 10.1177/027836498600500106
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
Khatib O, 1996, IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING
WITH DYNAMIC WORLDS, VOLS 1-3, P546, DOI 10.1109/IROS.1996.570849
Khatib O, 2008, SPRINGER TRAC ADV RO, V44, P303
Lee J, 2012, IEEE T ROBOT, V28, P1260, DOI 10.1109/TRO.2012.2210293
Li T, 2012, IEEE INT CONF ROBOT, P4856, DOI 10.1109/ICRA.2012.6224974
LIEGEOIS A, 1977, IEEE T SYST MAN CYB, V7, P868
Mansard N., 2009, IEEE T ROBOTICS, V25
Mansard N, 2007, IEEE INT CONF ROBOT, P3041, DOI 10.1109/ROBOT.2007.363934
Mansard N, 2007, IEEE T ROBOT, V23, P60, DOI 10.1109/TRO.2006.889487
Mansard N, 2009, IEEE T AUTOMAT CONTR, V54, P1179, DOI 10.1109/TAC.2009.2019790
Marchand E, 1998, IEEE INT CONF ROBOT, P1988, DOI 10.1109/ROBOT.1998.680607
Mordatch I, 2012, ACM T GRAPHIC, V31, DOI 10.1145/2185520.2185539
NELSON BJ, 1995, INT J ROBOT RES, V14, P255, DOI 10.1177/027836499501400304
NENCHEV DN, 1989, J ROBOTIC SYST, V6, P769, DOI 10.1002/rob.4620060607
Neo ES, 2007, IEEE T ROBOT, V23, P763, DOI 10.1109/TRO.2007.903818
Nocedal J, 2006, SPRINGER SER OPER RE, P1, DOI 10.1007/978-0-387-40065-5
Park J, 2006, IEEE INT CONF ROBOT, P1963, DOI 10.1109/ROBOT.2006.1641993
Pham Q, 2012, P ROB SCI SYST RSS 1
Raunhardt D, 2007, IEEE INT CONF ROBOT, P4414, DOI 10.1109/ROBOT.2007.364159
Remazeilles A., 2006, P IEEE RSJ INT C INT, P4297
Saab L, 2013, IEEE T ROBOT, V29, P346, DOI 10.1109/TRO.2012.2234351
Salini J, 2009, P IEEE INT C ROB AUT, P177
Samson C., 1991, ROBOT CONTROL TASK F
SENTIS L, 2007, THESIS STANFORD U US
SIAN N, 2005, IEEE-ASME T MECH, V10, P546
Siciliano B, 1991, P IEEE INT C ADV ROB, V2, P1211, DOI DOI
10.1109/ICAR.1991.240390
Stasse O, 2008, IEEE INT CONF ROBOT, P3200, DOI 10.1109/ROBOT.2008.4543698
Sung YW, 1996, J ROBOTIC SYST, V13, P275, DOI 10.1002/(SICI)1097-
4563(199605)13:5<275::AID-ROB2>3.0.CO;2-N
Wensing PM, 2013, IEEE INT CONF ROBOT, P3103, DOI 10.1109/ICRA.2013.6631008
Whitney D. E., 1972, ASME, V94, P303, DOI DOI 10.1115/1.3426611
YOSHIKAWA T, 1985, INT J ROBOT RES, V4, P3, DOI 10.1177/027836498500400201
Zanchettin AM, 2012, IEEE T ROBOT, V28, P514, DOI 10.1109/TRO.2011.2173852
NR 62
TC 67
Z9 68
U1 1
U2 16
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD JUN
PY 2014
VL 33
IS 7
BP 1006
EP 1028
DI 10.1177/0278364914521306
PG 23
WC Robotics
SC Robotics
GA AJ3ML
UT WOS:000337570600004
DA 2018-01-22
ER

PT J
AU Okamoto, T
Shiratori, T
Kudoh, S
Nakaoka, S
Ikeuchi, K
AF Okamoto, Takahiro
Shiratori, Takaaki
Kudoh, Shunsuke
Nakaoka, Shin'ichiro
Ikeuchi, Katsushi
TI Toward a Dancing Robot With Listening Capability: Keypose-Based
Integration of Lower-, Middle-, and Upper-Body Motions for Varying Music
Tempos
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Dancing robot; entertainment robot; temporal scaling
ID HUMANOID ROBOT; ENTERTAINMENT; ANIMATION
AB This paper presents the development toward a dancing robot that can listen to and
dance along with musical performances. One of the key components of this robot is the
ability to modify its dance motions for varying tempos, without exceeding motor
limitations, in the same way that human dancers modify their motions. In this paper, we
first observe human performances of the same musical piece at varying tempos, and then
analyze the human modification strategies. The analysis is conducted in terms of three
body components: the lower, middle, and upper bodies. We assume that these body
components serve different purposes and follow different modification strategies in the
performance of a dance. For the motions of all three components, we have found that
certain fixed postures, which we call keyposes, tend to be preserved. Thus, this paper
presents a method to create robot motions at a given music tempo from human motion at
the original music tempo by using these keyposes. We have implemented these algorithms
as an automatic process and validated their effectiveness using the physical humanoid
robot HRP-2. This robot succeeded in performing the Aizu-bandaisan dance, one of the
traditional Japanese folk dances, 1.2 and 1.5 times faster than the originally learned
tempo, while satisfying its physical constraints. Although we have not yet achieved a
dancing robot that autonomously interacts with varying music tempos, we believe that
our method plays a vital role in the dancing-to-music capability.
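The abstract above concerns retiming dance motions to faster tempos without exceeding
motor limits. The following is a minimal, illustrative sketch of tempo scaling for a
single joint trajectory with a velocity clamp; it only shows the motor-limit aspect and
does not implement the paper's keypose-based integration. The trajectory, sampling
period, speed-up factors, and velocity limit are assumed placeholder values.

import numpy as np

def retime(q, dt, speedup, v_max):
    """q: joint angles sampled at period dt; speedup: e.g. 1.2 or 1.5;
    returns the trajectory resampled at the faster tempo with velocities clamped."""
    t_old = np.arange(len(q)) * dt
    t_new = np.arange(0.0, t_old[-1] / speedup, dt)       # same dt, shorter duration
    q_new = np.interp(t_new * speedup, t_old, q)          # linearly resampled motion
    # Clamp the step-to-step change so the joint never exceeds v_max.
    out = [q_new[0]]
    for target in q_new[1:]:
        step = np.clip(target - out[-1], -v_max * dt, v_max * dt)
        out.append(out[-1] + step)
    return np.array(out)

fast = retime(np.sin(np.linspace(0, 2 * np.pi, 200)), dt=0.01, speedup=1.5, v_max=3.0)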
C1 [Okamoto, Takahiro; Ikeuchi, Katsushi] Univ Tokyo, Tokyo 1530041, Japan.
[Shiratori, Takaaki] Microsoft Corp, Microsoft Res Asia, Beijing 100080, Peoples
R China.
[Kudoh, Shunsuke] Univ Electrocommun, Tokyo 1820021, Japan.
[Nakaoka, Shin'ichiro] Natl Inst Adv Ind Sci & Technol, Tokyo 1008921, Japan.
RP Okamoto, T (reprint author), Univ Tokyo, Tokyo 1530041, Japan.
EM tokamoto@cvl.iis.u-tokyo.ac.jp; takaakis@microsoft.com;
kudoh@is.uec.ac.jp; s.nakaoka@aist.go.jp; ki@cvl.iis.u-tokyo.ac.jp
RI Nakaoka, Shin'ichiro/M-5396-2016
OI Nakaoka, Shin'ichiro/0000-0002-2346-1251
FU JSPS KAKENHI [23240026]
FX This work was supported by JSPS KAKENHI under Grant 23240026.
CR Alankus G, 2005, COMPUT ANIMAT VIRT W, V16, P259, DOI 10.1002/cav.99
Santiago C. B., 2012, INT J COMPUT INTELL, V5, P700
Aucouturier JJ, 2008, IEEE INTELL SYST, V23, P74, DOI 10.1109/MIS.2008.22
Bruderlin Armin, 1995, P 22 ANN C COMP GRAP, P97, DOI DOI 10.1145/218380.218421
Gao Q., 2010, P INT C ROB BIOM DEC, P1536
Goto A., 2005, P IEEE INT C ROB AUT, P41
Grunberg D., 2009, P INT C HYBR INF TEC, P221
Heck R, 2006, COMPUT GRAPH FORUM, V25, P459, DOI 10.1111/j.1467-
8659.2006.00965.x
Hsu E, 2005, ACM T GRAPHIC, V24, P1082, DOI 10.1145/1073204.1073315
IKEUCHI K, 1994, IEEE T ROBOTIC AUTOM, V10, P368, DOI 10.1109/70.294211
Inamura T., 2003, P INT C HUM ROB
Inamura T., 2003, P AD MOT AN MACH MAR
INAMURA T, 2003, P INT C INT ROB SYST, V2, P1487
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kaneko K, 2009, P IEEE RAS INT C HUM, P7, DOI DOI 10.1109/ICHR.2009.5379537
Kang SB, 1997, IEEE T ROBOTIC AUTOM, V13, P81, DOI 10.1109/70.554349
Kawade M., 1993, P IEEE INT C ROB AUT, V2, P688
Kosuge K., 2003, P INT C INT ROB SYST, V3, P3459
Kosuge K., 2011, SICE J CONTROL MEAS, V1, P74
Kozima H, 2009, INT J SOC ROBOT, V1, P3, DOI 10.1007/s12369-008-0009-8
Kuroki Y, 2003, IEEE INT CONF ROBOT, P471
Lee HC, 2005, COMPUT GRAPH FORUM, V24, P353, DOI 10.1111/j.1467-
8659.2005.00860.x
Lee J, 1999, COMP GRAPH, P39
McCann J., 2006, P ACM SIGGRAPH EUROG, P205
Miyamoto H, 1998, NEURAL NETWORKS, V11, P1331, DOI 10.1016/S0893-6080(98)00062-8
Mizumoto T., 2010, P IEEE RSJ 2010 WORK, P159
Murata K, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2459, DOI 10.1109/IROS.2008.4650596
Nakaoka S, 2007, INT J ROBOT RES, V26, P829, DOI 10.1177/0278364907079430
Nishiwaki K., 2002, P IEEE RSJ INT C INT, V3, P2684
Ogawara K, 2003, IEEE T IND ELECTRON, V50, P667, DOI 10.1109/TIE.2003.814765
Okamoto T., 2010, P INT C INT ROB SYST, P2256
Oliveira J., 2008, P INT C DIG ARTS, P52
Sakagami Y., 2002, P IEEE RSJ INT C INT, V3, p2478 , DOI DOI
10.1109/IRDS.2002.1041641
Santos CC, 2011, ENSINO, P1, DOI 10.14195/978-989-26-0237-0
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Shiratori T, 2004, SIXTH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND
GESTURE RECOGNITION, PROCEEDINGS, P857, DOI 10.1109/AFGR.2004.1301641
Shiratori T., 2007, P INT C INT ROB SYST, P3251
Shiratori T., 2008, IEEE T COMPUT VIS IM, V1, P34
Shiratori T, 2006, COMPUT GRAPH FORUM, V25, P449, DOI 10.1111/j.1467-
8659.2006.00964.x
Suehiro T., 1992, P INT C INT ROB SYST, V3, P2095
Sun J., 2012, P AUT CONTR ART INT, P287
Takamatsu J, 2006, IEEE T ROBOT, V22, P65, DOI 10.1109/TRO.2005.855988
Tanaka F, 2005, INT C DEVEL LEARN, P142
Tanaka F., 2004, P 13 IEEE INT WORKSH, P419
Ude A, 1999, ROBOT AUTON SYST, V28, P163, DOI 10.1016/S0921-8890(99)00014-7
Xia G., 2012, P 11 INT C AUT AG MU, V1, P205
Yoshii K., 2007, P IEEE RSJ INT C INT
NR 47
TC 5
Z9 6
U1 2
U2 12
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD JUN
PY 2014
VL 30
IS 3
BP 771
EP 778
DI 10.1109/TRO.2014.2300212
PG 9
WC Robotics
SC Robotics
GA AI8CR
UT WOS:000337133700023
OA gold
DA 2018-01-22
ER

PT J
AU Noda, K
Arie, H
Suga, Y
Ogata, T
AF Noda, Kuniaki
Arie, Hiroaki
Suga, Yuki
Ogata, Tetsuya
TI Multimodal integration learning of robot behavior using deep neural
networks
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Object manipulation; Multimodal integration; Cross-modal memory
retrieval; Deep learning
ID MEMORY-IMPAIRED INDIVIDUALS; NEUROLOGICAL DAMAGE; RECOGNITION
AB For humans to accurately understand the world around them, multimodal
integration is essential because it enhances perceptual precision and reduces
ambiguity. Computational models replicating such human ability may contribute to
the practical use of robots in daily human living environments; however, primarily
because of scalability problems that conventional machine learning algorithms
suffer from, sensory-motor information processing in robotic applications has
typically been achieved via modal-dependent processes. In this paper, we propose a
novel computational framework enabling the integration of sensory-motor time-series
data and the self-organization of multimodal fused representations based on a deep
learning approach. To evaluate our proposed model, we conducted two behavior-
learning experiments utilizing a humanoid robot; the experiments consisted of
object manipulation and bell-ringing tasks. From our experimental results, we show
that large amounts of sensory-motor information, including raw RGB images, sound
spectra, and joint angles, are directly fused to generate higher-level multimodal
representations. Further, we demonstrate that our proposed framework realizes the
following three functions: (1) cross-modal memory retrieval utilizing the
information complementation capability of the deep autoencoder; (2) noise-robust
behavior recognition utilizing the generalization capability of multimodal
features; and (3) multimodal causality acquisition and sensory-motor prediction
based on the acquired causality. (C) 2014 The Authors. Published by Elsevier B.V.
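As an illustration of the kind of fused representation described in the abstract above,
the following is a minimal sketch of a multimodal autoencoder that concatenates three
sensory streams, encodes them into a shared code, and reconstructs them; zero-filling a
modality and reading its reconstruction back out gives a simple form of cross-modal
retrieval. The use of PyTorch, the layer sizes, and the input dimensions are
assumptions for illustration, not the paper's architecture or training setup.

import torch
import torch.nn as nn

class MultimodalAutoencoder(nn.Module):
    def __init__(self, dim_vision=256, dim_sound=64, dim_joint=20, dim_code=32):
        super().__init__()
        dim_in = dim_vision + dim_sound + dim_joint
        self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(),
                                     nn.Linear(128, dim_code))
        self.decoder = nn.Sequential(nn.Linear(dim_code, 128), nn.ReLU(),
                                     nn.Linear(128, dim_in))

    def forward(self, vision, sound, joints):
        x = torch.cat([vision, sound, joints], dim=-1)   # early fusion of modalities
        return self.decoder(self.encoder(x))

# Cross-modal retrieval in this setup amounts to zero-filling a missing modality and
# reading its reconstruction back out of the decoder output.
model = MultimodalAutoencoder()
vision = torch.randn(8, 256)
joints = torch.randn(8, 20)
sound_missing = torch.zeros(8, 64)
reconstruction = model(vision, sound_missing, joints)
recovered_sound = reconstruction[:, 256:256 + 64]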
C1 [Noda, Kuniaki; Arie, Hiroaki; Suga, Yuki; Ogata, Tetsuya] Waseda Univ, Grad Sch
Fundamental Sci & Engn, Dept Intermedia Art & Sci, Shinjuku Ku, Tokyo 1698555,
Japan.
RP Noda, K (reprint author), Waseda Univ, Grad Sch Fundamental Sci & Engn, Dept
Intermedia Art & Sci, Shinjuku Ku, 3-4-1 Okubo, Tokyo 1698555, Japan.
EM kuniaki.noda@akane.waseda.jp; arie@aoni.waseda.jp; ysuga@ysuga.net;
ogata@waseda.jp
OI Ogata, Tetsuya/0000-0001-7015-0379
FU JST PRESTO "Information Environment and Humans"; MEXT [24119003]
FX This work has been supported by JST PRESTO "Information Environment and
Humans" and MEXT Grant-in-Aid for Scientific Research on Innovative
Areas "Constructive Developmental Science" (24119003).
CR Aldebaran Robotics, 2012, NAO HUM
Bekkerman R., 2011, SCALING MACHINE LEAR
Bengio Y, 2009, FOUND TRENDS MACH LE, V2, P1, DOI 10.1561/2200000006
Brooks RA, 1998, FIFTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-
98) AND TENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICAL INTELLIGENCE
(IAAI-98) - PROCEEDINGS, P961
Chuck Rosenberg, 2013, IMPROVING PHOTO SEAR
Coen M.H., 2001, INT JOINT C ARTIF IN, V17, P1417
Deneve S, 2004, J PHYSIOLOGY-PARIS, V98, P249, DOI
10.1016/j.jphysparis.2004.03.011
Ernst MO, 2004, TRENDS COGN SCI, V8, P162, DOI 10.1016/j.tics.2004.02.002
Franc V., STAT PATTERN RECOGNI
Gallagher I. I., 2000, TRENDS COGN SCI, V4, P14, DOI DOI 10.1016/S1364-
6613(99)01417-5
Grilli MD, 2011, J INT NEUROPSYCH SOC, V17, P929, DOI 10.1017/S1355617711000737
Grilli MD, 2010, NEUROPSYCHOLOGY, V24, P698, DOI 10.1037/a0020318
Hinton GE, 2006, SCIENCE, V313, P504, DOI 10.1126/science.1127647
Hinton G, 2012, IEEE SIGNAL PROC MAG, V29, P82, DOI 10.1109/MSP.2012.2205597
Jauffret A., 2012, P 12 INT C SIM AD BE, V7426, P136
Dewey J., 1896, PSYCHOL REV, V3, P357
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kawabe T, 2013, P ROY SOC B-BIOL SCI, V280, DOI 10.1098/rspb.2013.0991
Krizhevsky A., 2011, P 19 EUR S ART NEUR
Krizhevsky A., 2012, ADV NEURAL INFORM PR, V25, P1106
Kuriyama T., 2010, P 10 INT C EP ROB OR, P57
LANG KJ, 1990, NEURAL NETWORKS, V3, P23, DOI 10.1016/0893-6080(90)90044-L
Le Quoc V., 2012, P 29 INT C MACH LEAR, V1, P81
Lecun Y, 1998, P IEEE, V86, P2278, DOI 10.1109/5.726791
Martens James, 2011, P 28 INT C MACH LEAR, P1033
Martens J., 2010, P 27 INT C MACH LEAR, P735
Murphy R., 2000, INTRO AI ROBOTICS
Ngiam J., 2011, P 28 INT C MACH LEAR, P689
Ogino M, 2006, ROBOT AUTON SYST, V54, P414, DOI 10.1016/j.robot.2006.01.005
PEARLMUTTER BA, 1994, NEURAL COMPUT, V6, P147, DOI 10.1162/neco.1994.6.1.147
Pitto A., 2012, P IEEE INT C DEV LEA, P1
Pouget A, 2002, NAT REV NEUROSCI, V3, P741, DOI 10.1038/nrn914
Robert Hof, 2013, MEET GUY WHO HELPED
Sakagami Y., 2002, IEEE RSJ INT C INT R, V3, P2478, DOI DOI
10.1109/IRDS.2002.1041641
SAUSER EL, 2006, P 2006 IEEE RSJ INT, P5619
Schraudolph NN, 2002, NEURAL COMPUT, V14, P1723, DOI 10.1162/08997660260028683
Srivastava Nitish, 2012, P ADV NEUR INF PROC, V25, P2231
Stein BE, 1993, MERGING SENSES
Sutskever I., 2011, P 28 INT C MACH LEAR, P1017
Willow Garage, PERSONAL ROBOT 2 PR2
NR 40
TC 26
Z9 27
U1 3
U2 65
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JUN
PY 2014
VL 62
IS 6
BP 721
EP 736
DI 10.1016/j.robot.2014.03.003
PG 16
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA AH9NF
UT WOS:000336468400002
OA gold
DA 2018-01-22
ER

PT J
AU Lee, D
Nakamura, Y
AF Lee, Dongheui
Nakamura, Yoshihiko
TI Motion recognition and recovery from occluded monocular observations
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Statistical inference; Motion recognition; Motion recovery; Motion
capturing; Optical flow; Particle filter; Monocular vision
ID SINGLE UNCALIBRATED IMAGE; HIDDEN MARKOV-MODELS; 3D HUMAN POSE;
BIOLOGICAL MOTION; PREMOTOR CORTEX; TRACKING PEOPLE; HUMANOID ROBOT;
MIMESIS MODEL; BODY POSE; PERCEPTION
AB This paper proposes a method for 3D whole-body motion recovery and motion
recognition from a sequence of occluded monocular camera images based on
statistical inference using a motion database. In the motion database, each motion
primitive (e.g., walk, kick, etc.) is represented in an abstract statistical form.
Instead of extracting rich information by expensive computation of image
processing, we propose an inference mechanism from low level image features (e.g.,
optical flow), inspired by psychological research on how humans perceive motion.
The proposed inference mechanism recovers the 3D body configuration and finds the
closest motion primitive in the motion database. Observations in 2D camera image
space can be recognized even though the motion database is prepared in a different
space (such as joint space) by coordinate transformation of the statistical motion
representation. The approach is view invariant since the demonstrator's baselink
position and orientation with respect to camera coordinates are tracked using an
extended particle filter. Finally, an experimental evaluation of the presented
concepts using a 56-degree-of-freedom articulated human model is discussed. (C)
2014 Elsevier B.V. All rights reserved.
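The abstract above describes recognizing the closest motion primitive in a statistical
motion database from low-level image features. The following is a simplified,
illustrative stand-in for that recognition step, scoring a feature sequence against a
diagonal Gaussian per primitive; the paper's database, coordinate transformation, and
particle-filter tracking are not reproduced here, and the feature values and primitive
names below are made up.

import numpy as np

def log_likelihood(features, mean, var):
    """Sum of per-frame diagonal-Gaussian log-likelihoods for a feature sequence."""
    return np.sum(-0.5 * (np.log(2 * np.pi * var) + (features - mean) ** 2 / var))

def recognize(features, database):
    """database: dict primitive_name -> (mean, var); returns the best-matching name."""
    return max(database, key=lambda name: log_likelihood(features, *database[name]))

database = {
    "walk": (np.array([0.8, 0.1]), np.array([0.05, 0.02])),
    "kick": (np.array([0.2, 0.9]), np.array([0.04, 0.06])),
}
observed = np.array([[0.75, 0.15], [0.82, 0.12]])   # e.g. two frames of flow features
print(recognize(observed, database))                # -> "walk"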
C1 [Lee, Dongheui] Tech Univ Munich, Inst Automat Control Engn, D-80290 Munich,
Germany.
[Nakamura, Yoshihiko] Univ Tokyo, Dept Mechanoinformat, Tokyo 1138654, Japan.
RP Lee, D (reprint author), Tech Univ Munich, Inst Automat Control Engn, D-80290
Munich, Germany.
EM dhlee@tum.de; nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU Promoting Science and Technology, "IRT Foundation to Support Man and
Aging Society"; Technical University Munich, Institute for Advanced
Study; German Excellence Initiative
FX This research is partially supported by Special Coordination Funds for
Promoting Science and Technology, "IRT Foundation to Support Man and
Aging Society" and Technical University Munich, Institute for Advanced
Study, funded by the German Excellence Initiative.
CR Agarwal A, 2006, IEEE T PATTERN ANAL, V28, P44, DOI 10.1109/TPAMI.2006.21
Agarwal A, 2004, PROC CVPR IEEE, P882
Aggarwal JK, 1999, COMPUT VIS IMAGE UND, V73, P428, DOI 10.1006/cviu.1998.0744
Barron C, 2001, COMPUT VIS IMAGE UND, V81, P269, DOI 10.1006/cviu.2000.0888
Belongie S, 2002, IEEE T PATTERN ANAL, V24, P509, DOI 10.1109/34.993558
Bentivegna D.C., 2000, IEEE RAS INT C HUM R
Bentivegna D.C., 2006, IEEE RSJ INT C INT R, P4994
Billard A, 2001, ROBOT AUTON SYST, V37, P145, DOI 10.1016/S0921-8890(01)00155-5
BILMES JA, 1997, ICSITR97021 U BERK
BRAND M, 1999, ICCV, P1237
Bregler C, 2004, INT J COMPUT VISION, V56, P179, DOI
10.1023/B:VISI.0000011203.00237.9b
Bregler C, 1998, PROC CVPR IEEE, P8, DOI 10.1109/CVPR.1998.698581
Calinon S, 2010, IEEE ROBOT AUTOM MAG, V17, P44, DOI 10.1109/MRA.2010.936947
CEDRAS C, 1995, IMAGE VISION COMPUT, V13, P129, DOI 10.1016/0262-8856(95)93154-K
CHOMAT O, 2000, EUR C COMP VIS, P487
CUTTING JE, 1977, B PSYCHONOMIC SOC, V9, P353, DOI 10.3758/BF03337021
Dariush B, 2008, IEEE INT CONF ROBOT, P2677, DOI 10.1109/ROBOT.2008.4543616
Davis JW, 1997, PROC CVPR IEEE, P928, DOI 10.1109/CVPR.1997.609439
Demircan E, 2008, ADVANCES IN ROBOT KINEMATICS: ANALYSIS AND DESIGN, P263, DOI
10.1007/978-1-4020-8600-7_28
Deutscher J, 2005, INT J COMPUT VISION, V61, P185, DOI
10.1023/B:VISI.0000043757.18370.9c
DISSANAYAKE G, 2001, IEEE T ROBOTIC AUTOM, V17, P229, DOI DOI 10.1109/70.938381
Dittrich WH, 1996, PERCEPTION, V25, P727, DOI 10.1068/p250727
Donald M., 1991, ORIGINS MODERN MIND
FATHI A, 2007, ICCV, P1
Fleet D., 2005, MATH MODELS COMPUTER, P238
Fod A, 2002, AUTON ROBOT, V12, P39, DOI 10.1023/A:1013254724861
Forsyth DA, 2005, FOUND TRENDS COMPUT, V1, P77, DOI 10.1561/0600000005
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
Ghahramani Z., 1993, ADV NEURAL INFORM PR, V6, P120
HAYES G, 1994, INT S INT ROB SYST G, P198
Howe NR, 2000, ADV NEUR IN, V12, P820
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Inamura T., 2003, IEEE RSJ INT C HUM R, p1b
INAMURA T, 2005, IEEE RAS INT C HUM R, P469
Janus B., 2005, 12 IEEE INT C ADV RO, P411
JOHANSSON G, 1975, SCI AM, V232, P76, DOI 10.1038/scientificamerican0675-76
JUANG BH, 1985, AT&T TECH J, V64, P391, DOI 10.1002/j.1538-7305.1985.tb00439.x
Just A., 2004, BRIT MACH VIS C
KADONE H, 2005, INT C INT ROB SYST, P2900
Kohlmorgen J, 2002, ADV NEUR IN, V14, P793
KULIC D, 2008, IEEE RJS INT C INT R, P2860
Kulic D, 2008, INT J ROBOT RES, V27, P761, DOI 10.1177/0278364908091153
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Kurihara K, 2002, IEEE ICRA 2002, V2, P1241
Lee D., 2007, IEEE RSJ INT C INT R, P617
Lee D., 2006, IEEE RSJ INT C INT R, P4994
Lee D., 2007, 12 ROB S MARCH, P424
LEE D, 2005, IEEE RSJ INT C INT R, P1911
Lee D, 2006, IEEE T IND ELECTRON, V53, P1737, DOI 10.1109/TIE.2006.881949
Lee D, 2011, AUTON ROBOT, V31, P115, DOI 10.1007/s10514-011-9234-3
Lee D, 2010, INT J ROBOT RES, V29, P1684, DOI 10.1177/0278364910364164
Lee D, 2010, INT J ROBOT RES, V29, P60, DOI 10.1177/0278364909342282
Lu WL, 2009, IMAGE VISION COMPUT, V27, P189, DOI 10.1016/j.imavis.2008.02.008
Lu W.L., 2006, COMP ROB VIS 2006 3, P6
Lucas B. D., 1981, P IM UND WORKSH, V130, P121
MATHER G, 1994, P ROY SOC B-BIOL SCI, V258, P273, DOI 10.1098/rspb.1994.0173
Moeslund TB, 2006, COMPUT VIS IMAGE UND, V104, P90, DOI
10.1016/j.cviu.2006.08.002
Mori G, 2002, LECT NOTES COMPUT SC, V2352, P666
Nakaoka S., 2005, IEEE RSJ INT C INT R, P2769
Nickel K., 2006, IMAGE VISION COMPUT, V3, P1875
Ott Ch., 2008, IEEE RAS INT C HUM R, P399
Poppe R, 2007, COMPUT VIS IMAGE UND, V108, P4, DOI 10.1016/j.cviu.2006.10.016
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Ramanan D, 2005, PROC CVPR IEEE, P271
REN L, 2004, CMUCS04165
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Rosales R, 2000, PROC CVPR IEEE, P721, DOI 10.1109/CVPR.2000.854946
Shechtman E, 2007, IEEE T PATTERN ANAL, V29, P2045, DOI 10.1109/TPAMI.2007.1119
Shi J, 1994, IEEE C COMP VIS PATT
Shon AP, 2007, IEEE INT CONF ROBOT, P2847, DOI 10.1109/ROBOT.2007.363903
Sidenbladh H, 2002, LECT NOTES COMPUT SC, V2350, P784
Sminchisescu C, 2003, INT J ROBOT RES, V22, P371, DOI
10.1177/0278364903022006003
Song Y, 2001, COMPUT VIS IMAGE UND, V81, P303, DOI 10.1006/cviu.2000.0890
Spies H, 2001, EIGHTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOL I,
PROCEEDINGS, P587, DOI 10.1109/ICCV.2001.937571
SUMI S, 1984, PERCEPTION, V13, P283, DOI 10.1068/p130283
Takano W., 2005, 12 IEEE INT C ADV RO, P424
Takano W., 2006, IEEE RAS INT C HUM R, P425
Takano W, 2006, IEEE INT CONF ROBOT, P3602, DOI 10.1109/ROBOT.2006.1642252
Taylor CJ, 2000, PROC CVPR IEEE, P677, DOI 10.1109/CVPR.2000.855885
Tokuda K., 2003, EUR C SPEECH COMM TE, P1
Veltkamp R., 1999, UUCS199927
Wachter S, 1999, COMPUT VIS IMAGE UND, V74, P174, DOI 10.1006/cviu.1999.0758
Weinland D., 2007, ICCV, V11, P1
Yamato J, 1992, IEEE C COMP VIS PATT, P379
Yang HD, 2006, LECT NOTES COMPUT SC, V3832, P619
Yang HD, 2007, IEEE T ROBOT, V23, P256, DOI 10.1109/TRO.2006.889491
Zhang LC, 2012, IEEE INT C INT ROBOT, P2389, DOI 10.1109/IROS.2012.6385968
NR 87
TC 3
Z9 3
U1 0
U2 8
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JUN
PY 2014
VL 62
IS 6
BP 818
EP 832
DI 10.1016/j.robot.2014.02.002
PG 15
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA AH9NF
UT WOS:000336468400008
DA 2018-01-22
ER

PT J
AU Kamide, H
Mae, Y
Takubo, T
Ohara, K
Arai, T
AF Kamide, Hiroko
Mae, Yasushi
Takubo, Tomohito
Ohara, Kenichi
Arai, Tatsuo
TI Direct comparison of psychological evaluation between virtual and real
humanoids: Personal space and subjective impressions
SO INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
LA English
DT Article
DE Psychological evaluation; Personal space; Subjective impressions;
Virtual reality; Humanoid robot
ID BUFFER ZONE; ROBOT; EMOTION; TOUCH
AB The aim of this study was to compare psychological evaluations of a robot
constructed using a virtual reality (VR) system (a VR robot) with those of a real robot. The
same design was used for both the VR and real robot in order to make a direct
comparison. For the psychological evaluation, we measured behavioral reactions (the
amount of personal space the participants desired between themselves and the robot)
and subjective impressions (from a psychological scale). The psychological scale
included six dimensions that are typically used to evaluate a humanoid: utility,
clumsiness of motion, possibility of communication, controllability, vulnerability,
and objective hardness. Sixty-one participants observed both the VR and real robots
walking toward them and reported their level of desired personal space. Next, the
participants evaluated their psychological impressions of the robots. The results
indicated no significant difference in the level of desired personal space between
the situations with the real and VR robots. However, regarding the psychological
dimensions, participants reported higher scores for utility and the possibility of
communication, and lower scores for controllability for the real robot as compared
with the VR robot. The usability of a VR robot is discussed. (C) 2014 Elsevier Ltd.
All rights reserved.
C1 [Kamide, Hiroko; Mae, Yasushi; Arai, Tatsuo] Osaka Univ, Grad Sch Engn Sci,
Toyonaka, Osaka 5608531, Japan.
[Takubo, Tomohito] Osaka City Univ, Grad Sch Engn, Sumiyoshi Ku, Osaka 5588585,
Japan.
[Ohara, Kenichi] Meijo Univ, Fac Sci & Technol Org, Tempaku Ku, Nagoya, Aichi
4680073, Japan.
RP Kamide, H (reprint author), Osaka Univ, Grad Sch Engn Sci, 1-3 Machikaneyamacho,
Toyonaka, Osaka 5608531, Japan.
EM kamide@arai-lab.sys.es.osaka-u.ac.jp
FU Global COE Program "Center of Human-Friendly Robotics Based on Cognitive
Neuroscience" of the Ministry of Education, Culture, Sports, Science and
Technology, Japan
FX We would like to express our appreciation to Honda R&D Co., Ltd. for
cooperation to use a humanoid robot ASIMO. This research was supported
in part by Global COE Program "Center of Human-Friendly Robotics Based
on Cognitive Neuroscience" of the Ministry of Education, Culture,
Sports, Science and Technology, Japan.
CR Becker-Asano C, 2011, AI SOC, V26, P291, DOI 10.1007/s00146-010-0306-2
Bejczy AK, 1996, ETFA '96 - 1996 IEEE CONFERENCE ON EMERGING TECHNOLOGIES AND
FACTORY AUTOMATION, PROCEEDINGS, VOLS 1 AND 2, P7, DOI 10.1109/ETFA.1996.573248
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
BURGOON JK, 1991, J NONVERBAL BEHAV, V15, P233, DOI 10.1007/BF00986924
CRUZNEIRA C, 1993, P 20 ANN C COMP GRAP, P135, DOI DOI 10.1145/166117.166134
Di Salvo C. F., 2002, P 4 C DES INT SYST P, P321
Endo N, 2008, IEEE INT CONF ROBOT, P2140, DOI 10.1109/ROBOT.2008.4543523
Fletcher B, 1996, OCEANS '96 MTS/IEEE, CONFERENCE PROCEEDINGS, VOLS 1-3 /
SUPPLEMENTARY PROCEEDINGS, P65, DOI 10.1109/OCEANS.1996.572546
Gockley R., 2007, P ACM IEEE INT C HUM, DOI [DOI 10.1145/1228716.1228720,
10.1145/1228716.1228720]
Gockley R., 2006, P 1 ANN C HUM ROB IN, P150, DOI DOI 10.1145/1121241.1121268
Goetz J, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P55
HALL Edward T., 1966, HIDDEN DIMENSION
Hayashi K, 2012, ROBOTICS: SCIENCE AND SYSTEMS VII, P121
HOROWITZ MJ, 1964, ARCH GEN PSYCHIAT, V11, P651
JOHNSON KL, 1991, J NONVERBAL BEHAV, V15, P43, DOI 10.1007/BF00997766
Kamide Hiroko, 2010, P 2010 IEEE RSJ INT
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T, 2008, IEEE T ROBOT, V24, P725, DOI 10.1109/TRO.2008.921566
Kiesler S, 2008, SOC COGNITION, V26, P169, DOI 10.1521/soco.2008.26.2.169
KINZEL AF, 1970, AM J PSYCHIAT, V127, P59, DOI 10.1176/ajp.127.1.59
Koay K.L., 2007, P AAAI 2007 SPRING S
KOMORITA SS, 1963, J SOC PSYCHOL, V61, P327, DOI 10.1080/00224545.1963.9919489
Kushida Kazutaka, 2005, P 4 INT JOINT C AUT, P23
Liu CY, 2012, METHODS MOL BIOL, V836, P285, DOI 10.1007/978-1-61779-498-8_18
MATELL MS, 1972, J APPL PSYCHOL, V56, P506, DOI 10.1037/h0033601
Mutlu B., 2012, ACM T INTERACTIVE IN, V1, DOI [10.1145/2070719.2070725, DOI
10.1145/2070719.2070725]
Muto Y., 2009, P IEEE INT S ROB HUM
Nakano Y.I., 2003, P 41 ANN M ASS COMP, P553
Negi S, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P719, DOI 10.1109/ROMAN.2008.4600752
Nomura T, 2004, RO-MAN 2004: 13TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P35, DOI 10.1109/ROMAN.2004.1374726
Nomura T, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P373
Nomura T., 2005, P 14 IEEE INT WORKSH, P125, DOI DOI 10.1109/R0MAN.2005.1513768
Ohara K., 2009, P 18 IEEE INT S ROB, P873
Oskoei M.A., 2010, P 2 INT S NEW FRONT
Pacchierotti E., 2005, P INT C FIELD SERV R, P476
Nass C., 1996, MEDIA EQUATION PEOPL
Research Committee on Human Friendly Robot, 1998, ROBOT SOC JPN, V16, P288
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
Shiomi M., 2012, SIMULATION MODELING, V7628, P185
Tojo T, 2000, IEEE SYS MAN CYBERN, P858, DOI 10.1109/ICSMC.2000.885957
Ujiie Y, 2006, P INT S SYST HUM SCI
Walters M. L., 2005, P IEEE INT WORKSH RO, P347
Yanagihara Y, 1996, RO-MAN '96 - 5TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND
HUMAN COMMUNICATION, PROCEEDINGS, P38, DOI 10.1109/ROMAN.1996.568647
Zlotowski J., 2012, P 7 ACM IEEE INT C H
NR 44
TC 6
Z9 8
U1 2
U2 11
PU ACADEMIC PRESS LTD- ELSEVIER SCIENCE LTD
PI LONDON
PA 24-28 OVAL RD, LONDON NW1 7DX, ENGLAND
SN 1071-5819
EI 1095-9300
J9 INT J HUM-COMPUT ST
JI Int. J. Hum.-Comput. Stud.
PD MAY
PY 2014
VL 72
IS 5
BP 451
EP 459
DI 10.1016/j.ijhcs.2014.01.004
PG 9
WC Computer Science, Cybernetics; Ergonomics; Psychology, Multidisciplinary
SC Computer Science; Engineering; Psychology
GA AG7UV
UT WOS:000335625200001
DA 2018-01-22
ER

PT J
AU Kanehiro, F
Yoshida, E
Yokoi, K
AF Kanehiro, Fumio
Yoshida, Eiichi
Yokoi, Kazuhito
TI Efficient reaching motion planning method for low-level autonomy of
teleoperated humanoid robots
SO ADVANCED ROBOTICS
LA English
DT Article
DE motion planning; inverse kinematics; humanoid; RRT
AB This paper addresses an efficient reaching motion planning method tailored for
teleoperated humanoid robots in complex environments. This method offers low-level
autonomy that allows the robot to autonomously plan and execute simple tasks, thus
making teleoperation easy. Efficiency is achieved by combining the phases of
planning and execution. The planning phase quickly decides on a reaching motion by
approximating mass distribution which enables analytical solutions of inverse
kinematics. The execution phase executes the planned path while compensating for
the approximation error, as long as other constraints are maintained. Simulations
confirm that (1) a reaching motion is planned in approximately one second for the
HRP-2 humanoid robot with 30 degrees of freedom in a constrained environment with
pipes and (2) the execution is done in real-time.
C1 [Kanehiro, Fumio; Yoshida, Eiichi; Yokoi, Kazuhito] Natl Inst Adv Ind Sci &
Technol, Tsukuba, Ibaraki 3058568, Japan.
RP Kanehiro, F (reprint author), Natl Inst Adv Ind Sci & Technol, Tsukuba Cent 2,1-
1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
EM f-kanehiro@aist.go.jp
RI Yoshida, Eiichi/M-3756-2016; Yokoi, Kazuhito/K-2046-2012; Kanehiro,
Fumio/L-8660-2016
OI Yoshida, Eiichi/0000-0002-3077-6964; Yokoi,
Kazuhito/0000-0003-3942-2027; Kanehiro, Fumio/0000-0002-0277-3467
CR Kanoun O, 2011, IEEE T ROBOT, V27, P785, DOI 10.1109/TRO.2011.2142450
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
La Valle S. M., 1998, 9811 IOW STAT U COMP
Yoshida E, 2008, IEEE T ROBOT, V24, P1186, DOI 10.1109/TRO.2008.2002312
NR 5
TC 5
Z9 5
U1 0
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD APR 3
PY 2014
VL 28
IS 7
SI SI
BP 433
EP 439
DI 10.1080/01691864.2013.876931
PG 7
WC Robotics
SC Robotics
GA AB7TX
UT WOS:000331994700002
DA 2018-01-22
ER

PT J
AU Shirafuji, S
Ikemoto, S
Hosoda, K
AF Shirafuji, Shouhei
Ikemoto, Shuhei
Hosoda, Koh
TI Development of a tendon-driven robotic finger for an anthropomorphic
robotic hand
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE humanoid hand; tendon-driven robotic finger; Anthropomorphic robotic
hand design
ID EXTENSOR MECHANISM; INDEX FINGER; BIOMECHANICAL ANALYSIS; INTRINSIC
MUSCLES; DYNAMIC-MODEL; DESIGN; COORDINATION; FORCE; MOVEMENTS
AB Our paper proposes a tendon-driven robotic finger based on an anatomical model of a
human finger and a suitable method for its analysis. Our study aims to realize an
anthropomorphic robotic hand that has the same characteristics and dexterity as those
of a human hand, and it also aims to identify the advantages of the human
musculoskeletal structure for application to the design and control of robot
manipulators. When designing an anthropomorphic robotic hand, several contrivances are
required to apply the human finger structure to a tendon-driven robotic finger. One
reason is that one of the human finger muscles, namely the lumbrical muscle, is
situated between tendons, which is an unfavorable configuration for a tendon-driven
mechanism. Another is that, unlike a standard pulley used in a tendon-driven mechanism,
some moment arms of the human finger change nonlinearly with the joint angle. In our
robotic finger design, we address these difficulties by rearranging the tendons and by
developing a mechanism that changes the moment arm. We also propose a method, based on
a virtual tendon Jacobian matrix, to analyze and control this robotic finger's joints,
which are coordinated through non-stretch branching tendons modeled on the human
extensor mechanism; the advantage is that this constraint virtually reduces the
degrees of freedom (DOF) of the mechanism. Further, we build a prototype to confirm its
motion using this method. In addition, we show that the state with the reduced DOF can
be lost when external forces act on the mechanism, and that this condition can be
changed manually by adjusting the tendon forces. This makes it possible to control the
virtual DOFs to satisfy the requirements of the task. Finally, we discuss the benefits
of anthropomorphic structures, including the tendon arrangement that mimics the human
lumbrical muscle and the above-mentioned mechanism with nonlinear moment arms, from the
perspective that there are two states of DOFs. These insights may provide new
perspectives in the design of robotic hands.
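The virtual tendon Jacobian mentioned in the abstract above plays the role of the
standard tendon-space mapping, in which tendon excursion rates relate to joint rates
through J(q) and tendon tensions map to joint torques through its transpose. The
following is a minimal sketch of that torque mapping; the moment-arm values and
tensions are placeholder numbers, not the finger geometry of the paper, and the sign
convention is one common choice.

import numpy as np

# Convention assumed here: if tendon excursion rates satisfy l_dot = J(q) q_dot
# (a tendon lengthens for positive l_dot), then tensions f >= 0 produce joint
# torques tau = -J(q)^T f, where J[i, j] is the moment arm of tendon i about joint j.

def joint_torques(J, f):
    """Map tendon tensions f [N] to joint torques tau [N m] via tau = -J^T f."""
    return -J.T @ f

J = np.array([[0.010, 0.008, 0.005],    # tendon 1 moment arms about joints 1..3 [m]
              [-0.009, 0.007, 0.004]])  # tendon 2, antagonistic about joint 1
f = np.array([15.0, 10.0])              # tendon tensions [N]
tau = joint_torques(J, f)               # resulting joint torques [N m]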
C1 [Shirafuji, Shouhei; Ikemoto, Shuhei; Hosoda, Koh] Osaka Univ, Grad Sch Informat
Sci & Technol, Dept Multimedia Engn, Suita, Osaka 5650871, Japan.
RP Shirafuji, S (reprint author), Osaka Univ, Grad Sch Informat Sci & Technol, Dept
Multimedia Engn, 2-1 Yamadaoka, Suita, Osaka 5650871, Japan.
EM shirafuji.shouhei@ist.osaka-u.ac.jp
FU JSPS KAKENHI [23650098, 24-3541]
FX This work was supported by JSPS KAKENHI (grant Numbers 23650098,
24-3541).
CR AN KN, 1979, J BIOMECH, V12, P775, DOI 10.1016/0021-9290(79)90163-5
Birglen L, 2008, SPRINGER TRAC ADV RO, V40, P1
BROOK N, 1995, MED ENG PHYS, V17, P54, DOI 10.1016/1350-4533(95)90378-O
BUCHNER HJ, 1988, J BIOMECH, V21, P459, DOI 10.1016/0021-9290(88)90238-2
Butterfass J, 2001, IEEE INT CONF ROBOT, P109, DOI 10.1109/ROBOT.2001.932538
Ceccarelli M, 2006, ROBOTICA, V24, P183, DOI 10.1017/S0263574705002018
COONEY WP, 1977, J BONE JOINT SURG AM, V59, P27
DARLING WG, 1994, J BIOMECH, V27, P479, DOI 10.1016/0021-9290(94)90023-X
de Visser H, 2000, J REHABIL RES DEV, V37, P261
Dennerlein JT, 1998, J BIOMECH, V31, P295, DOI 10.1016/S0021-9290(98)00006-2
GARCIAELIAS M, 1991, J HAND SURG-AM, V16A, P1130, DOI 10.1016/S0363-
5023(10)80079-6
Gialias N, 2004, IEEE INT CONF ROBOT, P3380, DOI 10.1109/ROBOT.2004.1308776
HAHN P, 1995, J HAND SURG-BRIT EUR, V20B, P696, DOI 10.1016/S0266-7681(05)80139-
1
HARRIS C, 1972, J BONE JOINT SURG AM, VA 54, P713, DOI 10.2106/00004623-
197254040-00003
HIROSE S, 1978, MECH MACH THEORY, V13, P351, DOI 10.1016/0094-114X(78)90059-9
Hyodo K, 1993, P ICAM 93 JASM, P278
Inouye J, 2013, HUMAN HAND SOURCE IN
Jacobsen SC, 1986, P IEEE INT C ROB AUT, V3, P1520
Kamper DG, 2003, J NEUROPHYSIOL, V90, P3702, DOI 10.1152/00546.2003
Kobayashi H, 1998, INT J ROBOT RES, V17, P561, DOI 10.1177/027836499801700507
Kuo PL, 2006, J BIOMECH, V39, P2934, DOI 10.1016/j.jbiomech.2005.10.028
Kutch JJ, 2012, PLOS COMPUT BIOL, V8, DOI 10.1371/journal.pcbi.1002434
Kutch JJ, 2011, J BIOMECH, V44, P1264, DOI 10.1016/j.jbiomech.2011.02.014
LEIJNSE JNAL, 1992, J BIOMECH, V25, P1253, DOI 10.1016/0021-9290(92)90281-5
LEIJNSE JNAL, 1995, J BIOMECH, V28, P237, DOI 10.1016/0021-9290(94)00070-K
Ozawa R., 2009, P IEEE INT C ROB AUT, P1522
Pollard N. S., 2002, P IEEE INT C ROB AUT, V4, P3755
RANNEY D, 1988, ANAT REC, V222, P110, DOI 10.1002/ar.1092220116
Salisbury J., 1982, INT J ROBOT RES, V1, P4, DOI [10.1177/027836498200100102,
DOI 10.1177/027836498200100102]
Sancho-Bru JL, 2001, J BIOMECH, V34, P1491, DOI 10.1016/S0021-9290(01)00106-3
Sawada D, 2012, IEEE INT CONF ROBOT, P1501, DOI 10.1109/ICRA.2012.6224809
SPOOR CW, 1976, J BIOMECH, V9, P561, DOI 10.1016/0021-9290(76)90096-8
SPOOR CW, 1983, J BIOMECH, V16, P497, DOI 10.1016/0021-9290(83)90064-7
Tsai L-W., 1999, ROBOT ANAL MECH SERI
Valero-Cuevas FJ, 1998, J BIOMECH, V31, P693, DOI 10.1016/S0021-9290(98)00082-7
Valero-Cuevas FJ, 2000, J BIOMECH, V33, P1601, DOI 10.1016/S0021-9290(00)00131-7
Valero-Cuevas FJ, 2007, IEEE T BIO-MED ENG, V54, P1951, DOI
[10.1109/TBME.2007.906494, 10.1109/FBME.2007.906494]
Valero-Cuevas FJ, 2007, IEEE T BIO-MED ENG, V54, P1161, DOI
10.1109/TBME.2006.889200
Valero-Cuevas FJ, 2009, ADV EXP MED BIOL, V629, P619, DOI 10.1007/978-0-387-
77064-2_33
Vande Weghe M., 2004, P IEEE INT C ROB AUT, P3375
Wilkinson DD, 2003, IEEE INT CONF ROBOT, P238
Zollo L, 2007, IEEE-ASME T MECH, V12, P418, DOI 10.1109/TMECH.2007.901936
NR 42
TC 10
Z9 10
U1 0
U2 27
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD APR
PY 2014
VL 33
IS 5
SI SI
BP 677
EP 693
DI 10.1177/0278364913518357
PG 17
WC Robotics
SC Robotics
GA AG6SL
UT WOS:000335548600002
DA 2018-01-22
ER

PT J
AU Cooney, M
Kanda, T
Alissandrakis, A
Ishiguro, H
AF Cooney, Martin
Kanda, Takayuki
Alissandrakis, Aris
Ishiguro, Hiroshi
TI Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid
Robot
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Interaction design for enjoyment; Playful human-robot interaction;
Full-body gesture recognition; Inertial sensing; Small humanoid robot
ID TECHNOLOGY ACCEPTANCE MODEL; COMMUNICATION; ENTERTAINMENT; ENJOYMENT
AB Robots designed to co-exist with humans in domestic and public environments
should be capable of interacting with people in an enjoyable fashion in order to be
socially accepted. In this research, we seek to set up a small humanoid robot with
the capability to provide enjoyment to people who pick up the robot and play with
it by hugging, shaking and moving the robot in various ways. Inertial sensors
inside a robot can capture how its body is moved when people perform such "full-
body gestures". Unclear is how a robot can recognize what people do during play,
and how such knowledge can be used to provide enjoyment. People's behavior is
complex, and naive designs for a robot's behavior based only on intuitive knowledge
from previous designs may lead to failed interactions. To solve these problems, we
model people's behavior using typical full-body gestures observed in free
interaction trials, and devise an interaction design based on avoiding typical
failures observed in play sessions with a naive version of our robot. The
interaction design is completed by investigating how a robot can provide "reward"
and itself suggest ways to play during an interaction. We then verify
experimentally that our design can be used to provide enjoyment during a playful
interaction. By describing the process of how a small humanoid robot can be
designed to provide enjoyment, we seek to move one step closer to realizing
companion robots which can be successfully integrated into human society.
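A minimal sketch of the kind of full-body-gesture recognition pipeline the abstract refers to is given below. It assumes windows of internal accelerometer data, simple statistical features, and an SVM classifier (the record's reference list cites LIBSVM); the feature set, gesture labels, and synthetic data are illustrative assumptions rather than the authors' actual implementation.

# Hedged sketch: classify full-body gestures (e.g. hugging vs. shaking)
# from a robot's internal accelerometer using window features and an SVM.
import numpy as np
from sklearn.svm import SVC

def window_features(acc):
    # acc: (N, 3) accelerometer samples for one window.
    mag = np.linalg.norm(acc, axis=1)
    return np.hstack([acc.mean(0), acc.std(0), [mag.mean(), mag.std()]])

rng = np.random.default_rng(0)
def synth(label, scale):          # hypothetical stand-in for recorded windows
    return [(window_features(rng.normal(0, scale, (100, 3))), label)
            for _ in range(30)]

data = synth("hug", 0.2) + synth("shake", 2.0)
X = np.array([f for f, _ in data]); y = np.array([l for _, l in data])
clf = SVC(kernel="rbf").fit(X, y)

# Classify a new, vigorously shaken window.
print(clf.predict(window_features(rng.normal(0, 2.0, (100, 3)))[None, :]))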
C1 [Cooney, Martin; Kanda, Takayuki; Ishiguro, Hiroshi] Adv Telecommun Res Inst Int
IRC HIL, Keihanna Sci City, Kyoto, Japan.
[Alissandrakis, Aris] Linnaeus Univ, Ctr Learning & Knowledge Technol Org, S-
35195 Vaxjo, Sweden.
RP Cooney, M (reprint author), Adv Telecommun Res Inst Int IRC HIL, 2-2-2
Hikaridai, Keihanna Sci City, Kyoto, Japan.
EM mcooney@atr.jp; kanda@atr.jp; aris.alissandrakis@lnu.se; ishiguro@atr.jp
RI Kanda, Takayuki/I-5843-2016; Alissandrakis, Aris/F-2265-2015
OI Kanda, Takayuki/0000-0002-9546-5825; Alissandrakis,
Aris/0000-0003-4162-6475
FU Ministry of Internal Affairs and Communications of Japan; JST, CREST
FX We would like to thank everyone who helped with this project. This
research was supported by the Ministry of Internal Affairs and
Communications of Japan and JST, CREST.
CR BARNETT K, 1972, NURS RES, V21, P102
Brooke J, 1996, USABILITY EVALUATION
Brooks AG, 2007, AUTON ROBOT, V22, P55, DOI 10.1007/s10514-006-9005-8
Chang C, 2003, PRACTICAL GUIDE SUPP, V106
Chang C-C, 2001, LIBSVM LIB SUPPORT V
Cooney Martin D., 2011, 2011 11th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2011), P112, DOI 10.1109/Humanoids.2011.6100847
Cooney MD, 2010, IEEE RSJ INT C INT R, P2276, DOI DOI 10.1109/IROS.2010.5650081
Cramer H, 2010, FUN SER WORKSH CSCW
Csikszentmihalyi M, 1993, NEBRASKA S MOTIVATIO
Csikszentmihalyi M., 1975, BOREDOM ANXIETY EXPE
Dautenhahn Kerstin, 2009, Applied Bionics and Biomechanics, V6, P369, DOI
10.1080/11762320903123567
Ellis M. J., 1973, WHY PEOPLE PLAY
Festinger L., 1957, THEORY COGNITIVE DIS
Fujita M, 2004, P IEEE, V92, P1804, DOI 10.1109/JPROC.2004.835364
Garfinkel H., 1967, STUDIES ETHNOMETHODO
GELDARD FA, 1960, SCIENCE, V131, P1583, DOI 10.1126/science.131.3413.1583
Goetz J, 2003, 2003 IEEE INT WORKSH, P55, DOI [10.1109/ROMAN.2003.1251796, DOI
10.1109/ROMAN.2003.1251796]
Hassenzahl M., 2003, MENSCH COMPUTER 2003, P187, DOI DOI 10.1007/978-3-322-
80058-9_19
Hayashi K, 2010, 2010 IEEE RO-MAN, P348, DOI 10.1109/ROMAN.2010.5598661
Heerink M., 2008, P 3 ACM IEEE INT C H, P113, DOI DOI 10.1145/1349822.1349838
Heider F, 1944, AM J PSYCHOL, V57, P243, DOI 10.2307/1416950
Huizinga J., 1950, HOMOLUDENS STUDY PLA
Kahn P. H., 2008, P 3 ACM IEEE INT C H, P97, DOI DOI 10.1145/1349822.1349836
Knight H, 2009, HRI WORKSH ACC ROB S
Kozima H, 2009, INT J SOC ROBOT, V1, P3, DOI 10.1007/s12369-008-0009-8
Kraus MW, 2010, EMOTION, V10, P745, DOI 10.1037/a0019382
Kulic D, 2012, INT J ROBOT RES, V31, P330, DOI 10.1177/0278364911426178
Lewis C., 1993, TASK CTR USER INTERF
Michalowski MP, 2006, 2006 IEEE INT WORKSH, P587, DOI
[10.1109/ROMAN.2006.314453, DOI 10.1109/ROMAN.2006.314453]
Movellan J. R., 2005, P 4 INT C DEV LEARN
Nakatsu R, 2005, LECT NOTES COMPUT SC, V3711, P1
NIELSEN J., 1990, P ACM CHI 90 C SEATT, P249, DOI DOI 10.1145/97243.97281
Nomura T, 2006, INTERACT STUD, V7, P437, DOI 10.1075/is.7.3.14nom
Nehaniv R., 2008, P 3 ACM IEEE INT C H, P17, DOI DOI 10.1145/1349822.1349826
Salter T, 2005, 2005 IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN INTERACTIVE
COMMUNICATION (RO-MAN), P178
Salter T., 2007, P 2 HUM ROB INT WASH, P105, DOI DOI 10.1145/1228716.1228731
Satake S., 2009, P 4 ACM IEEE INT C H, P109, DOI DOI 10.1145/1514095.1514117
Shneiderman B., 2004, DESIGNING USER INTER
Stiehl WD, 2005, 2005 IEEE International Workshop on Robot and Human Interactive
Communication (RO-MAN), P408
Takayama L., 2008, P 3 ACM IEEE INT C H, P25, DOI DOI 10.1145/1349822.1349827
Tanaka F., 2006, P 1 ACM SIGCHI SIGAR, P3, DOI DOI 10.1145/1121241.1121245
Tanaka F, 2007, P NATL ACAD SCI USA, V104, P17954, DOI 10.1073/pnas.0707769104
Turkle S, 2004, 2004 IEEE RAS INT C
Turkle S., 2006, AAAI TECHNICAL REPOR
Vapnik V, 1997, ADV NEUR IN, V9, P281
Venkatesh V, 2000, INFORM SYST RES, V11, P342, DOI 10.1287/isre.11.4.342.11872
Viola P., 2001, IEEE ICCV WORKSH STA
Vorderer P, 2004, COMMUN THEOR, V14, P388, DOI 10.1111/j.1468-
2885.2004.tb00321.x
Wada K, 2007, IEEE T ROBOT, V23, P972, DOI 10.1109/TRO.2007.906261
WATSON D, 1988, J PERS SOC PSYCHOL, V54, P1063, DOI 10.1037/0022-3514.54.6.1063
Williams LE, 2008, SCIENCE, V322, P606, DOI 10.1126/science.1162548
Yi MY, 2003, INT J HUM-COMPUT ST, V59, P431, DOI [10.1016/S1071-5819(03)00114-9,
10.1016/S0171-5819(03)00114-9]
Yohanan S, 2012, INT J SOC ROBOT, V4, P163, DOI 10.1007/s12369-011-0126-7
NR 53
TC 4
Z9 4
U1 1
U2 12
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD APR
PY 2014
VL 6
IS 2
BP 173
EP 193
DI 10.1007/s12369-013-0212-0
PG 21
WC Robotics
SC Robotics
GA AE1WB
UT WOS:000333760900002
DA 2018-01-22
ER

PT J
AU Berenz, V
Suzuki, K
AF Berenz, V.
Suzuki, K.
TI Targets-Drives-Means: A declarative approach to dynamic behavior
specification with higher usability
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE End-user programming; Usability; Application layer; Behavior
specification; Dynamic behaviors; Commercial humanoid robots; TDM
ID ARCHITECTURE; ROBOT; AGENTS
AB Small humanoid robots are becoming more affordable and are now used in fields
such as human-robot interaction, ethics, psychology, or education. For non-
roboticists, the standard paradigm for robot visual programming is based on the
selection of behavioral blocks, followed by their connection using communication
links. These programs provide efficient user support during the development of
complex series of movements and sequential behaviors. However, implementing dynamic
control remains challenging because the data flow between components to enforce
control loops, object permanence, the memories of object positions, odometry, and
finite state machines has to be organized by the users. In this study, we develop a
new programming paradigm, Targets-Drives-Means, which is suitable for the
specification of dynamic robotic tasks. In this proposed approach, programming is
based on the declarative association of reusable dynamic components. A central
memory organizes the information flows automatically and issues related to dynamic
control are solved by processes that remain hidden from the end users. The proposed
approach has advantages during the implementation of dynamic behaviors, but it
requires that users stop conceiving robotic tasks as the execution of a sequence of
actions. Instead, users are required to organize their programs as collections of
behaviors that run in parallel and compete for activation. This might be considered
non-intuitive, but we also report the positive outcomes of a usability experiment,
which evaluated the accessibility of the proposed approach. (C) 2014 Elsevier B.V.
All rights reserved.
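To make the declarative flavor of the approach concrete, the following sketch (assumed names and scoring rule; not the actual Targets-Drives-Means API) declares behaviors as target/drive/means associations that read a shared memory and compete for activation, with the most activated behavior acting on each scheduler tick.

# Minimal, hypothetical sketch of declarative behavior specification with a
# central memory and activation-based competition between behaviors.
class Memory(dict):
    pass

class Behavior:
    def __init__(self, target, drive, means):
        self.target, self.drive, self.means = target, drive, means
    def activation(self, memory):
        return self.drive(memory)          # drives score urgency from memory
    def act(self, memory):
        self.means(memory, self.target)    # means move the robot toward target

memory = Memory(ball_seen=True, battery=0.2)

track_ball = Behavior(
    target="ball",
    drive=lambda m: 1.0 if m.get("ball_seen") else 0.0,
    means=lambda m, t: print(f"tracking {t}"))
go_charge = Behavior(
    target="dock",
    drive=lambda m: 1.0 - m.get("battery", 1.0),
    means=lambda m, t: print(f"navigating to {t}"))

# One scheduler tick: the most activated behavior wins and acts.
winner = max([track_ball, go_charge], key=lambda b: b.activation(memory))
winner.act(memory)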
C1 [Berenz, V.] RIKEN, Brain Sci Inst, Toyoto Collaborat Ctr, Wako, Saitama, Japan.
[Berenz, V.] Univ Tsukuba, Artificial Intelligence Lab, Tsukuba, Ibaraki 305,
Japan.
[Suzuki, K.] Univ Tsukuba, Fac Engn Informat & Syst, Dept Intelligent Interact
Technol, Tsukuba, Ibaraki 305, Japan.
RP Berenz, V (reprint author), RIKEN, Brain Sci Inst, Wako, Saitama, Japan.
EM vincent@brain.riken.jp
CR Aldebaran, 2013, CHOR CORP ALD ROB SO
Anderson M, 2010, SCI AM, V303, P72, DOI 10.1038/scientificamerican1010-72
Arkin Ronald C., 1994, INT S ROB MAN MAUI H, P5
Baillie J.C., 2008, P 3 INT WORKSH SOFTW
Baldassarre G., 2013, COMPUTATIONAL ROBOTI
Bangor A, 2009, J USABILITY STUD, V4, P114
Barakova EI, 2013, ROBOT AUTON SYST, V61, P704, DOI 10.1016/j.robot.2012.08.001
Berenz V., 2011, P 11 IEEE RAS INT C, P179
Bonasso RP, 1997, J EXP THEOR ARTIF IN, V9, P237, DOI 10.1080/095281397147103
Brooke J, 1996, USABILITY EVALUATION
BROOKS R, 1977, INT J MAN MACH STUD, V9, P737, DOI 10.1016/S0020-7373(77)80039-4
Brooks Rodney A., 1990, ARTIF INTELL, P2
Fahland D., 2009, LECT NOTES BUSINESS
Fahland D, 2010, LECT NOTES BUS INF P, V43, P477
Faison T., 1998, P C PATT LANG PROGR, V2, P1
FIRBY RJ, 1989, THESIS YALE U
Ford N., 2008, PRODUCTIVE PROGRAMME
GILMORE DJ, 1984, INT J MAN MACH STUD, V21, P31, DOI 10.1016/S0020-
7373(84)80037-1
Green TRG, 1996, J VISUAL LANG COMPUT, V7, P131, DOI 10.1006/jvlc.1996.0009
GREEN TRG, 1977, J OCCUP PSYCHOL, V50, P93, DOI 10.1111/j.2044-
8325.1977.tb00363.x
Grossman T, 2009, CHI2009: PROCEEDINGS OF THE 27TH ANNUAL CHI CONFERENCE ON
HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1-4, P649
Gruebler A., 2011, P 11 IEEE RAS INT C, P466
Gruebler A, 2012, ADV ROBOTICS, V26, P1143, DOI 10.1080/01691864.2012.686349
Hirokawa M, 2010, LECT NOTES ARTIF INT, V6276, P148, DOI 10.1007/978-3-642-
15387-7_19
Hoshino Y., 2004, IEEE INT C ROB AUT, V4, P4165
Hsiao KY, 2008, CONNECT SCI, V20, P253, DOI 10.1080/09540090802445113
Hurdus JG, 2008, IEEE INT C MULT FUS, P503, DOI 10.1109/MFI.2008.4648045
Jaeger H., 1998, ARTIFICIAL LIFE ROBO, V2, P108, DOI [DOI 10.1007/BF02471165,
10.1007/BF02471165]
Kim G, 2004, IEEE INT CONF ROBOT, P4005
Limbu DK, 2011, IEEE INT C ROB BIOM, P1736
Lindstrom M., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P3278, DOI 10.1109/ROBOT.2000.845168
Microsoft, 2013, MICR ROB DEV STUD 4
Nielsen Jakob, 1997, COMPUTER SCI ENG HDB, P1440
Quigley M., 2009, ICRA WORKSH OP SOURC
Feil Seifer D., 2008, ROB HUM INT COMM ROM, P328
Simmons R., 1998, P C INT ROB SYST IRO
Tanaka F, 2011, ACMIEEE INT CONF HUM, P265, DOI 10.1145/1957656.1957763
Vernon D, 2007, IEEE T EVOLUT COMPUT, V11, P151, DOI 10.1109/TEVC.2006.890274
Villano M, 2011, ACMIEEE INT CONF HUM, P279, DOI 10.1145/1957656.1957770
NR 39
TC 2
Z9 2
U1 0
U2 6
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD APR
PY 2014
VL 62
IS 4
BP 545
EP 555
DI 10.1016/j.robot.2013.12.010
PG 11
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA AE2FV
UT WOS:000333789600013
DA 2018-01-22
ER

PT J
AU Jamone, L
Brandao, M
Natale, L
Hashimoto, K
Sandini, G
Takanishi, A
AF Jamone, Lorenzo
Brandao, Martim
Natale, Lorenzo
Hashimoto, Kenji
Sandini, Giulio
Takanishi, Atsuo
TI Autonomous online generation of a motor representation of the workspace
for intelligent whole-body reaching
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Kinematic workspace; Online sensorimotor learning; Humanoid robots;
Whole-body reaching; Bio-inspired robotics
ID PARIETAL CORTEX; VISUAL SPACE; REGRESSION; ALGORITHM; NEURONS; ROBOTS
AB We describe a learning strategy that allows a humanoid robot to autonomously
build a representation of its workspace: we call this representation Reachable
Space Map. Interestingly, the robot can use this map to: (i) estimate the
Reachability of a visually detected object (i.e. judge whether the object can be
reached for, and how well, according to some performance metric) and (ii) modify
its body posture or its position with respect to the object to achieve better
reaching. The robot learns this map incrementally during the execution of goal-
directed reaching movements; reaching control employs kinematic models that are
updated online as well. Our solution is innovative with respect to previous works
in three aspects: the robot workspace is described using a gaze-centered motor
representation, the map is built incrementally during the execution of goal-
directed actions, and learning is autonomous and online. We implement our strategy on
the 48-DOFs humanoid robot Kobian and we show how the Reachable Space Map can
support intelligent reaching behavior with the whole-body (i.e. head, eyes, arm,
waist, legs). (C) 2014 Elsevier B.V. All rights reserved.
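A minimal sketch of an incrementally learned reachability map of this kind is shown below. It assumes targets are keyed by discretized gaze-centered coordinates (azimuth, elevation, vergence) and that each reaching attempt updates a running reachability score; the bin size and scoring are illustrative assumptions, not the authors' implementation.

# Hedged sketch of an online, incrementally updated Reachable Space Map.
import numpy as np
from collections import defaultdict

BIN = np.radians(10.0)          # angular bin width (assumed)

def key(gaze):                  # gaze = (azimuth, elevation, vergence) [rad]
    return tuple(np.round(np.asarray(gaze) / BIN).astype(int))

class ReachableSpaceMap:
    def __init__(self):
        self.score = defaultdict(float)   # running reachability estimate
        self.count = defaultdict(int)
    def update(self, gaze, performance):  # performance in [0, 1]
        k = key(gaze)
        self.count[k] += 1
        self.score[k] += (performance - self.score[k]) / self.count[k]
    def reachability(self, gaze):
        return self.score.get(key(gaze), 0.0)   # unexplored cells -> 0

rsm = ReachableSpaceMap()
rsm.update((0.1, -0.2, 0.6), performance=0.9)   # successful reach
rsm.update((1.2,  0.0, 0.3), performance=0.1)   # poor reach, far target
print(rsm.reachability((0.12, -0.18, 0.58)))    # ~0.9: nearby, reachable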
C1 [Jamone, Lorenzo; Brandao, Martim; Hashimoto, Kenji; Takanishi, Atsuo] Waseda
Univ, Fac Sci & Engn, Tokyo, Japan.
[Natale, Lorenzo] Ist Italiano Tecnol, iCub Facil, Genoa, Italy.
[Sandini, Giulio] Ist Italiano Tecnol, Dept Robot Brain & Cognit Sci, Genoa,
Italy.
[Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Tokyo, Japan.
RP Jamone, L (reprint author), Inst Super Tecn Torre Norte, Av Rovisco Pais 1, P-
1049001 Lisbon, Portugal.
EM lorenzojamone@gmail.com; martimbrandao@gmail.com; lorenzo.natale@iit.it;
contact@takanishi.mech.waseda.ac.jp; giulio.sandini@iit.it;
contact@takanishi.mech.waseda.ac.jp
RI Jamone, Lorenzo/Q-8633-2016; Brandao, Martim/I-5522-2015; Hashimoto,
Kenji/M-6442-2017
OI Jamone, Lorenzo/0000-0002-1521-6168; Brandao,
Martim/0000-0002-2003-0675; Hashimoto, Kenji/0000-0003-2300-6766;
Natale, Lorenzo/0000-0002-8777-5233
CR ANDERSEN RA, 1985, SCIENCE, V230, P456, DOI 10.1126/science.4048942
Batista AP, 1999, SCIENCE, V285, P257, DOI 10.1126/science.285.5425.257
Blangero A., 2005, ADV COGN PSYCHOL, V1, P9, DOI DOI 10.2478/V10053-008-0039-7
Bosco A, 2010, J NEUROSCI, V30, P14773, DOI 10.1523/JNEUROSCI.2313-10.2010
Brandao M., 2013, INT C HUM ROB
BROTCHIE PR, 1995, NATURE, V375, P232, DOI 10.1038/375232a0
Chinellato E., 2010, T AUTON MENTAL DEV, V4, P43
DUHAMEL JR, 1992, SCIENCE, V255, P90
Nguyen-Tuong D, 2011, COGN PROCESS, V12, P319, DOI 10.1007/s10339-011-0404-1
Endo N., 2010, INT S ROB HUM INT CO
Endo Nobutsuna, 2011, J ROBOTICS MECHATRON, V23, P969
Fiehler K, 2011, VISION RES, V51, P890, DOI 10.1016/j.visres.2010.12.015
Flanders M, 1999, EXP BRAIN RES, V126, P19, DOI 10.1007/s002210050713
Guan Y, 2008, INT J ROBOT RES, V27, P935, DOI 10.1177/0278364908095142
GUPTA KC, 1986, INT J ROBOT RES, V5, P5, DOI 10.1177/027836498600500202
Henriques DYP, 1998, J NEUROSCI, V18, P1583
Hulse M, 2010, IEEE T AUTON MENT DE, V2, P355, DOI 10.1109/TAMD.2010.2081667
Jamone L., 2012, INT C BIOM ROB BIOM
Jamone L., 2011, INT C ROB BIOM IEEE
Jamone L., 2012, INT C INT ROB SYST I
Jamone L, 2012, INT J HUM ROBOT, V9, DOI 10.1142/S021984361250017X
Ligeois A., 1977, T SYST MAN CYBERN, V7, P868
Marjanovic M., 1996, INT C SIM AD BEH MA
McIntyre J., 1997, J NEUROPHYSIOL, V62, P582
Medendorp WP, 2003, J NEUROSCI, V23, P6209
Metta G., 2006, International Journal of Advanced Robotic Systems, V3, P43
Metta G, 1999, NEURAL NETWORKS, V12, P1413, DOI 10.1016/S0893-6080(99)00070-2
Miwa H., 2002, INT C INT ROB SYST I
Natale L., 2007, INT C DEV LEARN
Nguyen-Tuong D, 2009, ADV ROBOTICS, V23, P2015, DOI
10.1163/016918609X12529286896877
Ogura Y., 2006, INT C ROB AUT IEE RA
Parmiggiani A, 2012, INT J HUM ROBOT, V9, DOI 10.1142/S0219843612500272
Pattacini U., 2010, INT C INT ROB SYST
Rougeaux S., 1998, INT WORKSH HUM HUM F
Sigaud O, 2011, ROBOT AUTON SYST, V59, P1115, DOI 10.1016/j.robot.2011.07.006
SOECHTING JF, 1989, J NEUROPHYSIOL, V62, P582
Taiana M, 2010, ROBOT AUTON SYST, V58, P784, DOI 10.1016/j.robot.2010.02.010
Vahrenkamp N, 2012, INT J HUM ROBOT, V9, DOI 10.1142/S0219843612500351
Vijayakumar S., 2000, INT C MACH LEARN ICM
Wachter A, 2006, MATH PROGRAM, V106, P25, DOI 10.1007/s10107-004-0559-y
Zacharias F., 2007, INT C INT ROB SYST I
Zacharias F., 2009, INT C ADV ROB
Zecca M., 2002, INT C INT ROB SYST I
NR 43
TC 6
Z9 6
U1 2
U2 15
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD APR
PY 2014
VL 62
IS 4
BP 556
EP 567
DI 10.1016/j.robot.2013.12.011
PG 12
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA AE2FV
UT WOS:000333789600014
DA 2018-01-22
ER

PT J
AU Polini, A
Prodanov, L
Bhise, NS
Manoharan, V
Dokmeci, MR
Khademhosseini, A
AF Polini, Alessandro
Prodanov, Ljupcho
Bhise, Nupura S.
Manoharan, Vijayan
Dokmeci, Mehmet R.
Khademhosseini, Ali
TI Organs-on-a-chip: a new tool for drug discovery
SO EXPERT OPINION ON DRUG DISCOVERY
LA English
DT Review
DE biomimicry; body-on-a-chip; drug discovery platforms; in vitro models;
in vitro reactors; metabolic biomarkers; microenvironment;
microfluidics; organs-on-a-chip; sensors; tissue engineering
ID ENGINEERED CARDIAC TISSUES; PLURIPOTENT STEM-CELLS; TOTAL BIOASSAY
SYSTEM; IN-VITRO MODEL; MICROFLUIDIC PLATFORM; LIVER-CELLS;
INTESTINAL-ABSORPTION; HEPATIC-METABOLISM; LUNG-CANCER; CULTURE
AB Introduction: The development of emerging in vitro tissue culture platforms can
be useful for predicting human response to new compounds, which has been
traditionally challenging in the field of drug discovery. Recently, several in
vitro tissue-like microsystems, also known as 'organs-on-a-chip', have emerged to
provide new tools for better evaluating the effects of various chemicals on human
tissue.
Areas covered: The aim of this article is to provide an overview of the organs-
on-a-chip systems that have been recently developed. First, the authors introduce
single-organ platforms, focusing on the most studied organs such as liver, heart,
blood vessels and lung. Later, the authors briefly describe tumor-on-a-chip
platforms and highlight their application for testing anti-cancer drugs. Finally,
the article reports a few examples of other organs integrated in microfluidic chips
along with preliminary multiple-organs-on-a-chip examples. The article also
highlights key fabrication points as well as the main application areas of these
devices.
Expert opinion: This field is still at an early stage, and major challenges need
to be addressed before these technologies can be embraced by the
pharmaceutical industry. To produce predictive drug screening platforms, several
organs have to be integrated into a single microfluidic system representative of a
humanoid. The metabolic biomarkers routinely produced by the organ constructs,
as well as their physical environment, have to be monitored prior to and during the
delivery of compounds of interest in order to translate the findings into useful
discoveries.
C1 [Polini, Alessandro; Prodanov, Ljupcho; Bhise, Nupura S.; Manoharan, Vijayan;
Dokmeci, Mehmet R.; Khademhosseini, Ali] Harvard Univ, Sch Med, Brigham & Womens
Hosp, Div Biomed Engn,Dept Med, Cambridge, MA 02139 USA.
[Polini, Alessandro; Prodanov, Ljupcho; Bhise, Nupura S.; Manoharan, Vijayan;
Dokmeci, Mehmet R.; Khademhosseini, Ali] MIT, Harvard Mit Div Hlth Sci & Technol,
Cambridge, MA 02139 USA.
[Khademhosseini, Ali] Harvard Univ, Wyss Inst Biol Inspired Engn, Boston, MA
02115 USA.
[Khademhosseini, Ali] Tohoku Univ, WPI AIMR, Sendai, Miyagi 9808577, Japan.
RP Khademhosseini, A (reprint author), Harvard Univ, Wyss Inst Biol Inspired Engn,
Boston, MA 02115 USA.
EM alik@rics.bwh.harvard.edu
RI Bhise, Nupura/P-1718-2014; Manoharan, Vijayan/B-9474-2016; Polini,
Alessandro/A-2077-2012
OI Manoharan, Vijayan/0000-0002-0170-3125; Polini,
Alessandro/0000-0002-3188-983X; Khademhosseini, Ali/0000-0001-6322-8852
FU Defense Threat Reduction Agency (DTRA) [N66001-13-C-2027]; Office of
Naval Research Young National Investigator Award; National Institutes of
Health [HL092836, DE019024, EB012597, AR057837, DE021468, HL099073,
EB008392]; Presidential Early Career Award for Scientists and Engineers
(PECASE)
FX The authors gratefully acknowledge funding by the Defense Threat
Reduction Agency (DTRA; X. C. E. L Program N66001-13-C-2027). The
content is solely the responsibility of the authors and does not
necessarily represent the official views of the awarding agency. The
authors also acknowledge funding from the Office of Naval Research Young
National Investigator Award, the National Institutes of Health
(HL092836, DE019024, EB012597, AR057837, DE021468, HL099073, EB008392),
and the Presidential Early Career Award for Scientists and Engineers
(PECASE).
CR Agarwal A, 2013, LAB CHIP, V13, P3599, DOI 10.1039/c3lc50350j
Annabi N, 2013, LAB CHIP, V13, P3569, DOI 10.1039/c3lc50252j
Beebe DJ, 2013, LAB CHIP, V13, P3447, DOI 10.1039/c3lc90080k
Bellin M, 2012, NAT REV MOL CELL BIO, V13, P713, DOI 10.1038/nrm3448
Benetton S, 2003, ANAL CHEM, V75, P6430, DOI 10.1021/ac030249+
Berthier E, 2012, LAB CHIP, V12, P1224, DOI 10.1039/c2lc20982a
Bhatia SN, 1999, FASEB J, V13, P1883
Bi H, 2006, ANAL CHEM, V78, P3399, DOI 10.1021/ac0522963
Booth R, 2012, LAB CHIP, V12, P1784, DOI 10.1039/c2lc40094d
Cheah LT, 2012, BIOTECHNOL BIOENG, V109, P1827, DOI 10.1002/bit.24426
Chen L, 2012, CURR PHARM DESIGN, V18, P1217
Chu XY, 2013, EXPERT OPIN DRUG MET, V9, P237, DOI 10.1517/17425255.2013.741589
Chung S, 2009, LAB CHIP, V9, P269, DOI 10.1039/b807585a
Cramer AO, 2013, CURR GENE THER, V13, P139
Cross VL, 2010, BIOMATERIALS, V31, P8596, DOI 10.1016/j.biomaterials.2010.07.072
Depre C, 2007, HEART FAIL REV, V12, P307, DOI 10.1007/s10741-007-9040-3
Douville NJ, 2011, LAB CHIP, V11, P609, DOI 10.1039/c0lc00251h
Elliott NT, 2012, BIOTECHNOL BIOENG, V109, P1326, DOI 10.1002/bit.24397
Esch MB, 2011, ANNU REV BIOMED ENG, V13, P55, DOI [10.1146/annurev-bioeng-
071910-124629, 10.1146/annurev-biocng-071910-124629]
Feng XJ, 2009, ANAL CHIM ACTA, V650, P83, DOI 10.1016/j.aca.2009.04.051
Frohlich EM, 2013, LAB CHIP, V13, P2311, DOI 10.1039/c3lc50199j
GEBHARDT R, 1979, EXP CELL RES, V124, P349, DOI 10.1016/0014-4827(79)90210-6
Grafton MMG, 2011, INTEGR BIOL-UK, V3, P451, DOI [10.1039/c0ib00132e,
10.1039/c0ib00132c]
Griep LM, 2013, BIOMED MICRODEVICES, V15, P145, DOI 10.1007/s10544-012-9699-7
Grosberg A, 2012, J PHARMACOL TOX MET, V65, P126, DOI
10.1016/j.vascn.2012.04.001
Grosberg A, 2011, LAB CHIP, V11, P4165, DOI 10.1039/c1lc20557a
Grosberg A, 2011, PLOS COMPUT BIOL, V7, DOI 10.1371/journal.pcbi.1001088
Gunther A, 2010, LAB CHIP, V10, P2341, DOI 10.1039/c004675b
Guido RVC, 2011, COMB CHEM HIGH T SCR, V14, P830
Guillouzo A, 2008, EXPERT OPIN DRUG MET, V4, P1279, DOI
[10.1517/17425250802414436, 10.1517/17425255.4.10.1279 ]
Guzzardi MA, 2011, TISSUE ENG PT A, V17, P1635, DOI [10.1089/ten.tea.2010.0541,
10.1089/ten.TEA.2010.0541]
Harper AR, 2012, NAT BIOTECHNOL, V30, P1117, DOI 10.1038/nbt.2424
Hasenfuss G, 1998, CARDIOVASC RES, V39, P60, DOI 10.1016/S0008-6363(98)00110-2
Hecker L, 2007, REGEN MED, V2, P125, DOI 10.2217/174607501.2.2.125
Helmke BP, 2005, PHYSIOLOGY, V20, P43, DOI 10.1152/physiol.00040.2004
Higuchi A, 2013, CHEM REV, V113, P3297, DOI 10.1021/cr300426x
Ho CT, 2006, LAB CHIP, V6, P724, DOI 10.1039/b60203d
Hoffmann D, 2010, TOXICOL SCI, V116, P8, DOI 10.1093/toxsci/kfq029
Hughes JP, 2011, BRIT J PHARMACOL, V162, P1239, DOI 10.1111/j.1476-
5381.2010.01127.x
Huh D., 2012, SCI TRANSL MED, V4
Huh D, 2007, P NATL ACAD SCI USA, V104, P18886, DOI 10.1073/pnas.0610868104
Huh D, 2010, SCIENCE, V328, P1662, DOI 10.1126/science.1188302
Imura Y, 2012, ANAL SCI, V28, P197, DOI 10.2116/analsci.28.197
Imura Y, 2010, ANAL CHEM, V82, P9983, DOI 10.1021/ac100806x
Iori E, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0034704
Iyer RK, 2011, CURR OPIN BIOTECH, V22, P706, DOI 10.1016/j.copbio.2011.04.004
Jang KJ, 2013, INTEGR BIOL-UK, V5, P1119, DOI 10.1039/c3ib40049b
Jang KJ, 2010, LAB CHIP, V10, P36, DOI 10.1039/b907515a
Gomez-Lechon MJ, 2008, EXPERT OPIN DRUG MET, V4, P837, DOI
[10.1517/17425255.4.7.837 , 10.1517/17425250802159015]
Khetani SR, 2008, NAT BIOTECHNOL, V26, P120, DOI 10.1038/nbt1361
Kim C, 2012, LAB CHIP, V12, P4135, DOI [10.1039/c21c40570a, 10.1039/c2lc40570a]
Kim HJ, 2013, INTEGR BIOL-UK, V5, P1130, DOI 10.1039/c3ib40126j
Kim HJ, 2012, LAB CHIP, V12, P2165, DOI 10.1039/c2lc40074j
Kim S, 2013, LAB CHIP, V13, P1489, DOI 10.1039/c3lc41320a
Kirby BJ, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0035976
Klauke N, 2007, LAB CHIP, V7, P731, DOI 10.1039/b706175g
Kola I, 2004, NAT REV DRUG DISCOV, V3, P711, DOI 10.1038/nrd1470
Kostadinova R, 2013, TOXICOL APPL PHARM, V268, P1, DOI
10.1016/j.taap.2013.01.012
L'Heureux N, 1998, FASEB J, V12, P47
L'Heureux N, 2006, NAT MED, V12, P361, DOI 10.1038/nm1364
L'Heureux N, 2001, FASEB J, V15, P515, DOI 10.1096/fj.00-0283com
Lee PJ, 2007, BIOTECHNOL BIOENG, V97, P1340, DOI 10.1002/bit.21360
Lee SA, 2013, LAB CHIP, V13, P3529, DOI 10.1039/c3lc50197c
Li Xiujun James, 2006, V321, P199
Mahler GJ, 2009, BIOTECHNOL BIOENG, V104, P193, DOI 10.1002/bit.22366
Markov DA, 2012, LAB CHIP, V12, P4560, DOI 10.1039/c2lc40304h
Martewicz S, 2012, INTEGR BIOL-UK, V4, P153, DOI [10.1039/c1ib00087j,
10.1039/c1lb00087j]
Mauk MG, 2007, ANN NY ACAD SCI, V1098, P467, DOI 10.1196/annals.1384.025
Moraes C, 2013, INTEGR BIOL-UK, V5, P1149, DOI 10.1039/c3ib40040a
MORIN O, 1986, J CELL PHYSIOL, V129, P103, DOI 10.1002/jcp.1041290115
Moya ML, 2013, TISSUE ENG PART C-ME, V19, P730, DOI [10.1089/ten.tec.2012.0430,
10.1089/ten.TEC.2012.0430]
Nakao Y, 2011, BIOMICROFLUIDICS, V5, DOI 10.1063/1.3580753
Nawy T, 2013, NAT METHODS, V10, P198, DOI 10.1038/nmeth.2395
Niklason LE, 1999, SCIENCE, V284, P489, DOI 10.1126/science.284.5413.489
Noguera R, 2012, HISTOL HISTOPATHOL, V27, P693, DOI 10.14670/HH-27.693
Paguirigan AL, 2009, INTEGR BIOL, V1, P182, DOI 10.1039/b814565b
Palchesko RN, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0051499
Parker KK, 2008, CIRC RES, V103, P340, DOI 10.1161/CIRCRESAHA.108.182469
Prabhakarpandian B, 2013, LAB CHIP, V13, P1093, DOI 10.1039/c2lc41208j
Prabhakarpandian B, 2011, MICROVASC RES, V82, P210, DOI
10.1016/j.mvr.2011.06.013
Ramadan Q, 2013, LAB CHIP, V13, P196, DOI 10.1039/c2lc40845g
Ramon-Azcon J, 2013, ADV MATER, V25, P4028, DOI 10.1002/adma.201301300
Resto PJ, 2012, LAB CHIP, V12, P2221, DOI 10.1039/c2lc20858j
Ricotti L, 2012, BIOMED MATER, V7, DOI 10.1088/1748-6041/7/3/035010
ROBERTS DE, 1982, CIRC RES, V50, P342
Saint DA, 2008, BRIT J PHARMACOL, V153, P1133, DOI 10.1038/sj.bjp.0707492
Saito Y, 2012, BIOCHEM BIOPH RES CO, V426, P33, DOI 10.1016/j.bbrc.2012.08.012
Salama G, 2007, J PHYSIOL-LONDON, V578, P43, DOI 10.1113/jphysiol.2006.118745
Selimovic S, 2013, CURR OPIN PHARMACOL, V13, P829, DOI
10.1016/j.coph.2013.06.005
Sivaraman A, 2005, CURR DRUG METAB, V6, P569, DOI 10.2174/138920005774832632
Snippert HJ, 2011, NAT PROTOC, V6, P1221, DOI 10.1038/nprot.2011.365
Spence JR, 2011, NATURE, V470, P105, DOI 10.1038/nature09691
Sung JH, 2013, LAB CHIP, V13, P1201, DOI 10.1039/c3lc41017j
Sung JH, 2011, LAB CHIP, V11, P389, DOI [10.1039/c0lc00273a, 10.1039/c01c00273a]
Sung JH, 2010, LAB CHIP, V10, P446, DOI 10.1039/b917763a
Sung JH, 2009, BIOTECHNOL BIOENG, V104, P516, DOI 10.1002/bit.22413
Sung JH, 2009, LAB CHIP, V9, P1385, DOI 10.1039/b901377f
Takebe T, 2013, NATURE, V499, P481, DOI 10.1038/nature12271
Tam Anthony, 2011, Ther Adv Respir Dis, V5, P255, DOI 10.1177/1753465810396539
Tavana H, 2011, BIOMED MICRODEVICES, V13, P731, DOI 10.1007/s10544-011-9543-5
Toepke MW, 2006, LAB CHIP, V6, P1484, DOI 10.1039/b612140c
Tremblay PL, 2005, J PHARMACOL EXP THER, V315, P510, DOI 10.1124/jpet.105.089524
Unadkat HV, 2011, P NATL ACAD SCI USA, V108, P16565, DOI 10.1073/pnas.1109861108
van de Stolpe A, 2013, LAB CHIP, V13, P3449, DOI 10.1039/c3lc50248a
van Midwoud PM, 2011, INTEGR BIOL-UK, V3, P509, DOI 10.1039/c0ib00119h
van Midwoud PM, 2010, LAB CHIP, V10, P2778, DOI [10.1039/c0lc00043d,
10.1039/c01c00043d]
Vendelin M, 2002, AM J PHYSIOL-HEART C, V283, pH1072, DOI
10.1152/ajpheart.00874.2001
Wagner I, 2013, LAB CHIP, V13, P3538, DOI 10.1039/c3lc50234a
Weltin A, 2014, LAB CHIP, V14, P138, DOI 10.1039/c3lc50759a
Wikswo JP, 2013, LAB CHIP, V13, P3496, DOI 10.1039/c3lc50243k
Wikswo JP, 2013, IEEE T BIO-MED ENG, V60, P682, DOI 10.1109/TBME.2013.2244891
Wlodkowic D, 2010, CURR OPIN CHEM BIOL, V14, P556, DOI
10.1016/j.cbpa.2010.08.016
Wu DP, 2006, LAB CHIP, V6, P942, DOI 10.1039/b600765a
Xu ZY, 2013, BIOMATERIALS, V34, P4109, DOI 10.1016/j.biomaterials.2013.02.045
Yang CG, 2011, LAB CHIP, V11, P3305, DOI 10.1039/c1lc20123a
Yang Y, 2013, BIOTECHNOL BIOENG, V110, P958, DOI 10.1002/bit.24752
Young EWK, 2013, INTEGR BIOL-UK, V5, P1096, DOI 10.1039/c3ib40076j
Zervantonakis IK, 2011, BIOMICROFLUIDICS, V5, DOI 10.1063/1.3553237
Zguris JC, 2005, BIOMED MICRODEVICES, V7, P117, DOI 10.1007/s10544-005-1589-9
Zhang C, 2009, LAB CHIP, V9, P3185, DOI 10.1039/b915147h
Zhang LC, 2010, ELECTROPHORESIS, V31, P3763, DOI 10.1002/elps.201000265
Zhang YZ, 2011, PLOS ONE, V6, DOI 10.1371/journal.pone.0020319
Zhao L, 2013, J BIOMED NANOTECHNOL, V9, P348, DOI 10.1166/jbn.2013.1546
Zheng Y, 2012, P NATL ACAD SCI USA, V109, P9342, DOI 10.1073/pnas.1201240109
Zhou JW, 2010, ELECTROPHORESIS, V31, P2, DOI 10.1002/elps.200900475
NR 125
TC 60
Z9 62
U1 31
U2 253
PU INFORMA HEALTHCARE
PI LONDON
PA TELEPHONE HOUSE, 69-77 PAUL STREET, LONDON EC2A 4LQ, ENGLAND
SN 1746-0441
EI 1746-045X
J9 EXPERT OPIN DRUG DIS
JI Expert. Opin. Drug Discov.
PD APR
PY 2014
VL 9
IS 4
BP 335
EP 352
DI 10.1517/17460441.2014.886562
PG 18
WC Pharmacology & Pharmacy
SC Pharmacology & Pharmacy
GA AD4EE
UT WOS:000333199000001
PM 24620821
DA 2018-01-22
ER

PT J
AU Matsumura, R
Shiomi, M
Miyashita, T
Ishiguro, H
Hagita, N
AF Matsumura, Reo
Shiomi, Masahiro
Miyashita, Takahiro
Ishiguro, Hiroshi
Hagita, Norihiro
TI Who is Interacting With me? Identification of an Interacting Person
Through Playful Interaction With a Small Robot
SO IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS
LA English
DT Article
DE Human-robot interaction; humanoid robot; identification of persons
AB Small robots are being designed to recognize behaviors through playful
interaction. Prior work used data from impoverished sensing devices such as
inertial sensors to analyze gestures and attitude in playful interaction through
time series analysis. However, the prior work did not focus on individual
differences required for person identification. This research hypothesizes that
person identification can be achieved by capturing individual differences in
playful interaction from inertial sensor data. We propose a method that
iteratively narrows down the candidates during interaction to achieve accurate
person identification. This method calculates the features using a time series of
the inertial sensor data. These features identify a candidate who is playfully
interacting with the robot using a decision tree classifier that includes
combinations of the current candidates. The system stores the results as a dataset
for voting, and the voting results are used to reduce the candidates until the
number of candidates is winnowed to one. Evaluation results show that our proposed
method identifies persons through playful interactions with 99.1% accuracy.
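The sketch below gives a simplified, hedged version of this idea: per-window inertial features are classified with a decision tree over the remaining candidates, votes are accumulated, and low-vote candidates are dropped until one remains. The features, synthetic enrollment data, and winnowing threshold are assumptions for illustration, not the exact pairwise procedure of the paper.

# Simplified sketch of person identification by voting over decision-tree
# classifications of inertial-feature windows.
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
def features(acc):                         # (N, 3) window -> feature vector
    return np.hstack([acc.mean(0), acc.std(0)])

# Hypothetical enrollment data: each person shakes the robot differently.
people = {"A": 0.5, "B": 1.0, "C": 2.0}
X = []; y = []
for name, scale in people.items():
    for _ in range(40):
        X.append(features(rng.normal(0, scale, (50, 3)))); y.append(name)
clf = DecisionTreeClassifier(random_state=0).fit(np.array(X), np.array(y))

candidates = set(people)
votes = Counter()
while len(candidates) > 1:
    window = rng.normal(0, 2.0, (50, 3))           # unknown person playing
    guess = clf.predict(features(window)[None, :])[0]
    if guess in candidates:
        votes[guess] += 1
    if sum(votes.values()) >= 10:                  # winnow after 10 votes
        candidates.discard(min(candidates, key=lambda c: votes[c]))
        votes = Counter({c: votes[c] for c in candidates})
print("identified:", candidates.pop())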
C1 [Matsumura, Reo; Shiomi, Masahiro; Miyashita, Takahiro; Hagita, Norihiro] ATR
Intelligent Robot & Commun Labs, Kyoto 6190288, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Osaka 5650871, Japan.
RP Matsumura, R (reprint author), Univ Tokyo, Tokyo 1138654, Japan.
EM reo.matsumura@gmail.com; m-shiomi@atr.jp; miyasita@atr.jp;
ishiguro@sys.es.osaka-u.ac.jp; hagita@atr.jp
OI SHIOMI, Masahiro/0000-0003-4338-801X
FU Strategic Information and Communications R&D Promotion Programme;
Ministry of Internal Affairs and Communications [132107010]
FX This work was supported by the Strategic Information and Communications
R&D Promotion Programme and the Ministry of Internal Affairs and
Communications (132107010). This paper was recommended by Associate
Editor A. Howard.
CR Cooney M. D., 2010, P IEEE RSJ INT C INT
Cooney M. D., 2011, P 11 IEEE RAS INT C, P112
Francois D., 2007, P 2 ACM IEEE INT C H, P295
Francois D., 2008, P IEEE RAS INT C HUM, P353
Fu TC, 2011, ENG APPL ARTIF INTEL, V24, P164, DOI 10.1016/j.engappai.2010.09.007
Giguere P, 2011, IEEE T ROBOT, V27, P534, DOI 10.1109/TRO.2011.2119910
GOWER JC, 1966, BIOMETRIKA, V53, P325, DOI 10.1093/biomet/53.3-4.325
Ikeda T., 2012, P INT C PERV EMB COM
Kanda T, 2004, HUM-COMPUT INTERACT, V19, P61, DOI 10.1207/s15327051hci1901&2_4
Kanda T, 2010, IEEE T ROBOT, V26, P897, DOI 10.1109/TRO.2010.2062550
Lee J. K., 2008, P IEEE INT S ROB HUM
Matsumura R, 2008, INT J HUM ROBOT, V5, P353, DOI 10.1142/S0219843608001467
Miyashita T., 2006, P IEEE RSJ INT C INT, P3468
Quinlan J. R., 1993, C4 5 PROGRAMS MACHIN
Sabelli AM, 2011, ACMIEEE INT CONF HUM, P37, DOI 10.1145/1957656.1957669
Salter T., 2005, P IEEE INT WORKSH RO, P178
Salter T., 2007, P 2 ACM IEEE INT C H
Shibata T, 2001, IEEE INT CONF ROBOT, P2572, DOI 10.1109/ROBOT.2001.933010
Stiehl W. D., 2005, P IEEE INT WORKSH RO
Tanaka F., 2006, P 1 ACM SIGCHI SIGAR, P3, DOI DOI 10.1145/1121241.1121245
Tanaka F, 2007, P NATL ACAD SCI USA, V104, P17954, DOI 10.1073/pnas.0707769104
Vapnik V, 1995, NATURE STAT LEARNING
Wada K, 2007, IEEE T ROBOT, V23, P972, DOI 10.1109/TRO.2007.906261
Woodman O., 2008, P 10 INT C UB COMP, P114, DOI DOI 10.1145/1409635.1409651
Ye LX, 2011, DATA MIN KNOWL DISC, V22, P149, DOI 10.1007/s10618-010-0179-5
NR 25
TC 3
Z9 3
U1 0
U2 6
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2168-2291
EI 2168-2305
J9 IEEE T HUM-MACH SYST
JI IEEE T. Hum.-Mach. Syst.
PD APR
PY 2014
VL 44
IS 2
BP 169
EP 179
DI 10.1109/THMS.2013.2296872
PG 11
WC Computer Science, Artificial Intelligence; Computer Science, Cybernetics
SC Computer Science
GA AD2XQ
UT WOS:000333100500002
DA 2018-01-22
ER

PT J
AU Baddoura, R
Venture, G
AF Baddoura, Ritta
Venture, Gentiane
TI Human motion characteristics in relation to feeling familiar or
frightened during an announced short interaction with a proactive
humanoid
SO FRONTIERS IN NEUROROBOTICS
LA English
DT Article
DE social robotics; human-robot interaction; affective state; motion
measures; mathematical modeling; assistive robot; familiar; fear
ID EXPRESSIVE BODY MOVEMENT; EMOTION; ROBOTS; PERCEPTION
AB During an unannounced encounter between two humans and a proactive humanoid
(NAO, Aldebaran Robotics), we study the dependencies between the human partners'
affective experience (measured via the answers to a questionnaire) particularly
regarding feeling familiar and feeling frightened, and their arm and head motion
[frequency and smoothness using Inertial Measurement Units (IMU)]. NAO starts and
ends its interaction with its partners by non-verbally greeting them hello (bowing)
and goodbye (moving its arm). The robot is invested with a real and useful task to
perform: handing each participant an envelope containing a questionnaire they need
to answer. NAO's behavior varies from one partner to the other (Smooth with X vs.
Resisting with Y). The results show high positive correlations between feeling
familiar while interacting with the robot and: the frequency and smoothness of the
human arm movement when waving back goodbye, as well as the smoothness of the head
during the whole encounter. Results also show a negative dependency between feeling
frightened and the frequency of the human arm movement when waving back goodbye.
The principal component analysis (PCA) suggests that, with regard to the various
motion measures examined in this paper, the head smoothness and the goodbye gesture
frequency are the most reliable measures when it comes to considering the familiar
experienced by the participants. The PCA also points out the irrelevance of the
goodbye motion frequency when investigating the participants' experience of fear in
its relation to their motion characteristics. The results are discussed in light of
the major findings of studies on body movements and postures accompanying specific
emotions.
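For readers who want to reproduce this style of analysis, a minimal sketch (entirely synthetic data and invented variable names) of computing a Pearson correlation between a motion measure and a questionnaire rating, plus a small PCA over the motion measures, is given below.

# Hedged analysis sketch: correlation and PCA over assumed motion measures.
import numpy as np

rng = np.random.default_rng(2)
familiar = rng.uniform(1, 7, 20)                       # questionnaire scores
wave_freq = 0.3 * familiar + rng.normal(0, 0.3, 20)    # correlated measure
head_smooth = 0.5 * familiar + rng.normal(0, 0.5, 20)

r = np.corrcoef(familiar, wave_freq)[0, 1]
print(f"Pearson r (familiar vs. wave frequency): {r:.2f}")

# PCA via SVD of the standardized motion-measure matrix.
M = np.column_stack([wave_freq, head_smooth])
M = (M - M.mean(0)) / M.std(0)
_, s, _ = np.linalg.svd(M, full_matrices=False)
print("explained variance ratio:", (s**2 / (s**2).sum()).round(2))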
C1 [Baddoura, Ritta] Stem Cell & Brain Res Inst, INSERM, Cortical Networks Cognit
Interact, U846, Bron, France.
[Venture, Gentiane] Tokyo Univ Agr & Technol, Dept Mech Syst Engn, GV Lab,
Koganei, Tokyo, Japan.
RP Venture, G (reprint author), Tokyo Univ Agr & Technol, Dept Mech Syst Engn, GV
Lab, 2-24-16 Nakacho, Koganei, Tokyo, Japan.
EM venture@cc.tuat.ac.jp
RI Venture, Gentiane/E-7060-2013
OI Venture, Gentiane/0000-0001-7767-4765
FU Universite Paul-Valery Montpellier 3 (France); Women's Future
Development Organization
FX This research was partially supported by grants from Universite
Paul-Valery Montpellier 3 (France) and Women's Future Development
Organization.
CR Atkinson AP, 2004, PERCEPTION, V33, P717, DOI 10.1068/p5096
Baddoura R., 2012, P IEEE RAS INT C HUM, P234
BADDOURA R, 2013, INT WORKSH DEV SOC R, P25
Baddoura R, 2013, INT J SOC ROBOT, V5, P529, DOI 10.1007/s12369-013-0207-x
Bartneck C, 2009, 18 IEEE INT S ROB HU, P269, DOI DOI 10.1109/ROMAN.2009.5326351
BERNHARDT D, 2010, 787 UCAM COMP LAB, P787
Boone RT, 2001, J NONVERBAL BEHAV, V25, P21, DOI 10.1023/A:1006733123708
Boone RT, 1998, DEV PSYCHOL, V34, P1007, DOI 10.1037//0012-1649.34.5.1007
Brayda L, 2012, INT J SOC ROBOT, V4, P219, DOI 10.1007/s12369-012-0150-2
CAMRAS LA, 1993, J NONVERBAL BEHAV, V17, P171, DOI 10.1007/BF00986118
CANAMERO LD, 2002, SOCIALLY INTELLIGENT, V3, P69, DOI DOI 10.1007/0-306-47373-98
CASTELLANO G, 2007, P AFF COMP INT INT 2, V4738, P71
Dael N, 2013, PERCEPTION, V42, P642, DOI 10.1068/p7364
Dahl S, 2007, MUSIC PERCEPT, V24, P433, DOI 10.1525/MP.2007.24.5.433
Darwin C., 1872, EXPRESSION EMOTIONS
DEMEIJER M, 1989, J NONVERBAL BEHAV, V13, P247, DOI 10.1007/BF00990296
Karwowski W., 2001, ENCY HUMAN FACTORS E
EKMAN P, 1967, PERCEPT MOTOR SKILL, V24, P711, DOI 10.2466/pms.1967.24.3.711
Fanaswala I, 2011, ACMIEEE INT CONF HUM, P133, DOI 10.1145/1957656.1957697
Fischer K, 2011, ACMIEEE INT CONF HUM, P53, DOI 10.1145/1957656.1957672
Fong T, 2003, ROBOT AUTON SYST, V42, P143, DOI 10.1016/S0921-8890(02)00372-X
FREUD S, 1919, THE UNCANNY UNHEIMLI
Frijda N. H., 1986, EMOTIONS
Gillespie D. L., 1983, SOC THEORY, V1, P120, DOI [10.2307/202049, DOI
10.2307/202049]
Harrigan J.A., 2005, NEW HDB METHODS NONV, P137
Jentsch Ernst, 1906, PSYCHIAT NEUROLOGISC, V8, P203
Jentsch E., 1906, PSYCHIAT NEUROLOGISC, V22, P195
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T., 2003, P INT JOINT C ART IN, P177
KIM A, 2013, 8 ACM IEEE INT C HUM, P159
Lee EJ, 2008, INT J HUM-COMPUT ST, V66, P789, DOI 10.1016/j.ijhcs.2008.07.009
Lee N., 2011, P 6 INT C HUM ROB IN, P183
Lv Y., 2010, 2010 2 WORLD C NABIC, P334
MILLER RL, 1976, J ABNORMAL SOCIAL PS, V59, P1, DOI DOI 10.1037/H0049330
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Nehaniv C., 2005, P AISB 05 S ROB COMP, P74
Pollick FE, 2001, COGNITION, V82, pB51, DOI 10.1016/S0010-0277(01)00147-0
Rohrer B, 2002, J NEUROSCI, V22, P8297
Sabelli AM, 2011, ACMIEEE INT CONF HUM, P37, DOI 10.1145/1957656.1957669
Salem M, 2011, P INT C SOC ROB, P31
Scherer KR, 1990, ENZYKLOPADIE PSYCHOL, P345
TAKANO E, 2009, P IEEE RSJ INT C INT, P3721
Turkle S., 2006, AAAI TECHNICAL REPOR
Venture G, 2013, ACMIEEE INT CONF HUM, P247, DOI 10.1109/HRI.2013.6483594
Wallbott HG, 1998, EUR J SOC PSYCHOL, V28, P879, DOI 10.1002/(SICI)1099-
0992(1998110)28:6<879::AID-EJSP901>3.0.CO;2-W
WALTERS ML, 2008, P 17 IEEE INT S ROB, P707
WANG SH, 2012, P IEEE RAS 12 INT C, V2, P263
ZAJONC RB, 1968, J PERS SOC PSYCHOL, V9, P1, DOI 10.1037/h0025848
Zhang R, 2011, P 10 INT C AUT AG M, P1301
NR 49
TC 1
Z9 1
U1 1
U2 12
PU FRONTIERS RESEARCH FOUNDATION
PI LAUSANNE
PA PO BOX 110, LAUSANNE, 1015, SWITZERLAND
SN 1662-5218
J9 FRONT NEUROROBOTICS
JI Front. Neurorobotics
PD MAR 20
PY 2014
VL 8
AR 12
DI 10.3389/fnbot.2014.00012
PG 15
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA CA3MJ
UT WOS:000348810600001
PM 24688466
OA gold
DA 2018-01-22
ER

PT J
AU Koike, H
Kanemasu, K
Itakura, K
Okazaki, S
Takamiya, M
Santos, EC
Kida, K
AF Koike, H.
Kanemasu, K.
Itakura, K.
Okazaki, S.
Takamiya, M.
Costa Santos, E.
Kida, K.
TI Measurement of fatigue and wear of PEEK bush and A7075 cam plate in
humanoid robot joints
SO MATERIALS RESEARCH INNOVATIONS
LA English
DT Article
DE Fatigue; Wear; PEEK; 7075 aluminium alloy; Backlash; Robot joint
ID PTFE RETAINER; DRY CONTACT; FRICTION; COMPOSITES; BEARINGS
AB Machine elements in humanoid robots for life support require light weight,
high accuracy, and high output torque. In order to achieve low transmission error in
the humanoid robot joints, the wear of reinforced poly-ether-ether-ketone (PEEK)
bushes against 7075 aluminium alloy cam plates is investigated. Sliding fatigue wear
tests of PEEK bushes in the humanoid robot leg joint were performed under various
load torques. In the case of the bush under a 127 Nm load torque at a 5.7 mm s(-1)
sliding velocity, the backlash values were stable after the running-in process of the
fatigue wear test. A PEEK film was formed by the friction between the PEEK bush and
the 7075 cam plate under compression load. The oscillation angle of the output axis
in the robot joint was stable after the early stage because of the solid lubrication
provided by this film. This means that the backlash value was stable and decreased
owing to the PEEK film between the bush and the cam plate.
C1 [Koike, H.] Yokohama Natl Univ, Yokohama, Kanagawa 2408501, Japan.
[Kanemasu, K.] Yosinori Ind, Nishi Yodogawa ku, Osaka 5550034, Japan.
[Itakura, K.; Okazaki, S.; Takamiya, M.] Kyushu Univ, Nishi Ku, Fukuoka 8190395,
Japan.
[Costa Santos, E.] Univ Fed Santa Catarina, BR-88040900 Florianopolis, SC,
Brazil.
[Kida, K.] Toyama Univ, Toyama 9308555, Japan.
RP Koike, H (reprint author), Yokohama Natl Univ, 79-5 Tokiwadai, Yokohama,
Kanagawa 2408501, Japan.
EM koikeh@ynu.ac.jp
RI Koike, Hitonobu/G-3699-2011; Kida, Katsuyuki/G-2315-2011; U-ID,
Kyushu/C-5291-2016
OI Koike, Hitonobu/0000-0002-1307-7045; Kida,
Katsuyuki/0000-0002-9801-5205;
FU Strategic Fundamental Technologies Strengthening Assistance Program, the
Ministry of Economy, Trade and Industry, Japan [22152712210]
FX This research work was financially supported by the Strategic Fundamental
Technologies Strengthening Assistance Program, the Ministry of
Economy, Trade and Industry, Japan (no. 22152712210).
CR Bahadur S, 2000, WEAR, V245, P92, DOI 10.1016/S0043-1648(00)00469-5
Bijwe J, 2005, WEAR, V258, P1536, DOI 10.1016/j.wear.2004.10.008
Dearn K. D., 2009, POLYM TRIBOLOGY, P470
FRIEDRICH K, 1991, WEAR, V148, P235, DOI 10.1016/0043-1648(91)90287-5
Hanchi J, 1997, WEAR, V203, P380, DOI 10.1016/S0043-1648(96)07347-4
Harrass M, 2010, TRIBOL INT, V43, P635, DOI 10.1016/j.triboint.2009.10.003
Itakura K, 2012, ADV MATER RES-SWITZ, V566, P348, DOI
10.4028/www.scientific.net/AMR.566.348
JEONG KS, 1995, COMPOS STRUCT, V32, P557, DOI 10.1016/0263-8223(95)80024-7
Koike H., 2012, POLYM POLYM COMPOS, V20, P127
Mizobe K, 2013, ADV MATER RES-SWITZ, V683, P90, DOI
10.4028/www.scientific.net/AMR.683.90
Mizobe K, 2012, ADV MATER RES-SWITZ, V457-458, P557, DOI
10.4028/www.scientific.net/AMR.457-458.557
Qiao HB, 2007, TRIBOL INT, V40, P105, DOI 10.1016/j.triboint.2006.02.069
Sakamoto H, 2009, INT J HUM ROBOT, V6, P565, DOI 10.1142/S0219843609001917
VOORT JV, 1995, WEAR, V181, P212, DOI 10.1016/0043-1648(95)90026-8
Zhang G, 2010, WEAR, V268, P893, DOI 10.1016/j.wear.2009.12.001
NR 15
TC 1
Z9 1
U1 0
U2 13
PU MANEY PUBLISHING
PI LEEDS
PA STE 1C, JOSEPHS WELL, HANOVER WALK, LEEDS LS3 1AB, W YORKS, ENGLAND
SN 1432-8917
EI 1433-075X
J9 MATER RES INNOV
JI Mater. Res. Innov.
PD MAR
PY 2014
VL 18
SU 1
BP 38
EP 43
DI 10.1179/1432891713Z.000000000363
PG 6
WC Materials Science, Multidisciplinary
SC Materials Science
GA AI6GO
UT WOS:000336970300010
DA 2018-01-22
ER

PT J
AU Ayusawa, K
Venture, G
Nakamura, Y
AF Ayusawa, Ko
Venture, Gentiane
Nakamura, Yoshihiko
TI Identifiability and identification of inertial parameters using the
underactuated base-link dynamics for legged multibody systems
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE inertial parameters; identification; identifiability; Humanoid robots;
free-floating systems
ID BIPED LOCOMOTION; HUMANOID ROBOT; WALKING; TRAJECTORIES; MANIPULATORS;
COMPUTATION; KINEMATICS; EXCITATION; MODELS; MOTION
AB In this paper we study the dynamics of multibody systems with the base not
permanently fixed to the inertial frame, or more specifically legged systems such
as humanoid robots and humans. The issue is to be approached in terms of the
identification theory developed in the field of robotics. The under-actuated base-
link which characterizes the dynamics of legged systems is the focus of this work.
The useful mechanical feature to analyze the dynamics of legged system is proven:
the set of inertial parameters appearing in the equation of motion of the under-
actuated base is equivalent to the set in the equations of the whole body. In
particular, when no external force acts on the system, all of the parameters in the
set except the total mass are generally identifiable only from the observation of
the free-flying motion. We also propose a method to identify the inertial
parameters based on the dynamics of the under-actuated base. The method does not
require the measurement of the joint torques. Neither the joint frictions nor the
actuator dynamics need to be considered. Even when the system has no external
reaction force, the method is still applicable. The method has been tested on both
a humanoid robot and a human, and the experimental results are shown.
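A minimal sketch of the least-squares step behind this kind of identification is given below: because the base-link equations are linear in the inertial parameters, stacking many samples of a regressor matrix against the measured base wrench lets the parameters be estimated without joint-torque data. The regressor here is a random stand-in, not the actual legged-system regressor.

# Hedged sketch of the stacked least-squares estimation Y * theta = F.
import numpy as np

rng = np.random.default_rng(3)
n_params, n_samples = 10, 400
theta_true = rng.normal(0, 1, n_params)          # unknown inertial parameters

# Stacked regressors (6 base-wrench rows per sample) and noisy "measured"
# base wrenches; in practice Y comes from q, dq, ddq of the whole body.
Y_stack = rng.normal(0, 1, (6 * n_samples, n_params))
F_stack = Y_stack @ theta_true + rng.normal(0, 0.05, 6 * n_samples)

theta_hat, *_ = np.linalg.lstsq(Y_stack, F_stack, rcond=None)
print("max parameter error:", np.abs(theta_hat - theta_true).max())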
C1 [Ayusawa, Ko; Nakamura, Yoshihiko] Univ Tokyo, Dept Mechanoinformat, Bunkyo Ku,
Tokyo 1138656, Japan.
[Venture, Gentiane] Tokyo Univ Agr & Technol, Dept Mech Syst Engn, Tokyo, Japan.
RP Ayusawa, K (reprint author), Univ Tokyo, Dept Mechanoinformat, Bunkyo Ku, 7-3-1
Hongo, Tokyo 1138656, Japan.
EM ayusawa@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014; Venture, Gentiane/E-7060-2013
OI Nakamura, Yoshihiko/0000-0001-7162-5102; Venture,
Gentiane/0000-0001-7767-4765
FU Japan Society for the Promotion of Science for Young Scientists
[22-7985]; Japan Society for the Promotion of Science [20220001];
Special Coordination Funds for Promoting Science and Technology: "IRT
Foundation to Support Man and Aging Society"
FX The first author was supported by a Grant-in-Aid for Scientific Research
(grant number 22-7985) for Research Fellowships of the Japan Society for
the Promotion of Science for Young Scientists. This research was
supported by Category S of Grant-in-Aid for Scientific Research (grant
number 20220001) of the Japan Society for the Promotion of Science, and
the Special Coordination Funds for Promoting Science and Technology:
"IRT Foundation to Support Man and Aging Society".
CR ATKESON CG, 1986, INT J ROBOT RES, V5, P101, DOI 10.1177/027836498600500306
Ayusawa K, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2854, DOI
10.1109/IROS.2008.4650614
Baerlocher P., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P2557, DOI 10.1109/ROBOT.2000.846413
BELLMAN R, 1970, Mathematical Biosciences, V7, P329, DOI 10.1016/0025-
5564(70)90132-X
Cheng G, 2006, P IEEE RAS INT C HUM, P182
DELP SL, 1995, COMPUT BIOL MED, V25, P21, DOI 10.1016/0010-4825(95)98882-E
DUBOWSKY S, 1993, IEEE T ROBOTIC AUTOM, V9, P531, DOI 10.1109/70.258046
Fujimoto Y, 1998, IEEE INT CONF ROBOT, P2030, DOI 10.1109/ROBOT.1998.680613
FURUSHO J, 1986, J DYN SYST-T ASME, V108, P111, DOI 10.1115/1.3143752
GAUTIER M, 1992, INT J ROBOT RES, V11, P362, DOI 10.1177/027836499201100408
GAUTIER M, 1990, P IEEE INT C ROB AUT, V2, P1020
GOLUB GH, 1980, SIAM J NUMER ANAL, V17, P883, DOI 10.1137/0717073
Golub G.H, 1983, MATRIX COMPUTATIONS
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
KADABA MP, 1990, J ORTHOPAED RES, V8, P383, DOI 10.1002/jor.1100080310
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S., 2003, P IEEE RSJ INT C INT, V2, P1644
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kato I., 1973, BIOMECHANISM, V2, P173
Kawasaki H., 1991, Proceedings IECON '91. 1991 International Conference on
Industrial Electrlnics, Control and Instrumentation (Cat. No.91CH2976-9), P1100,
DOI 10.1109/IECON.1991.239286
KHALIL W, 1995, INT J ROBOT RES, V14, P112, DOI 10.1177/027836499501400202
Khalil W., 2002, MODELING IDENTIFICAT
Khalil W, 2007, IEEE INT CONF ROBOT, P4943, DOI 10.1109/ROBOT.2007.364241
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
Landau L. D., 1976, COURSE THEORETICAL P, V1
LIU G, 1998, P IEEE INT C ROB AUT, P3316
MAYEDA H, 1990, IEEE T ROBOTIC AUTOM, V6, P312, DOI 10.1109/70.56663
Mayeda H, 1984, P IFAC 9 WORLD C, V2, P74
MIYAZAKI F, 1980, J DYN SYST-T ASME, V102, P233, DOI 10.1115/1.3149608
MUROTSU Y, 1994, J GUID CONTROL DYNAM, V17, P488, DOI 10.2514/3.21225
Nakamura Y, 2005, IEEE T ROBOT, V21, P58, DOI 10.1109/TRO.2004.833798
NAKAMURA Y, 1993, IEEE T ROBOTIC AUTOM, V9, P499, DOI 10.1109/70.246062
Nakamura Y, 2000, IEEE T ROBOTIC AUTOM, V16, P124, DOI 10.1109/70.843167
Nakanishi J, 2008, INT J ROBOT RES, V27, P737, DOI 10.1177/0278364908091463
Park KJ, 2006, ROBOTICA, V24, P625, DOI 10.1017/S0263574706002712
Slotione J.-J.E., 1991, APPL NONLINEAR CONTR
SUGIHARA T, 2005, P IEEE RSJ INT C INT, P2869
Sugihara T., 2002, P IEEE INT C ROB AUT, V2, P1404, DOI DOI
10.1109/ROBOT.2002.1014740
Sujan VA, 2003, IEEE-ASME T MECH, V8, P215, DOI 10.1109/TMECH.2003.812830
Swevers J, 1997, IEEE T ROBOTIC AUTOM, V13, P730, DOI 10.1109/70.631234
Teran J, 2005, IEEE T VIS COMPUT GR, V11, P317, DOI 10.1109/TVCG.2005.42
VENTURE G, 2008, P CISM IFTOMM S ROB, P301
Venture G., 2009, P IEEE RAS INT C ROB, P1226
Venture G, 2006, IEEE T INTELL TRANSP, V7, P349, DOI 10.1109/TITS.2006.880620
Venture G, 2009, INT J ROBOT RES, V28, P1322, DOI 10.1177/0278364909103786
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Yoshida K., 1995, P 7 INT S ROB RES, P100
Yoshida K., 2002, P AIAA GUID NAV CONT
NR 48
TC 22
Z9 22
U1 0
U2 16
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD MAR
PY 2014
VL 33
IS 3
BP 446
EP 468
DI 10.1177/0278364913495932
PG 23
WC Robotics
SC Robotics
GA AG6SK
UT WOS:000335548500005
DA 2018-01-22
ER

PT J
AU Ugurlu, B
Kawamura, A
AF Ugurlu, Barkan
Kawamura, Atsuo
TI On the Backward Hopping Problem of Legged Robots
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Angular momentum; backward hopping; humanoid; jumping robot; legged
locomotion
AB When realizing jumping locomotion on legged robots, trajectories that are
generated without assessing angular momentum information are found to be prone to
backward hopping motions, due to the undesired body rotations developed during the
support phase, prior to liftoff. In this letter, we succinctly discuss the
underlying characteristics of such an undesired motion and propose a possible
solution that is based on our previous work. In order to validate our claims, we
conducted vertical jumping experiments on the actual bipedal robot MARI-3. As a
result, the robot is able to exhibit dynamically balanced jumping motions without
any backward hopping motion.
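As a hedged, planar illustration of the kind of check this motivates, the sketch below computes the whole-body angular momentum about the centre of mass at liftoff and flags trajectories whose residual momentum would rotate the body during flight and bias the landing backwards; the link data and threshold are invented for illustration.

# Planar (2D) centroidal angular-momentum check at liftoff.
import numpy as np

# Each entry: mass [kg], CoM position [m], CoM velocity [m/s], inertia, omega
links = [
    dict(m=20.0, c=np.array([0.00, 0.90]), v=np.array([0.0, 1.8]), I=1.2, w=0.05),
    dict(m=5.0,  c=np.array([0.05, 0.45]), v=np.array([0.1, 1.5]), I=0.1, w=0.40),
]

M = sum(l["m"] for l in links)
com = sum(l["m"] * l["c"] for l in links) / M

def cross2(a, b):                 # z-component of the 2D cross product
    return a[0] * b[1] - a[1] * b[0]

L = sum(l["m"] * cross2(l["c"] - com, l["v"]) + l["I"] * l["w"] for l in links)
print(f"centroidal angular momentum at liftoff: {L:.3f} kg m^2/s")
if abs(L) > 0.05:                 # threshold is an illustrative assumption
    print("warning: residual momentum will rotate the body in flight")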
C1 [Ugurlu, Barkan] Adv Telecommun Res Inst Int ATR, Dept Brain Robot Interface,
Computat Neurosci Labs, Kyoto 6190288, Japan.
[Kawamura, Atsuo] Yokohama Natl Univ, Grad Sch Engn, Yokohama, Kanagawa 2408501,
Japan.
RP Ugurlu, B (reprint author), Adv Telecommun Res Inst Int ATR, Dept Brain Robot
Interface, Computat Neurosci Labs, Kyoto 6190288, Japan.
EM barkanu@ieee.org; kawamura@ynu.ac.jp
RI kawamura, atsuo/L-8158-2014; Ugurlu, Barkan/A-4793-2015
OI kawamura, atsuo/0000-0002-3085-2314; Ugurlu, Barkan/0000-0002-9124-7441
CR Erbatur K, 2009, IEEE T IND ELECTRON, V56, P835, DOI 10.1109/TIE.2008.2005150
Kajita S, 2007, IEEE INT CONF ROBOT, P3963, DOI 10.1109/ROBOT.2007.364087
Kang Y, 2012, IEEE T IND ELECTRON, V59, P1050, DOI 10.1109/TIE.2011.2162709
Lee SH, 2012, AUTON ROBOT, V33, P399, DOI 10.1007/s10514-012-9294-z
Maeda T., 2009, THESIS YOKOHAMA NAT
Motoi N., 2014, IEEE T IND ELECT, V61, P1009
Smadi IA, 2012, IEEE T IND ELECTRON, V59, P2208, DOI 10.1109/TIE.2011.2148687
STRIKWERDA JC, 1989, FINITE DIFFERENCE SC, P74
Suzuki K., 2006, THESIS YOKOHAMA NAT
Ugurlu B, 2012, IEEE T ROBOT, V28, P1406, DOI 10.1109/TRO.2012.2210478
Ugurlu B, 2010, IEEE T IND ELECTRON, V57, P1701, DOI 10.1109/TIE.2009.2032439
NR 11
TC 9
Z9 9
U1 1
U2 80
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD MAR
PY 2014
VL 61
IS 3
BP 1632
EP 1634
DI 10.1109/TIE.2013.2259776
PG 3
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 208ZA
UT WOS:000323720400046
DA 2018-01-22
ER

PT J
AU Nam, Y
Koo, B
Cichocki, A
Choi, S
AF Nam, Yunjun
Koo, Bonkon
Cichocki, Andrzej
Choi, Seungjin
TI GOM-Face: GKP, EOG, and EMG-Based Multimodal Interface With Application
to Humanoid Robot Control
SO IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING
LA English
DT Article
DE Electromyogram (EMG); electrooculogram (EOG); glossokinetic potentials
(GKP); human-machine interface; multimodal interface
ID EEG; ELECTROOCULOGRAPHY
AB We present a novel human-machine interface, called GOM-Face, and its application
to humanoid robot control. The GOM-Face bases its interfacing on three electric
potentials measured on the face: 1) glossokinetic potential (GKP), which involves
the tongue movement; 2) electrooculogram (EOG), which involves the eye movement; 3)
electromyogram, which involves the teeth clenching. Each potential has been
individually used for assistive interfacing to provide persons with limb motor
disabilities or even complete quadriplegia an alternative communication channel.
However, to the best of our knowledge, GOM-Face is the first interface that
exploits all these potentials together. We resolved the interference between GKP
and EOG by extracting discriminative features from two covariance matrices: a
tongue-movement-only data matrix and an eye-movement-only data matrix. With the
feature extraction method, GOM-Face can detect four kinds of horizontal tongue or
eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the
applicability of the GOM-Face to humanoid robot control: users were able to
communicate with the robot by selecting from a predefined menu using the eye and
tongue movements.
C1 [Nam, Yunjun; Koo, Bonkon] Pohang Univ Sci & Technol, Sch Interdisciplinary
Biosci & Bioengn, Pohang 790784, South Korea.
[Cichocki, Andrzej] RIKEN, Brain Sci Inst, Wako, Saitama 3510198, Japan.
[Cichocki, Andrzej] Polish Acad Sci, Syst Res Inst, PL-01447 Warsaw, Poland.
[Choi, Seungjin] Pohang Univ Sci & Technol, Dept Comp Sci & Engn, Pohang 790784,
South Korea.
[Choi, Seungjin] Pohang Univ Sci & Technol, Div IT Convergence Engn, Pohang
790784, South Korea.
RP Choi, S (reprint author), Pohang Univ Sci & Technol, Dept Comp Sci & Engn,
Pohang 790784, South Korea.
EM druid@postech.ac.kr; bkkoo@postech.ac.kr; a.cichocki@brain.riken.jp;
seungjin@postech.ac.kr
RI Cichocki, Andrzej/A-1545-2015
OI Cichocki, Andrzej/0000-0002-8364-7226
FU National Research Foundation of Korea [NRF-2010-0018828,
NRF-2010-0018829]; NRF World Class University Program [R31-10100]
FX This work was supported by the National Research Foundation of Korea
(NRF-2010-0018828 and NRF-2010-0018829), NRF World Class University
Program (R31-10100). A portion of this work was carried out while Y. Nam
was visiting the Lab for Advanced Brain Signal Processing, Brain Science
Institute, RIKEN, Wako, Japan. Asterisk indicates corresponding author.
CR Barea R, 2002, IEEE T NEUR SYS REH, V10, P209, DOI 10.1109/TNSRE.2002.806829
Blankertz B, 2008, IEEE SIGNAL PROC MAG, V25, P41, DOI 10.1109/MSP.2007.909009
Brewster SA, 1999, BEHAV INFORM TECHNOL, V18, P165, DOI 10.1080/014492999119066
Bulling A, 2009, J AMB INTEL SMART EN, V1, P157, DOI 10.3233/AIS-2009-0020
Chang C-C., 2011, ACM T INTELL SYST TE, V2, P27
Fisch B. J., 1999, FISCH SPEHLMANNS EEG
Fukunaga Keinosuke, 1990, INTRO STAT PATTERN R
Hart S. G., 1988, HUMAN MENTAL WORKLOA, V1, P139, DOI 10.1016/S0166-4115(08)62386-9
Huo XL, 2009, IEEE T BIO-MED ENG, V56, P1719, DOI 10.1109/TBME.2009.2018632
KLASS D, 1960, ELECTROEN CLIN NEURO, V12, P239
KOLES ZJ, 1991, ELECTROEN CLIN NEURO, V79, P440, DOI 10.1016/0013-4694(91)90163-
X
Krishnamurthy G., 2006, P IEEE INT S CIRC SY, P5551
Nam Y, 2012, IEEE T BIO-MED ENG, V59, P290, DOI 10.1109/TBME.2011.2174058
Nutt W, 1998, J MICROMECH MICROENG, V8, P155, DOI 10.1088/0960-1317/8/2/028
Oviatt S. L., 2003, HUM FAC ER, P286
Pfurtscheller Gert, 2010, Front Neurosci, V4, P30, DOI 10.3389/fnpro.2010.00003
Postelnicu CC, 2012, EXPERT SYST APPL, V39, P10857, DOI
10.1016/j.eswa.2012.03.007
Ramoser H, 2000, IEEE T REHABIL ENG, V8, P441, DOI 10.1109/86.895946
Sharma R, 1998, P IEEE, V86, P853, DOI 10.1109/5.664275
Tallgren P., 2006, THESIS HELSINKI U TE
Tsui CSL, 2007, 2007 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS,
VOLS 1-5, P1266
Tyner FS, 1983, FUNDAMENTALS EEG TEC, V1
Usakli AB, 2009, IEEE ENG MED BIO, P543, DOI 10.1109/IEMBS.2009.5333742
Yagi T, 2006, IEEE SYS MAN CYBERN, P3222, DOI 10.1109/ICSMC.2006.384613
Zander TO, 2012, J NEURAL ENG, V9, DOI 10.1088/1741-2560/9/1/016003
Zhao SD, 2007, CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1 AND 2,
P1395
NR 26
TC 14
Z9 17
U1 1
U2 16
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0018-9294
EI 1558-2531
J9 IEEE T BIO-MED ENG
JI IEEE Trans. Biomed. Eng.
PD FEB
PY 2014
VL 61
IS 2
BP 453
EP 462
DI 10.1109/TBME.2013.2280900
PG 10
WC Engineering, Biomedical
SC Engineering
GA AD5CB
UT WOS:000333268000025
PM 24021635
DA 2018-01-22
ER

PT J
AU Murata, S
Arie, H
Ogata, T
Sugano, S
Tani, J
AF Murata, Shingo
Arie, Hiroaki
Ogata, Tetsuya
Sugano, Shigeki
Tani, Jun
TI Learning to generate proactive and reactive behavior using a dynamic
neural network model with time-varying variance prediction mechanism
SO ADVANCED ROBOTICS
LA English
DT Article
DE proactive behavior; reactive behavior; recurrent neural network;
humanoid robot; imitation
ID HUMANOID ROBOT; MOTOR CONTROL; IMITATION; SYSTEMS; TASK
AB This paper discusses a possible neurodynamic mechanism that enables self-
organization of two basic behavioral modes, namely a 'proactive mode' and a
'reactive mode,' and of autonomous switching between these modes depending on the
situation. In the proactive mode, actions are generated based on an internal
prediction, whereas in the reactive mode actions are generated in response to
sensory inputs in unpredictable situations. In order to investigate how these two
behavioral modes can be self-organized and how autonomous switching between the two
modes can be achieved, we conducted neurorobotics experiments by using our recently
developed dynamic neural network model, which has the capability to learn to predict
the time-varying variance of observable variables. In a set of robot experiments
under various conditions, the robot was required to imitate another's movements
consisting of alternating predictable and unpredictable patterns. The experimental
results showed that the robot controlled by the neural network model was able to
proactively imitate predictable patterns and reactively follow unpredictable
patterns by autonomously switching its behavioral modes. Our analysis revealed that
the variance prediction mechanism can lead to self-organization of these abilities with
sufficient robustness and generalization capabilities.
C1 [Murata, Shingo] Waseda Univ, Grad Sch Creat Sci & Engn, Dept Modern Mech Engn,
Shinjuku Ku, Tokyo 1698555, Japan.
[Arie, Hiroaki; Ogata, Tetsuya; Sugano, Shigeki] Waseda Univ, Fac Sci & Engn,
Shinjuku Ku, Tokyo 1698555, Japan.
[Tani, Jun] Korea Adv Inst Sci & Technol, Dept Elect Engn, Taejon 305701, South
Korea.
RP Tani, J (reprint author), Korea Adv Inst Sci & Technol, Dept Elect Engn, 291
Daehak Ro 373-1 Guseong Dong, Taejon 305701, South Korea.
EM tani@kaist.ac.kr
RI Tani, Jun/H-3681-2012
OI Ogata, Tetsuya/0000-0001-7015-0379
FU Institute for Infocomm Research, Agency for Science, Technology and
Research (A*STAR) Singapore [AH/OCL/1082/0111/I2R]; JSPS [25220005]
FX A part of this work was supported by Institute for Infocomm Research,
Agency for Science, Technology and Research (A*STAR) Singapore [grant
number AH/OCL/1082/0111/I2R]; JSPS Grant-in-Aid for Scientific Research
(S) [grant number 25220005].
CR Braver TS, 2012, TRENDS COGN SCI, V16, P106, DOI 10.1016/j.tics.2011.12.010
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Butz M. V., 2007, ANTICIPATORY BEHAV A
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Calinon S, 2009, ADV ROBOTICS, V23, P2059, DOI 10.1163/016918609X12529294461843
Carr L, 2003, P NATL ACAD SCI USA, V100, P5497, DOI 10.1073/pnas.0935845100
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Falck-Ytter T, 2006, NAT NEUROSCI, V9, P878, DOI 10.1038/nn1729
Flanagan JR, 2003, NATURE, P424
Friston KJ, 2011, BIOL CYBERN, V104, P137, DOI 10.1007/s00422-011-0424-z
Friston KJ, 2009, TRENDS COGN SCI, V13, P293, DOI 10.1016/j.tics.2009.04.005
Grafton ST, 2001, J COGNITIVE NEUROSCI, V13, P217, DOI 10.1162/089892901564270
Iacoboni M, 1999, SCIENCE, V286, P2526, DOI 10.1126/science.286.5449.2526
Ito M, 2004, ADAPT BEHAV, V12, P93, DOI 10.1177/105971230401200202
Ito M, 2006, NEURAL NETWORKS, V19, P323, DOI 10.1016/j.neunet.2006.02.007
Johansson RS, 2001, J NEUROSCI, V21, P6917
JORDAN MI, 1992, COGNITIVE SCI, V16, P307, DOI 10.1016/0364-0213(92)90036-T
MELTZOFF AN, 1977, SCIENCE, V198, P75, DOI 10.1126/science.198.4312.75
Murata S, 2013, IEEE T AUTON MENT DE, V5, P298, DOI 10.1109/TAMD.2013.2258019
Namikawa J., 2013, ADV COGNITIVE NEUROD, P615
PEW RW, 1974, BRAIN RES, V71, P393, DOI 10.1016/0006-8993(74)90983-4
Piaget J, 1962, PLAY DREAMS IMITATIO
RUMELHART DE, 1986, NATURE, V323, P533, DOI 10.1038/323533a0
Schaal S, 1999, TRENDS COGN SCI, V3, P233
Tani J, 1996, IEEE T SYST MAN CY B, V26, P421, DOI 10.1109/3477.499793
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
Williams RJ, 1989, NEURAL COMPUT, V1, P270, DOI 10.1162/neco.1989.1.2.270
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yamashita Y, 2008, PLOS COMPUT BIOL, V4, DOI 10.1371/journal.pcbi.1000220
NR 29
TC 3
Z9 3
U1 0
U2 2
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2014
VL 28
IS 17
BP 1189
EP 1203
DI 10.1080/01691864.2014.916628
PG 15
WC Robotics
SC Robotics
GA AN2DL
UT WOS:000340393900005
DA 2018-01-22
ER

PT J
AU Kamide, H
Kawabe, K
Shigemi, S
Arai, T
AF Kamide, Hiroko
Kawabe, Koji
Shigemi, Satoshi
Arai, Tatsuo
TI Relationship between familiarity and humanness of robots -
quantification of psychological impressions toward humanoid robots
SO ADVANCED ROBOTICS
LA English
DT Article
DE humanoids; familiarity; humanness; psychological impressions
ID UNCANNY VALLEY; BODY
AB We investigate the relationships between familiarity and humanness of humanoids
from a psychological perspective. Two thousand six hundred and twenty-four people
from their 10s to their 70s in Japan evaluated the familiarity and humanness of 11
different humanoids, based on psychological scales developed for the evaluation of
humanoids. Results showed that robot-like robots with round shapes are rated higher
in familiarity and lower in humanness than robot-like robots with a mechanical
appearance. Androids with totally human-like appearances are rated higher in
humanness and familiarity than robots that are only partly human-like, while one
android was rated high in humanness but low in familiarity. Relatedly, repulsion,
which reflects a value placed on the existence of humanoids, is proportionate to
humanness: higher humanness was related to higher repulsion. In sum, the
relationship between familiarity and humanness is curvilinear, while repulsion and
humanness show a linear relationship. We discuss the importance of the appearance
of robots and of interaction with robots for introducing robots to ordinary people.
C1 [Kamide, Hiroko; Arai, Tatsuo] Osaka Univ, Grad Sch Engn Sci, Toyonaka, Osaka
5608531, Japan.
[Kawabe, Koji; Shigemi, Satoshi] Honda Res & Dev Co Ltd, Wako, Saitama 3510188,
Japan.
RP Kamide, H (reprint author), Osaka Univ, Grad Sch Engn Sci, 1-3 Machikaneyamacho,
Toyonaka, Osaka 5608531, Japan.
EM kamide@arai-lab.sys.es.osaka-u.ac.jp
FU JSPS KAKENHI [23730579]; MEXT [25119505]
FX This work was supported by JSPS KAKENHI [grant number 23730579] and by
MEXT Grant-in-Aid for Scientific Research on Innovative Areas
"Constructive Developmental Science" [grant number 25119505].
CR Bartneck C, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P553, DOI 10.1109/ROMAN.2008.4600724
Chikaraishi T, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P326, DOI
10.1109/IROS.2008.4650899
Goetz J, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P55
Inoue K., 2007, P 13 INT C ADV ROB K, P1005
Kakio M, 2008, J ROB SOC JPN, V26, P485
Kamide H, 2012, P 7 ANN ACM IEEE INT, P49
Kamide H, 2013, ADV ROBOTICS, V27, P3, DOI 10.1080/01691864.2013.751159
Kamide H, 2012, 2012 IEEE WORKSHOP ON ADVANCED ROBOTICS AND ITS SOCIAL IMPACTS
(ARSO), P40, DOI 10.1109/ARSO.2012.6213396
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kaneko K, 2009, P IEEE RAS INT C HUM, P7, DOI 10.1109/ICHR.2009.5379537
Kawamura S., 2001, J ROB SOC JPN, V19, P810
KUNSTWILSON WR, 1980, SCIENCE, V207, P557, DOI 10.1126/science.7352271
Liu CY, 2012, METHODS MOL BIOL, V836, P285, DOI 10.1007/978-1-61779-498-8_18
MacDorman KF, 2006, INTERACT STUD, V7, P297, DOI 10.1075/is.7.3.03mac
MacDorman K. F., 2006, ICCS COGSCI 2006 LON
MacDorman K. F., 2005, P 27 ANN M COGN SCI
Minato T., 2009, P WORKSH SYN INT APP
Minato T, 2007, IEEE-RAS INT C HUMAN, P557, DOI 10.1109/ICHR.2007.4813926
MORELAND RL, 1977, J PERS SOC PSYCHOL, V35, P191, DOI 10.1037/0022-3514.35.4.191
Mori M., 1970, ENERGY, V7, P33, DOI 10.1109/MRA.2012.2192811
Nakano Y.I., 2003, P 41 ANN M ASS COMP, P553
NAKATA T, 1997, J JAPAN ROBOTICS SOC, V15, P1068
Newcomb Theodore M, 1961, ACQUAINTANCE PROCESS
Nomura T, 2012, P IEEE RAS INT C HUM, P242
Ogawa K., 2009, P 18 INT S ROB HUM I, P553
Nass C., 1996, MEDIA EQUATION PEOPL
Research Committee in Human Friendly Robot, 1998, J ROBOTICS SOC JAPAN, V16,
P288
Seyama J, 2007, PRESENCE-TELEOP VIRT, V16, P337, DOI 10.1162/pres.16.4.337
Shigemi S, HONDA TECH REV, V18, P38
Steckenfinger SA, 2009, P NATL ACAD SCI USA, V106, P18362, DOI
10.1073/pnas.0910063106
Tojo T, 2000, IEEE SYS MAN CYBERN, P858, DOI 10.1109/ICSMC.2000.885957
Ujiie Y, 2006, P INT S SYST HUM SCI
Yamamoto D, 2005, P 4 IEEE INT C DEV L, P49
NR 33
TC 2
Z9 2
U1 0
U2 6
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2014
VL 28
IS 12
BP 821
EP 832
DI 10.1080/01691864.2014.893837
PG 12
WC Robotics
SC Robotics
GA AJ3BZ
UT WOS:000337541400002
DA 2018-01-22
ER

PT J
AU Mohamed, Z
Kitani, M
Capi, G
AF Mohamed, Zulkifli
Kitani, Mitsuki
Capi, Genci
TI Adaptive arm motion generation of humanoid robot operating in dynamic
environments
SO INDUSTRIAL ROBOT-AN INTERNATIONAL JOURNAL
LA English
DT Article
DE Arm motion generation; Mobile humanoid robot; Multi-objective
evolutionary algorithm
AB Purpose - The purpose of this paper is to compare the performance of the robot
arm motion generated by neural controllers in simulated and real robot experiments.
Design/methodology/approach - The arm motion generation is formulated as an
optimization problem. The neural controllers generate the robot arm motion in
dynamic environments, optimizing three different objective functions: minimum
execution time, minimum distance and minimum acceleration. In addition, the robot
motion generation in the presence of obstacles is also considered.
Findings - The robot is able to adapt its arm motion generation based on the
specific task, reaching the goal position in simulated and experimental tests. The
same neural controller can be employed to generate the robot motion for a wide
range of initial and goal positions.
Research limitations/implications - The generated motions yield good results in
both simulation and experimental environments.
Practical implications - The robot motion is generated based on three different
objective functions that are simultaneously optimized. Therefore, the humanoid
robot can perform a wide range of tasks in real-life environments, by selecting the
appropriate motion.
Originality/value - A new method for adaptive arm motion generation of a mobile
humanoid robot operating in dynamic human and industrial environments.
C1 [Mohamed, Zulkifli; Kitani, Mitsuki; Capi, Genci] Toyama Univ, Dept Elect &
Elect Syst Engn, Toyama 930, Japan.
[Mohamed, Zulkifli] Univ Teknol MARA, Fac Mech Engn, Shah Alam, Selangor,
Malaysia.
RP Capi, G (reprint author), Toyama Univ, Dept Elect & Elect Syst Engn, Toyama 930,
Japan.
EM capi@eng.u-toyama.ac.jp
CR Albers Albert, 2006, IEEE RAS INT C HUM R, P308
Capi G, 2002, IND ROBOT, V29, P252, DOI 10.1108/01439910210425559
Capi G, 2007, IEEE T ROBOT, V23, P1225, DOI 10.1109/TRO.2007.910773
Coello C. A. C., 1998, KNOWL INF SYST, V1, P210
Deb K., 2009, MULTIOBJECTIVE OPTIM
Dias AHF, 2002, IEEE T MAGN, V38, P1133, DOI 10.1109/20.996290
Hamill J, 2003, BIOMECHANICAL BASIS
Iwata Hiroyasu, 2009, 2009 IEEE International Conference on Robotics and
Automation (ICRA), P580, DOI 10.1109/ROBOT.2009.5152702
Kim J, 2010, IND ROBOT, V37, P51, DOI 10.1108/01439911011009957
Liu Y., 2011, P 2011 6 INT FOR STR, V2, P410
Mohamed Z., 2012, PROCEDIA ENG, V41, P345
Montana DJ, 1989, MACH LEARN, V1, P762
Pires EJS, 2007, APPL SOFT COMPUT, V7, P659, DOI 10.1016/j.asoc.2005.06.009
Ramabalan S., 2008, INT J ADV MANUF TECH, V41, P580
Stuckler J., 2009, 4 EUR C MOB ROB ECMR, P87
Ueda T., 2006, P 32 ANN C IEEE IND, V1, P3058
Ur-Rehman R, 2010, MECH MACH THEORY, V45, P1125, DOI
10.1016/j.mechmachtheory.2010.03.008
Vahrenkamp N., 2008, 2008 IEEE RSJ INT C, P22
Vahrenkamp N., 2011, IEEE INT C ROB AUT, P715
Yu QX, 2012, IND ROBOT, V39, P271, DOI 10.1108/01439911211217107
NR 20
TC 1
Z9 1
U1 0
U2 2
PU EMERALD GROUP PUBLISHING LIMITED
PI BINGLEY
PA HOWARD HOUSE, WAGON LANE, BINGLEY BD16 1WA, W YORKSHIRE, ENGLAND
SN 0143-991X
EI 1758-5791
J9 IND ROBOT
JI Ind. Robot
PY 2014
VL 41
IS 2
BP 124
EP 134
DI 10.1108/IR-10-2013-409
PG 11
WC Engineering, Industrial; Robotics
SC Engineering; Robotics
GA AF5AW
UT WOS:000334727400003
DA 2018-01-22
ER

PT J
AU Kim, MG
Suzuki, K
AF Kim, Min-Gyu
Suzuki, Kenji
TI Comparative Study of Human Behavior in Card Playing with a Humanoid
Playmate
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Human-robot interaction; Social playmate; Poker game; Humanoid
ID POKER; ROBOT; CUES; GAME
AB This paper describes a study of human behaviors in a poker game with a
game-playing humanoid robot. The betting decisions and nonverbal behaviors of human
players were analyzed in human-human and human-humanoid poker games. It was found
that card hand strength is related to betting strategy and nonverbal interaction.
Moreover, engagement in the poker game with the humanoid was assessed through a
questionnaire and by measuring nonverbal behaviors during playtime and breaktime.
The findings of this study contribute not only to the design of socially
interactive game-playing robots, but also to a theoretical approach to realizing
robots that behave the way humans do in game playing.
C1 [Kim, Min-Gyu] Univ Tsukuba, Dept Intelligent Interact Technol, Tsukuba,
Ibaraki, Japan.
[Suzuki, Kenji] Univ Tsukuba, Fac Engn Informat & Syst, Tsukuba, Ibaraki, Japan.
RP Kim, MG (reprint author), Univ Tsukuba, Dept Intelligent Interact Technol,
Tennodai 1-1-1, Tsukuba, Ibaraki, Japan.
EM mingyu@ai.iit.tsukuba.ac.jp; kenji@ieee.org
FU Global COE Program on "Cybernetics: fusion of human, machine, and
information systems" by MEXT, Japan
FX This work is partially supported by Grant-in-Aid for Scientific Research
and Global COE Program on "Cybernetics: fusion of human, machine, and
information systems" by MEXT, Japan.
CR Bainbridge W. A., 2008, 17 IEEE INT S ROB HU
Bainbridge WA, 2011, INT J SOC ROBOT, V3, P41, DOI 10.1007/s12369-010-0082-7
Berne E., 1996, GAMES PEOPLE PLAY BA
Billings D, 1998, FIFTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE
(AAAI-98) AND TENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICAL INTELLIGENCE
(IAAI-98) - PROCEEDINGS, P493
Caro M, 2003, CAROS BOOK POKER TEL
Dantam Neil, 2011, IEEE International Conference on Robotics and Automation,
P5463
DePaulo BM, 2003, PSYCHOL BULL, V129, P74, DOI 10.1037//0033-2909.129.1.74
Duffy BR, 2003, ROBOT AUTON SYST, V42, P177, DOI 10.1016/S0921-8890(02)00374-3
Ekman P, 2003, ANN NY ACAD SCI, V1000, P205
Ekman P., 1985, TELLING LIES CLUES D
FINDLER NV, 1977, COMMUN ACM, V20, P230, DOI 10.1145/359461.363617
Gu E, 2006, LECT NOTES ARTIF INT, V4133, P193
HAYANO DM, 1980, J COMMUN, V30, P113, DOI 10.1111/j.1460-2466.1980.tb01973.x
Kanda T, 2003, INT JOINT C ART INT, V18, P177
KELLEY HH, 1970, J PERS SOC PSYCHOL, V16, P66, DOI 10.1037/h0029849
KELLEY JF, 1984, ACM T OFF INF SYST, V2, P26, DOI 10.1145/357417.357420
Kim MG, 2010, LECT NOTES COMPUT SC, V6243, P9
KING GA, 1983, J PERS SOC PSYCHOL, V44, P140
Kovacs G, 2007, 4 HUNG C COMP GRAPH, P182
Lazzaro N., 2004, GAM DEV C
Lee KM, 2006, INT J HUM-COMPUT ST, V64, P962, DOI 10.1016/j.ijhcs.2006.05.002
Lombard M., 2000, 3 INT WORKSH PRES DE
Mahmud AA, 2010, COMPUT ENTERTAIN, V1, P147
Marquis S, 1994, 12 NAT C ART INT WOR, P11
Moriguchi Y, 2010, J EXP CHILD PSYCHOL, V107, P181, DOI
10.1016/j.jecp.2010.04.018
Mutlu B., 2006, 15 IEEE INT S ROB HU, P74
Pereira A, 2008, P 7 INT JOINT C AUT, P1253
Poels K., 2007, P 2007 C FUT PLAY, P83, DOI 10.1145/1328202.1328218
Tuyls K, 2008, P 20 BELG NETH C ART, P225
Seale DA, 2010, J BEHAV DECIS MAKING, V23, P335, DOI 10.1002/bdm.658
Sidner C. L., 2004, P 9 INT C INT US INT, P78, DOI 10.1145/964442.964458
Siler K, 2010, J GAMBL STUD, V26, P401, DOI 10.1007/s10899-009-9168-2
Sung M, 2005, THESIS MIT MEDIA LAB
Vrij A, 2000, J NONVERBAL BEHAV, V24, P239, DOI 10.1023/A:1006610329284
Wagner AR, 2009, IEEE INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE IN
ROBOTICS AND AUTOMATION, P46
Wallhoff F, 2009, LECT NOTE COMPUTER S, V5618, P108
Walters ML, 2005, IEEE INT WORKSH ROB, P347, DOI 10.1109/ROMAN.2005.1513803
NR 37
TC 2
Z9 2
U1 1
U2 4
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD JAN
PY 2014
VL 6
IS 1
BP 5
EP 15
DI 10.1007/s12369-013-0184-0
PG 11
WC Robotics
SC Robotics
GA AE1VZ
UT WOS:000333760700002
DA 2018-01-22
ER

PT J
AU Kamide, H
Takubo, T
Ohara, K
Mae, Y
Arai, T
AF Kamide, Hiroko
Takubo, Tomohito
Ohara, Kenichi
Mae, Yasushi
Arai, Tatsuo
TI Impressions of Humanoids: The Development of a Measure for Evaluating a
Humanoid
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Psychological evaluation; Humanoid; Psychological scale
ID LIKERT-SCALE; DIMENSIONS; PERCEPTION
AB In this study, we focus on what perspectives people use to perceive a humanoid,
and we explore the basic dimensions of a humanoid evaluation by ordinary people. We
also develop the psychological scale called PERNOD (PERception to humaNOiD), a
humanoid-oriented scale that is based on fundamental dimensions.
Several studies have investigated the psychological evaluation of robots
interacting with humans; however, almost all studies have methodological limitations
because few scales for the evaluation of psychological impressions of humanoids
have been developed thus far. This study uses the humanoid HRP-2, explores the
perspectives used to evaluate the robot, and investigates the importance of
developing a humanoid-oriented scale.
With free descriptions, 157 respondents provided impressions of a humanoid
robot, and psychologists categorized the qualitative data into several groups
according to psychological meanings of the impressions. Thereafter, the
descriptions were organized in the form of a Likert scale in order to develop
PERNOD, and 380 additional respondents evaluated a humanoid robot quantitatively
using PERNOD. The result of factor analysis indicated that PERNOD comprises five
basic dimensions for perceiving humanoid robots: Familiarity, Utility, Motion,
Controllability, and Toughness. Some dimensions of evaluation are similar for
humanoids and humans, while other dimensions are original to humanoids. The
reliability of each sub-scale of PERNOD is
statistically high; further, the usability of this scale is discussed.
C1 [Kamide, Hiroko; Mae, Yasushi; Arai, Tatsuo] Osaka Univ, Grad Sch Engn Sci,
Toyonaka, Osaka 5608531, Japan.
[Takubo, Tomohito] Osaka City Univ, Grad Sch Engn, Sumiyoshi Ku, Osaka 5588585,
Japan.
[Ohara, Kenichi] Meijo Univ, Fac Sci & Technol, Nagoya, Aichi 4688502, Japan.
RP Kamide, H (reprint author), Osaka Univ, Grad Sch Engn Sci, 1-3 Machikaneyamacho,
Toyonaka, Osaka 5608531, Japan.
EM kamide@arai-lab.sys.es.osaka-u.ac.jp
FU Global COE Program "Center of Human-Friendly Robotics Based on Cognitive
Neuroscience" of the Ministry of Education, Culture, Sports, Science and
Technology, Japan
FX H. Kamide thanks to Ikuo Daibo, Naoki Kugihara, Yuji Kanemasa, and
Junichi Taniguchi for their collection of some of data used in the main
study. This research was supported by Global COE Program "Center of
Human-Friendly Robotics Based on Cognitive Neuroscience" of the Ministry
of Education, Culture, Sports, Science and Technology, Japan.
CR Bartneck C, 2009, INT J SOC ROBOT, V1, P71, DOI 10.1007/s12369-008-0001-3
BENTLER PM, 1990, PSYCHOL BULL, V107, P238, DOI 10.1037//0033-2909.107.2.238
BENTLER PM, 1980, ANNU REV PSYCHOL, V31, P419, DOI
10.1146/annurev.ps.31.020180.002223
Bruner J. S., 1954, HDB SOCIAL PSYCHOL, V2, P634
CRONBACH LJ, 1955, PSYCHOL BULL, V52, P177, DOI 10.1037/h0044919
Cunningham EG, 2005, IASE ISI SATELLITE P
Cuttance P., 1987, STRUCTURAL MODELING, P241
Fiske ST, 2007, TRENDS COGN SCI, V11, P77, DOI 10.1016/j.tics.2006.11.005
De Silva PRS, 2013, INT J SOC ROBOT, V3, P349
Gockley R., 2006, P 1 ACM SIGCHI SIGAR, P168, DOI 10.1145/1121241.1121274
Hayashi F, 1978, B FACULTY ED NAGOYA, V25, P233
Inoue K, 2005, P SICE ANN C 2005, P3879
Inoue K., 2007, P 13 INT C ADV ROB K, P1005
Joreskog K. G., 1981, LISREL 5 ANAL LINEAR
Joreskog K. G., 1989, LISREL 7 GUIDE PROGR
Kamide H, 2010, P 2010 IEEE RSJ INT, P5830
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
KOMORITA SS, 1963, J SOC PSYCHOL, V61, P327, DOI 10.1080/00224545.1963.9919489
Kushida Kazutaka, 2005, P 4 INT JOINT C AUT, P23
LEVY LH, 1960, J ABNORM SOC PSYCH, V61, P21, DOI 10.1037/h0042208
Likert R., 1932, ARCH PSYCHOLOGY, V140, P44
Lohse M., 2008, J PHYS AGENTS, V2, P21
MATELL MS, 1972, J APPL PSYCHOL, V56, P506, DOI 10.1037/h0033601
MULAIK SA, 1964, J CONSULT PSYCHOL, V28, P506, DOI 10.1037/h0046800
Muto Y., 2009, P IEEE INT S ROB HUM
Nakagawa K, 2013, INT J SOC ROBOT, V5, P5, DOI 10.1007/s12369-012-0141-3
NAKAZATO H, 1976, JPN J PSYCHOL, V47, P139, DOI 10.4992/jjpsy.47.139
Negi S, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P719, DOI 10.1109/ROMAN.2008.4600752
NORMAN WT, 1963, J ABNORM PSYCHOL, V66, P574, DOI 10.1037/h0040291
Osgood C., 1957, MEASUREMENT MEANING
OSGOOD CE, 1969, J PERS SOC PSYCHOL, V12, P194, DOI 10.1037/h0027715
OSGOOD CE, 1964, AM ANTHROPOL, V66, P171
PASSINI FT, 1966, J PERS SOC PSYCHOL, V4, P44, DOI 10.1037/h0023519
Nass C., 1996, MEDIA EQUATION PEOPL
WANG E, 2006, P 1 ACM SIGCHI SIGAR, P180
Zitzewitz J, 2013, INT J SOC ROBOT, V3, P1875
NR 37
TC 4
Z9 4
U1 0
U2 5
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD JAN
PY 2014
VL 6
IS 1
BP 33
EP 44
DI 10.1007/s12369-013-0187-x
PG 12
WC Robotics
SC Robotics
GA AE1VZ
UT WOS:000333760700004
DA 2018-01-22
ER

PT J
AU Hashimoto, K
Hattori, K
Otani, T
Lim, HO
Takanishi, A
AF Hashimoto, Kenji
Hattori, Kentaro
Otani, Takuya
Lim, Hun-Ok
Takanishi, Atsuo
TI Foot Placement Modification for a Biped Humanoid Robot with Narrow Feet
SO SCIENTIFIC WORLD JOURNAL
LA English
DT Article
AB This paper describes a walking stabilization control for a biped humanoid robot
with narrow feet. Most humanoid robots have larger feet than human beings in order
to maintain stability during walking. If a robot's feet are as narrow as a human's,
it is difficult to realize a stable walk using conventional stabilization controls.
The proposed control modifies the foot placement according to the robot's attitude
angle. If the robot tends to fall down, the foot angle is modified about the roll
axis so that the swing foot contacts the ground horizontally, and the foot-landing
point is also shifted laterally to keep the robot from falling outward. To reduce
the foot-landing impact, a virtual compliance control is applied to the vertical
axis and to the roll and pitch axes of the foot. The proposed method is verified
through experiments with the biped humanoid robot WABIAN-2R. WABIAN-2R realized
knee-bent walking with feet 30 mm in breadth. Moreover, WABIAN-2R equipped with a
human-like foot mechanism mimicking the human foot arch structure realized stable
walking with knee-stretched, heel-contact, and toe-off motion.
C1 [Hashimoto, Kenji] Waseda Univ, Waseda Res Inst Sci & Engn, Shinjuku Ku, Tokyo
1620044, Japan.
[Hattori, Kentaro; Otani, Takuya] Waseda Univ, Grad Sch Sci & Engn, Shinjuku Ku,
Tokyo 1620044, Japan.
[Lim, Hun-Ok] Kanagawa Univ, Fac Engn, Kanagawa Ku, Yokohama, Kanagawa 2218686,
Japan.
[Lim, Hun-Ok; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Shinjuku Ku,
Tokyo 1628480, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Shinjuku Ku, Tokyo
1628480, Japan.
RP Hashimoto, K (reprint author), Waseda Univ, Waseda Res Inst Sci & Engn, Shinjuku
Ku, 41-304 17 Kikui Cho, Tokyo 1620044, Japan.
EM k-hashimoto@takanishi.mech.waseda.ac.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766
FU RoboSoM project from the European FP7 program [248366]; MEXT/JSPS
KAKENHI [24360099, 25220005]; Global COE Program "Global Robot
Academia," MEXT, Japan; SolidWorks Japan K.K.; QNX Software Systems;
DYDEN Corporation
FX This study was conducted as part of the Research Institute for Science
and Engineering, Waseda University, and as part of the humanoid project
at the Humanoid Robotics Institute, Waseda University. It was also
supported in part by RoboSoM project from the European FP7 program
(Grant agreement no. 248366), MEXT/JSPS KAKENHI (Grant nos. 24360099 and
25220005), Global COE Program "Global Robot Academia," MEXT, Japan,
SolidWorks Japan K.K., QNX Software Systems, and DYDEN Corporation,
which the authors thank for their financial and technical support.
High-Performance Physical Modeling and Simulation software MapleSim used
in this research was provided by Cybernet Systems Co., Ltd. (Vendor:
Waterloo Maple Inc.).
CR Cheng G, 2006, P IEEE RAS INT C HUM, P182
Hashimoto K., 2010, P 2010 IEEE RSJ INT, P2206
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kaneko K, 2009, P IEEE RAS INT C HUM, P7, DOI 10.1109/ICHR.2009.5379537
Kaneko K, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2471, DOI 10.1109/IROS.2008.4650604
Kouchi M., 2005, H16PRO287
MIURA H, 1984, INT J ROBOT RES, V3, P60, DOI 10.1177/027836498400300206
Miura K, 2011, IEEE INT C INT ROBOT, P4428, DOI 10.1109/IROS.2011.6048511
Ogura Y., 2006, P IEEE RSJ INT C INT, P3976
Ogura Y, 2006, IEEE INT CONF ROBOT, P76
Park Ill-Woo, 2005, P IEEE RAS INT C HUM, P321
Pfeiffer F, 2004, INT J HUM ROBOT, V1, P481
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
Tajima R., 2009, P IEEE INT C ROB AUT, P1571
TOWNSEND MA, 1985, J BIOMECH, V18, P21, DOI 10.1016/0021-9290(85)90042-9
NR 15
TC 3
Z9 3
U1 0
U2 11
PU HINDAWI PUBLISHING CORP
PI NEW YORK
PA 315 MADISON AVE 3RD FLR, STE 3070, NEW YORK, NY 10017 USA
SN 1537-744X
J9 SCI WORLD J
JI Sci. World J.
PY 2014
AR 259570
DI 10.1155/2014/259570
PG 9
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA AA1NZ
UT WOS:000330864300001
OA gold
DA 2018-01-22
ER

PT J
AU Tomimoto, M
AF Tomimoto, Makoto
TI Artificial Sweat for Humanoid Finger
SO JOURNAL OF BIONIC ENGINEERING
LA English
DT Article
DE humanoid finger; artificial sweat; tactile sensation; adhesion; friction
coefficient
ID HUMAN SKIN; FRICTION; PAPER; WATER; TRANSUDATION; LUBRICATION; TEXTILES;
CONTACT
AB To achieve favorable Frictional Tactile Sensation (FTS) for robot and humanoid
fingers, this report investigated the effects of human finger sweat on Friction
Coefficient (FC) and verified the effectiveness of artificial sweat on FTS for a
humanoid finger. The results show that the model sweat (salt and urea in water,
imitating real sweat) increases the FC of a real finger sliding on a highly
hygroscopic and rough surface (paper), whereas on a low-hygroscopic and smooth
surface (PMMA) the sweat forms a fluid film and decreases the FC, restricting severe
finger adhesion. Further, the film formation and capillary adhesion force of the
sweat are discussed. The experimental results with the artificial sweats (ethanol
and water) and a humanoid finger (silicone rubber skin with tactile sensors) verify
the effectiveness. The artificial sweat restricts severe adhesion (stick-slip
vibration) and enhances the cognitive capability of FTS.
C1 NSSL Ltd, Kita Ku, Kyoto 6038828, Japan.
RP Tomimoto, M (reprint author), NSSL Ltd, Kita Ku, 87-2 Ohkuri, Kyoto 6038828,
Japan.
EM nssl@sirius.ocn.ne.jp
CR Adams MJ, 2007, TRIBOL LETT, V26, P239, DOI 10.1007/s11249-007-9206-0
Adams MJ, 2013, J R SOC INTERFACE, V10, DOI 10.1098/rsif.2012.0467
Agache P., 2004, MEASURING SKIN
Akinli-Kogak S, 1997, THESIS ANKARA U TURK
Andre T, 2010, J NEUROPHYSIOL, V103, P402, DOI 10.1152/jn.00901.2009
Aspler J S, 1993, NORD PULP PAP RES J, V1, P68
BEKEY GA, 1990, DEXTROUS ROBOT HANDS, P136
Bhushan B., 1998, TRIBOLOGY ISSUES OPP
Charlaix E., 2010, HDB NANOPHYSICS PRIN
Czichos H, 1985, P 11 LEEDS LYON S TR, P135
Derler S, 2007, WEAR, V263, P1112, DOI 10.1016/j.wear.2006.11.031
Derler S, 2012, TRIBOL LETT, V45, P1, DOI 10.1007/s11249-011-9854-y
Gerhardt LC, 2008, J R SOC INTERFACE, V5, P1317, DOI 10.1098/rsif.2008.0034
Gerhardt LC, 2008, SKIN RES TECHNOL, V14, P77, DOI 10.1111/j.1600-
0846.2007.00264.x
HOYLAND RW, 1977, PAP TECHNOL IND, V18, P7
HOYLAND RW, 1976, PAP TECHNOL IND, V17, P216
JOHNSON SA, 1993, TRIBOLOGY S, V25, P663, DOI 10.1016/S0167-8922(08)70419-X
Mittal K. L., 2008, CONTACT ANGLE WETTAB
MONTGOMERY DJ, 1959, SOLID STATE PHYS, V9, P139, DOI 10.1016/S0081-
1947(08)60565-2
Muhr A H, 1991, VEHICLE TRIBOLOGY, V18, P195
Pasumarty SM, 2011, TRIBOL LETT, V44, P117, DOI 10.1007/s11249-011-9828-0
SALMINEN PJ, 1988, TAPPI J, V71, P195
Schenck F W J, 1897, OUTLINES HUMAN PHYSL
Skedung L, 2010, TRIBOL LETT, V37, P389, DOI 10.1007/s11249-009-9538-z
Tomimoto M, 2011, P INT TRIB C HIR JAP
Tomimoto M., 2011, P INT S MECH SHANGH, P192
Tomimoto M, 2009, P WORLD TRIB C 4 KYO, P147
Tomimoto M, 2010, P JAST TRIB C FUK JA
Tomimoto M, 2011, TRIBOL INT, V44, P1340, DOI 10.1016/j.triboint.2010.12.004
Tomlinson SE, 2011, TRIBOL LETT, V41, P283, DOI 10.1007/s11249-010-9709-y
TOMOVIC R, 1962, IRE T AUTOM CONTROL, VAC 7, P3, DOI 10.1109/TAC.1962.1105456
Verhoeff J, 1963, PULP PAPER MAG CAN, V64, P509
Warman PH, 2009, J EXP BIOL, V212, P2015, DOI 10.1242/jeb.028977
NR 33
TC 0
Z9 0
U1 0
U2 18
PU SCIENCE PRESS
PI BEIJING
PA 16 DONGHUANGCHENGGEN NORTH ST, BEIJING 100717, PEOPLES R CHINA
SN 1672-6529
J9 J BIONIC ENG
JI J. Bionic Eng.
PD JAN
PY 2014
VL 11
IS 1
BP 98
EP 108
DI 10.1016/s1672-6529(14)60024-X
PG 11
WC Engineering, Multidisciplinary; Materials Science, Biomaterials;
Robotics
SC Engineering; Materials Science; Robotics
GA 294XQ
UT WOS:000330081600010
DA 2018-01-22
ER

PT J
AU Saegusa, R
Metta, G
Sandini, G
Natale, L
AF Saegusa, Ryo
Metta, Giorgio
Sandini, Giulio
Natale, Lorenzo
TI Developmental Perception of the Self and Action
SO IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
LA English
DT Article
DE Action learning; action perception; imitation; manipulation; mirror
neuron; self-perception
ID BODY SCHEMA; PREMOTOR CORTEX; MIRROR NEURONS; HUMANOID ROBOT; IMITATION;
ORGANIZATION; AFFORDANCES; RECOGNITION; PLATFORM; INFANTS
AB This paper describes a developmental framework for action-driven perception in
anthropomorphic robots. The key idea of the framework is that action generation
develops the agent's perception of its own body and actions. Action-driven
development is critical for identifying changing body parts and understanding the
effects of actions in unknown or nonstationary environments. We embedded minimal
knowledge into the robot's cognitive system in the form of motor synergies and
actions to allow motor exploration. The robot voluntarily generates actions and
develops the ability to perceive its own body and the effect that it generates on
the environment. The robot, in addition, can compose this kind of learned
primitives to perform complex actions and characterize them in terms of their
sensory effects. After learning, the robot can recognize manipulative human
behaviors with cross-modal anticipation for the recovery of an unavailable sensory
modality, and reproduce the recognized actions afterward. We evaluated the proposed
framework in experiments with a real robot, in which we achieved
autonomous body identification, learning of fixation, reaching and grasping
actions, and developmental recognition of human actions as well as their
reproduction.
C1 [Saegusa, Ryo] Toyohashi Univ Technol, Ctr Human Robot Symbiosis Res, Toyohashi,
Aichi 4418580, Japan.
[Metta, Giorgio; Natale, Lorenzo] Ist Italiano Tecnol, iCub Facil, I-16163
Genoa, Italy.
[Sandini, Giulio] Ist Italiano Tecnol, Robot Brain & Cognit Sci Dept, I-16163
Genoa, Italy.
RP Saegusa, R (reprint author), Toyohashi Univ Technol, Ctr Human Robot Symbiosis
Res, Toyohashi, Aichi 4418580, Japan.
EM ryos@ieee.org; giorgio.metta@iit.it; giulio.sandini@iit.it;
lorenzo.natale@iit.it
OI Natale, Lorenzo/0000-0002-8777-5233
FU Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di
Tecnologia, Genoa, Italy
FX This work was supported by the Robotics, Brain and Cognitive Sciences
Department, Istituto Italiano di Tecnologia, Genoa, Italy.
CR Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Castellini C, 2011, IEEE T AUTON MENT DE, V3, P207, DOI
10.1109/TAMD.2011.2106782
Fitzpatrick P, 2003, PHILOS T R SOC A, V361, P2165, DOI 10.1098/rsta.2003.1251
Fitzpatrick P, 2008, INFANT CHILD DEV, V17, P7, DOI 10.1002/icd.541
Fogassi L, 2005, SCIENCE, V308, P662, DOI 10.1126/science.1106138
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
Gibson James, 1986, ECOLOGICAL APPROACH
Griffith S, 2012, IEEE T AUTON MENT DE, V4, P54, DOI 10.1109/TAMD.2011.2157504
Hikita M, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2041, DOI 10.1109/IROS.2008.4650948
Hoffmann M, 2010, IEEE T AUTON MENT DE, V2, P304, DOI 10.1109/TAMD.2010.2086454
Iriki A, 1996, NEUROREPORT, V7, P2325
Iriki A, 2001, NEUROSCI RES, V40, P163, DOI 10.1016/S0168-0102(01)00225-5
Itti L, 1998, IEEE T PATTERN ANAL, V20, P1254, DOI 10.1109/34.730558
Kaneko T, 2011, P ROY SOC B-BIOL SCI, V278, P3694, DOI 10.1098/rspb.2011.0611
Kawato M, 1999, CURR OPIN NEUROBIOL, V9, P718, DOI 10.1016/S0959-4388(99)00028-8
KEMPENAAR C, 2006, P 8 INT C CONCR BLOC, P1
Kormushev P., 2010, P IEEE INT C HUM ROB, P417
Lallee S., 2011, P IEEE RSJ INT C IRO, P1
MacQueen J., 1967, P 5 BERK S MATH STAT, V1, P281
Maravita A, 2004, TRENDS COGN SCI, V8, P79, DOI 10.1016/j.tics.2003.12.008
Metta G, 2006, INTERACT STUD, V7, P197, DOI 10.1075/is.7.2.06met
Metta G, 2010, NEURAL NETWORKS, V23, P1125, DOI 10.1016/j.neunet.2010.08.010
Montesano L, 2008, IEEE T ROBOT, V24, P15, DOI 10.1109/TRO.2007.914848
Natale L., 2004, THESIS U GENOA GENOA
Oztop E, 2002, BIOL CYBERN, V87, P116, DOI 10.1007/s00422-002-0318-1
Oztop E, 2006, NEURAL NETWORKS, V19, P254, DOI 10.1016/j.neunet.2006.02.002
Paine RW, 2004, NEURAL NETWORKS, V17, P1291, DOI 10.1016/j.neunet.2004.08.005
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Saegusa R, 2012, IEEE T IND ELECTRON, V59, P3199, DOI 10.1109/TIE.2011.2157280
Saegusa R, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P2598, DOI 10.1109/IROS.2009.5354226
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Stoytchev A., 2007, P 7 INT C EP ROB EPI, V135, P165
Tsagarakis NG, 2007, ADV ROBOTICS, V21, P1151, DOI 10.1163/156855307781389419
Ugur E, 2011, ROBOT AUTON SYST, V59, P580, DOI 10.1016/j.robot.2011.04.005
Viswanathan GM, 2008, PHYS LIFE REV, V5, P133, DOI 10.1016/j.plrev.2008.03.002
Watanabe H, 2006, INFANT BEHAV DEV, V29, P402, DOI 10.1016/j.infbeh.2006.02.001
WOLPERT DM, 1995, SCIENCE, V269, P1880, DOI 10.1126/science.7569931
WOLPERT DM, 2001, CURR BIOL, V11, P729, DOI 10.1016/S0960-9822(01)00432-8
Yokoya R, 2007, ADV ROBOTICS, V21, P1351, DOI 10.1163/156855307781746106
NR 39
TC 11
Z9 11
U1 2
U2 11
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 2162-237X
EI 2162-2388
J9 IEEE T NEUR NET LEAR
JI IEEE Trans. Neural Netw. Learn. Syst.
PD JAN
PY 2014
VL 25
IS 1
SI SI
BP 183
EP 202
DI 10.1109/TNNLS.2013.2271793
PG 20
WC Computer Science, Artificial Intelligence; Computer Science, Hardware &
Architecture; Computer Science, Theory & Methods; Engineering,
Electrical & Electronic
SC Computer Science; Engineering
GA 279FT
UT WOS:000328946500015
PM 24806653
DA 2018-01-22
ER

PT J
AU Murata, S
Namikawa, J
Arie, H
Sugano, S
Tani, J
AF Murata, Shingo
Namikawa, Jun
Arie, Hiroaki
Sugano, Shigeki
Tani, Jun
TI Learning to Reproduce Fluctuating Time Series by Inferring Their
Time-Dependent Stochastic Properties: Application in Robot Learning Via
Tutoring
SO IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT
LA English
DT Article
DE Dynamical systems approach; humanoid robot; maximum likelihood
estimation; recurrent neural network
ID RECURRENT NEURAL NETWORKS; DYNAMICAL-SYSTEMS; HUMANOID ROBOT; MODEL;
TASK
AB This study proposes a novel type of dynamic neural network model that can learn
to extract stochastic or fluctuating structures hidden in time series data. The
network learns to predict not only the mean of the next input state, but also its
time-dependent variance. The training method is based on maximum likelihood
estimation using the gradient descent method, and the likelihood function is
expressed as a function of the estimated variance. Regarding the model evaluation,
we present numerical experiments in which training data were generated in different
ways utilizing Gaussian noise. Our analysis showed that the network can predict the
time-dependent variance and the mean, and that it can also reproduce the target
stochastic sequence data by utilizing the estimated variance. Furthermore, it was
shown that a humanoid robot using the proposed network can learn to reproduce
latent stochastic structures hidden in fluctuating tutoring trajectories. This
learning scheme is essential for the acquisition of sensory-guided skilled
behavior.
C1 [Murata, Shingo; Arie, Hiroaki; Sugano, Shigeki] Waseda Univ, Dept Modern Mech
Engn, Tokyo 1698555, Japan.
[Namikawa, Jun] RIKEN, Brain Sci Inst, Wako, Saitama 3510198, Japan.
[Tani, Jun] Korea Adv Inst Sci & Technol, Dept Elect Engn, Taejon 305701, South
Korea.
RP Murata, S (reprint author), Waseda Univ, Dept Modern Mech Engn, Tokyo 1698555,
Japan.
EM murata@sugano.mech.waseda.ac.jp; jnamika@gmail.com;
arie@sugano.mech.waseda.ac.jp; sugano@waseda.jp; tani1216jp@gmail.com
RI Tani, Jun/H-3681-2012
FU Institute for Infocomm Research, Agency for Science, Technology and
Research (A*STAR), Singapore [AH/OCL/1082/0111/I2R]
FX This study was supported by grants from the Institute for Infocomm
Research, Agency for Science, Technology and Research (A*STAR),
Singapore (AH/OCL/1082/0111/I2R).
CR BEER RD, 1995, ADAPT BEHAV, V3, P469, DOI 10.1177/105971239500300405
Butz M. V., 2007, LNAI
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Calinon S, 2009, ADV ROBOTICS, V23, P2059, DOI 10.1163/016918609X12529294461843
Coates A, 2008, P 25 INT C MACH LEAR, P144, DOI 10.1145/1390156.1390175
DOYA K, 1989, NEURAL NETWORKS, V2, P375, DOI 10.1016/0893-6080(89)90022-1
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Friston KJ, 2011, BIOL CYBERN, V104, P137, DOI 10.1007/s00422-011-0424-z
Friston KJ, 2009, TRENDS COGN SCI, V13, P293, DOI 10.1016/j.tics.2009.04.005
FUNAHASHI K, 1993, NEURAL NETWORKS, V6, P801, DOI 10.1016/S0893-6080(05)80125-X
Gibson E. J., 2000, ECOLOGICAL APPROACH
Gribovskaya E, 2011, INT J ROBOT RES, V30, P80, DOI 10.1177/0278364910376251
Ito M, 2006, NEURAL NETWORKS, V19, P323, DOI 10.1016/j.neunet.2006.02.007
JORDAN MI, 1992, COGNITIVE SCI, V16, P307, DOI 10.1016/0364-0213(92)90036-T
Khansari-Zadeh SM, 2011, IEEE T ROBOT, V27, P943, DOI 10.1109/TRO.2011.2159412
MUEHLIG M, 2009, P IEEE INT C ROB AUT, P1177
Namikawa J., 2013, P 3 INT C COGN NEUR
Namikawa J, 2008, NEURAL NETWORKS, V21, P1466, DOI 10.1016/j.neunet.2008.09.005
Nishide S, 2012, IEEE T AUTON MENT DE, V4, P139, DOI 10.1109/TAMD.2011.2177660
Rumelhart D.E., 1986, PARALLEL DISTRIBUTED, V1, P318
Tani J, 1996, IEEE T SYST MAN CY B, V26, P421, DOI 10.1109/3477.499793
Williams RJ, 1989, NEURAL COMPUT, V1, P270, DOI 10.1162/neco.1989.1.2.270
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yamashita Y, 2008, PLOS COMPUT BIOL, V4, DOI 10.1371/journal.pcbi.1000220
NR 24
TC 17
Z9 17
U1 0
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1943-0604
EI 1943-0612
J9 IEEE T AUTON MENT DE
JI IEEE Trans. Auton. Ment. Dev.
PD DEC
PY 2013
VL 5
IS 4
BP 298
EP 310
DI 10.1109/TAMD.2013.2258019
PG 13
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA 275UM
UT WOS:000328703300003
DA 2018-01-22
ER

PT J
AU Owaki, D
Osuka, K
Ishiguro, A
AF Owaki, Dai
Osuka, Koichi
Ishiguro, Akio
TI Stabilization mechanism underlying passive dynamic running
SO ADVANCED ROBOTICS
LA English
DT Article
DE passive dynamic running; morphological computation; two-delay feedback
control
ID MORPHOLOGICAL COMPUTATION; LEGGED LOCOMOTION; HUMANOID ROBOT; MODEL;
WALKING; PLANE
AB In this study, we discuss the stabilization mechanism underlying passive dynamic
running (PDR), focusing on the feedback structure in an analytical Poincare map. To
this end, we derived a linearized analytical Poincare map for PDR and analyzed its
stability in terms of linear control theory. Through our theoretical analysis, we
found that an implicit 'two-delay feedback structure,' which can be seen as a
certain type of two-delay input digital feedback control developed as an artificial
control structure for digital control, is an inherent stabilization mechanism in
PDR appearing from a minimalistic biped model with elastic elements. To the best of
our knowledge, this has yet to be addressed and studied. Our results shed new light
on the principles underlying bipedal locomotion as 'morphological computation.'
C1 [Owaki, Dai; Ishiguro, Akio] Tohoku Univ, Elect Commun Res Inst, Aoba Ku,
Sendai, Miyagi 9808579, Japan.
[Osuka, Koichi] Osaka Univ, Dept Mech Engn, Suita, Osaka 5650871, Japan.
[Ishiguro, Akio] Japan Sci & Technol Agcy, CREST, Chiyoda Ku, Tokyo 1020075,
Japan.
RP Owaki, D (reprint author), Tohoku Univ, Elect Commun Res Inst, Aoba Ku, 2-1-1
Karahira, Sendai, Miyagi 9808579, Japan.
EM owaki@riec.tohoku.ac.jp
CR Holmes P, 2004, J THEOR BIOL, V232, P315
BLICKHAN R, 1989, J BIOMECH, V22, P1217, DOI 10.1016/0021-9290(89)90224-8
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Full RJ, 1999, J EXP BIOL, V202, P3325
Garcia M., 1998, ASME, V120, P281, DOI 10.1115/1.2798313
Geyer H, 2006, P R SOC B, V273, P2861, DOI 10.1098/rspb.2006.3637
Ghigliazza RM, 2003, SIAM J APPL DYN SYST, V2, P187, DOI
10.1137/S1111111102408311
Goswami A, 1998, INT J ROBOT RES, V17, P1282, DOI 10.1177/027836499801701202
Grizzle J. W., 1999, Proceedings of the 38th IEEE Conference on Decision and
Control (Cat. No.99CH36304), P3869, DOI 10.1109/CDC.1999.827961
Hauser H, 2012, BIOL CYBERN, V106, P595, DOI 10.1007/s00422-012-0516-4
Hauser H, 2011, BIOL CYBERN, V105, P355, DOI 10.1007/s00422-012-0471-0
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirata Kentaro, 2011, SICE Journal of Control, Measurement, and System
Integration, V4, P29
Hirata K., 2003, P IEEE C CONTR APPL, P949
Hirose M, 2007, PHILOS T R SOC A, V365, P11, DOI 10.1098/rsta.2006.1917
Hirukawa H, 2004, ROBOT AUTON SYST, V48, P165, DOI 10.1016/j.robot.2004.07.007
Iida F, 2006, ROBOT AUTON SYST, V54, P631, DOI 10.1016/j.robot.2006.03.005
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
MATSUSHITA K, 2005, P 2005 IEEE INT C RO, P2020
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
MCGEER T, 1990, PROC R SOC SER B-BIO, V240, P107, DOI 10.1098/rspb.1990.0030
MCMAHON TA, 1990, J BIOMECH, V23, P65, DOI 10.1016/0021-9290(90)90042-2
MITA T, 1990, IEEE T AUTOMAT CONTR, V35, P962, DOI 10.1109/9.58514
Miyashita S, 2011, INT J ROBOT RES, V30, P627, DOI 10.1177/0278364910393288
Osuka K, 2010, IEEE INT C INT ROBOT, P2407, DOI 10.1109/IROS.2010.5653968
Owaki D., 2009, THESIS TOHOKU U JAPA
Owaki D, 2011, IEEE T ROBOT, V27, P156, DOI 10.1109/TRO.2010.2098610
Paul C, 2006, IEEE T ROBOT, V22, P944, DOI 10.1109/TRO.2006.878980
Paul C, 2006, ROBOT AUTON SYST, V54, P619, DOI 10.1016/j.robot.2006.03.003
Peuker F, 2012, BIOINSPIR BIOMIM, V7, DOI 10.1088/1748-3182/7/3/036002
Pfeifer R., 2006, BODY SHAPES WAY WE T
Pfeifer R., 1999, UNDERSTANDING INTELL
Pfeifer R., 2005, JAPANESE SCI MONTHLY, V58, P48, DOI 10.1007/978-3-642-00616-6_
Pfeifer R, 2007, SCIENCE, V318, P1088, DOI 10.1126/science.1145803
Raibert M., 1986, LEGGED ROBOT BALANCE
Sakagami Y., 2002, P IEEE RSJ INT C INT, V3, P2478, DOI 10.1109/IRDS.2002.1041641
Schmitt J, 2002, BIOL CYBERN, V86, P343, DOI 10.1007/s00422-001-0300-3
Schmitt J, 2006, J COMPUT NONLIN DYN, V1, P348, DOI 10.1115/1.2338650
Seipel JE, 2005, INT J ROBOT RES, V24, P657, DOI 10.1177/0278364905056194
Seyfarth A, 2002, J BIOMECH, V35, P649, DOI 10.1016/S0021-9290(01)00245-7
Seyfarth A, 2003, J EXP BIOL, V206, P2547, DOI 10.1242/jeb.00463
Sugimoto Y, 2005, P 12 INT C ADV ROB, P236
Pfeifer R, 2006, P CLAWAR 2006 BELG B, P12
NR 43
TC 1
Z9 1
U1 0
U2 16
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD DEC 1
PY 2013
VL 27
IS 18
BP 1399
EP 1407
DI 10.1080/01691864.2013.839087
PG 9
WC Robotics
SC Robotics
GA 233XK
UT WOS:000325602600002
DA 2018-01-22
ER

PT J
AU Attamimi, M
Araki, T
Nakamura, T
Nagai, T
AF Attamimi, Muhammad
Araki, Takaya
Nakamura, Tomoaki
Nagai, Takayuki
TI Visual Recognition System for Cleaning Tasks by Humanoid Robots
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Object Recognition; Material Recognition; Multiple Cues and Cleaning
Tasks
ID FEATURES
AB In this study, we present a visual recognition system that enables a robot to
clean a tabletop. The proposed system comprises object recognition, material
recognition and ungraspable object detection using information acquired from a
visual sensor. Multiple cues such as colour, texture and three-dimensional point-
clouds are incorporated adaptively for achieving object recognition. Moreover,
near-infrared (NIR) reflection intensities captured by the visual sensor are used
for realizing material recognition. A Gaussian mixture model (GMM) is employed for
modelling the tabletop surface, and this model is used for detecting ungraspable objects.
The proposed system was implemented in a humanoid robot, and tasks such as object
and material recognition were performed in various environments. In addition, we
evaluated ungraspable object detection using various objects such as dust, grains
and paper waste. Finally, we executed the cleaning task to evaluate the proposed
system's performance. The results revealed that the proposed system affords high
recognition rates and enables humanoid robots to perform domestic service tasks
such as cleaning.
C1 [Attamimi, Muhammad; Araki, Takaya; Nakamura, Tomoaki; Nagai, Takayuki] Univ
Electrocommun, Dept Mech Engn & Intelligent Syst, Tokyo, Japan.
RP Attamimi, M (reprint author), Univ Electrocommun, Dept Mech Engn & Intelligent
Syst, Tokyo, Japan.
EM m_att@apple.ee.uec.ac.jp
FU JSPS KAKENHI [24009813]
FX This work was supported by JSPS KAKENHI grant number 24009813.
CR Attamimi M, 2010, IEEE INT C INT ROBOT, P4560, DOI 10.1109/IROS.2010.5650455
Attamimi M, 2010, IEEE INT CONF ROBOT, P745, DOI 10.1109/ROBOT.2010.5509417
BESL PJ, 1992, IEEE T PATTERN ANAL, V14, P239, DOI 10.1109/34.121791
Bohme M, 2010, SHADING CONSTRAINT I
Chang CC, 2011, ACM T INTEL SYST TEC, V2, DOI 10.1145/1961189.1961199
Csurka G., 2004, WORKSH STAT LEARN CO, P1
Everingham M, 2010, INT J COMPUT VISION, V88, P303, DOI 10.1007/s11263-009-0275-
4
Gorur D, 2010, J COMPUT SCI TECH-CH, V25, P653, DOI [10.1007/s11390-010-1051-1,
10.1007/s11390-010-9355-8]
Harada T, 2009, J ROBOTICS SOC JAPAN, V27, P749
Hess J, 2011, INT C ICRA
Inamura T, 2011, J RSJ, V25, P15
Lai K, 2011, IEEE INT CONF ROBOT, P1817
LaValle S, 1998, RAPIDLY EXPLORING RA
Liu C, 2010, PROC CVPR IEEE, P239, DOI [10.1109/CVPR.2010.5540207,
10.1109/ICCET.2010.5485248]
Lowe DG, 2004, INT J COMPUT VISION, V60, P91, DOI
10.1023/B:VISI.0000029664.99615.94
Murase H, 1998, J81D29 IEIEC, P2035
Nowak E, 2006, LECT NOTES COMPUT SC, V3954, P490
Oggier T, 2006, LECT NOTES ARTIF INT, V4021, P212
Okada K., 2001, IEEE INT C ROB AUT, V2, P2120
Osada R, 2002, ACM T GRAPHIC, V21, P807, DOI 10.1145/571647.571648
Rasmussen CE, 2000, ADV NEUR IN, V12, P554
Rusu RB, 2010, IEEE INT C INT ROBOT, P2155, DOI 10.1109/IROS.2010.5651280
Salamati N, 2009, SEVENTEENTH COLOR IMAGING CONFERENCE - COLOR SCIENCE AND
ENGINEERING SYSTEMS, TECHNOLOGIES, AND APPLICATIONS, P216
Sande K. E. A, 2008, EVALUATION COLOR DES, P1
Sato F, 2011, IEEE INT C INT ROBOT, P3179, DOI 10.1109/IROS.2011.6048182
Sturmer M, 2008, STANDARDIZATION INTE
Vedaldi A., 2010, ACM MULTIMEDIA, P1469
Yamazaki K, 2010, IEEE INT C INT ROBOT, P1365, DOI 10.1109/IROS.2010.5653614
NR 28
TC 0
Z9 0
U1 0
U2 16
PU SAGE PUBLICATIONS INC
PI THOUSAND OAKS
PA 2455 TELLER RD, THOUSAND OAKS, CA 91320 USA
SN 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD NOV 8
PY 2013
VL 10
AR 384
DI 10.5772/56629
PG 14
WC Robotics
SC Robotics
GA 248DZ
UT WOS:000326675200001
OA gold
DA 2018-01-22
ER

PT J
AU Trovato, G
Kishi, T
Endo, N
Zecca, M
Hashimoto, K
Takanishi, A
AF Trovato, Gabriele
Kishi, Tatsuhiro
Endo, Nobutsuna
Zecca, Massimiliano
Hashimoto, Kenji
Takanishi, Atsuo
TI Cross-Cultural Perspectives on Emotion Expressive Humanoid Robotic Head:
Recognition of Facial Expressions and Symbols
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Facial expressions; Culture; Emotion expression; Human-Robot
Interaction; Communication
ID DESIGN
AB Emotion display through facial expressions is an important channel of
communication. However, humans differ in the way they assign meaning to facial
cues, depending on their cultural background. This leads to a gap in the
recognition rates of expressions. The problem is also present when a robotic face
displays expressions: recognition of a robot's facial expressions is often hampered
by a cultural divide, and poor recognition rates may lead to poor acceptance and
interaction. It would be desirable if robots could flexibly switch their output
facial configuration, adapting to different cultural backgrounds. To achieve this,
we built a generation system that produces facial expressions and applied it to the
24-degree-of-freedom head of the humanoid social robot KOBIAN-R; thanks to the work
of illustrators and cartoonists, the system can generate two versions of the same
expression so that it can be easily recognised by both Japanese and Western
subjects. As a tool for making recognition easier, the display of Japanese comic
symbols on the robotic face has also been introduced and evaluated. In this work,
we conducted a cross-cultural study aimed at assessing this gap in recognition and
finding solutions for it. The investigation was also extended to Egyptian subjects
as a sample of a further culture. The results confirmed the differences in
recognition rates, the effectiveness of customising expressions, and the usefulness
of symbol display, suggesting that this approach may be valuable for robots that
will interact in multi-cultural environments in the future.
C1 [Trovato, Gabriele; Kishi, Tatsuhiro; Endo, Nobutsuna] Waseda Univ, Grad Sch Sci
& Engn, Shinjuku Ku, Tokyo 1620044, Japan.
[Zecca, Massimiliano; Hashimoto, Kenji] Waseda Univ, Fac Sci & Engn, Shinjuku
Ku, Tokyo 1620044, Japan.
[Zecca, Massimiliano; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst,
Shinjuku Ku, Tokyo 1620044, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Shinjuku Ku, Tokyo
1620044, Japan.
RP Trovato, G (reprint author), Waseda Univ, Grad Sch Sci & Engn, Shinjuku Ku, 41-
304,17 Kikui Cho, Tokyo 1620044, Japan.
EM contact@takanishi.mech.waseda.ac.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766; Zecca,
Massimiliano/0000-0003-4741-4334
FU RoboSoM project from the European FP7 program [248366]; GCOE Program
"Global Robot Academia" from the Ministry of Education, Culture, Sports,
Science and Technology of Japan; SolidWorks Japan K.K.; NiKKi Fron Co.;
Chukoh Chemical Industries; STMicroelectronics; DYDEN Corporation
FX This study was conducted as part of the Research Institute for Science
and Engineering, Waseda University, and as part of the humanoid project
at the Humanoid Robotics Institute, Waseda University. It was supported
in part by RoboSoM project from the European FP7 program (Grant
agreement No. 248366), GCOE Program "Global Robot Academia" from the
Ministry of Education, Culture, Sports, Science and Technology of Japan,
SolidWorks Japan K.K., NiKKi Fron Co., Chukoh Chemical Industries,
STMicroelectronics, and DYDEN Corporation, whom we thank for their
financial and technical support.
CR Albirini A, 2006, COMPUT EDUC, V47, P373, DOI 10.1016/j.compedu.2004.10.013
Arras KO, 2005, 0605001 EPFL AUT SYS
ASIMOV Issac, 1978, SCI FICTION CONT MYT
Bartneck C, 2005, P HCI INT LAS VEG US
Bartneck C, 2007, AI SOC, V21, P217, DOI 10.1007/s00146-006-0052-7
Beck A., 2010, P 3 INT WORKSH AFF I
Becker-Asano C., 2011, 2011 IEEE WORKSH AFF, P1, DOI DOI
10.1109/WACI.2011.5953147
Beira R, 2006, IEEE INT CONF ROBOT, P94, DOI 10.1109/ROBOT.2006.1641167
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Buck R., 1984, COMMUNICATION EMOTIO
Paolo D., 1999, TECHNOLOGY DISABILIT, V10, P77
EKMAN P, 1969, SCIENCE, V164, P86, DOI 10.1126/science.164.3875.86
EKMAN P, 1971, J PERS SOC PSYCHOL, V17, P124, DOI 10.1037/h0030377
Ekman P, 2002, FACIAL ACTION CODING
Elfenbein HA, 2003, CURR DIR PSYCHOL SCI, V12, P159, DOI 10.1111/1467-8721.01252
Endo Nobutsuna, 2011, J ROBOTICS MECHATRON, V23, P969
Evers V., 2008, P 3 ACM IEEE INT C H, P255
Hall E., 1977, CULTURE
Hegel F, 2010, P 19 IEEE INT S ROB, P120, DOI DOI 10.1109/ROMAN.2010.5598691
Hersey George L., 2009, FALLING LOVE STATUES
Hosoi A, 2011, KYARAKUTAA NO YUTAKA
ITOH K, 2006, CISM COUR L, P255
Johnston Ollie, 1995, ILLUSION LIFE DISNEY
Kedzierski J, 2013, INT J SOC ROBOT, V5, P237, DOI 10.1007/s12369-013-0183-1
Kishi T, 2012, IEEE INT C INT ROBOT, P4584, DOI 10.1109/IROS.2012.6386050
Koda T, 2008, WORKSH ENC CONV INT
Latour Bruno, 1991, WE HAVE NEVER BEEN M
Li DJ, 2010, INT J SOC ROBOT, V2, P175, DOI 10.1007/s12369-010-0056-9
Makatchev M, 2013, ACMIEEE INT CONF HUM, P357, DOI 10.1109/HRI.2013.6483610
Masaki Y, 2007, J EXP SOC PSYCHOL, V43, P303
Mazzei D, 2012, P IEEE RAS-EMBS INT, P195
MEHRABIAN A, 1967, J PERS SOC PSYCHOL, V6, P109, DOI 10.1037/h0024532
Morris D., 1994, BODYTALK WORLD GUIDE
Nisbett R. E., 2004, GEOGRAPHY THOUGHT AS
Ogura Y, 2006, IEEE INT CONF ROBOT, P76
Oh J.-H., 2006, P IEEE RSJ INT C INT, P1428, DOI DOI 10.1109/IROS.2006.281935
Plutchik R, 2002, EMOTIONS LIFE PERSPE
Poggi I., 2000, EMBODIED CONVERSATIO
Poggi I, 2006, PAROLE CORPO
Poggi I, 2001, MULTIMODALITY HUMAN
Rau PLP, 2009, COMPUT HUM BEHAV, V25, P587, DOI 10.1016/j.chb.2008.12.025
Raudys S, 1998, PATTERN RECOGN LETT, V19, P385, DOI 10.1016/S0167-8655(98)00016-
6
Riek L, 2010, P 2 INT S NEW FRONT
ROGERS EM, 1995, DIFFUSION INNOVATION, P4
Saif M., 2006, MACALESTER ISLAM J, V1, P53
Saldien J, 2010, INT J SOC ROBOT, V2, P377, DOI 10.1007/s12369-010-0067-6
Schodt Frederick L, 1988, INSIDE ROBOT KINGDOM
SHIMODA K, 1978, EUR J SOC PSYCHOL, V8, P169, DOI 10.1002/ejsp.2420080203
Loch KD, 2003, ADV TOPICS GLOBAL IN, P141
Telotte J. P., 1995, REPLICATIONS ROBOTIC
Thomas RM, 1987, ED TECHNOLOGY ITS CR, P25
Trovato Gabriele, 2012, Social Robotics. 4th International Conference (ICSR
2012). Proceedings, P35, DOI 10.1007/978-3-642-34103-8_4
Trovato G, 2012, P 2012 IEEE RAS INT
Trovato G, 2012, 2012 FIRST INTERNATIONAL CONFERENCE ON INNOVATIVE ENGINEERING
SYSTEMS (ICIES), P129, DOI 10.1109/ICIES.2012.6530858
Van Breemen A., 2005, P 4 INT JOINT C AUT, P143
NR 55
TC 5
Z9 5
U1 0
U2 19
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD NOV
PY 2013
VL 5
IS 4
BP 515
EP 527
DI 10.1007/s12369-013-0213-z
PG 13
WC Robotics
SC Robotics
GA AE1VW
UT WOS:000333760400008
DA 2018-01-22
ER

PT J
AU Yamazaki, T
Igarashi, J
AF Yamazaki, Tadashi
Igarashi, Jun
TI Realtime cerebellum: A large-scale spiking network model of the
cerebellum that runs in realtime using a graphics processing unit
SO NEURAL NETWORKS
LA English
DT Article
DE Graphics processing unit; Realtime simulation; Spiking network model;
Cerebellar microcomplex; Reservoir computing; Robotics
ID LONG-TERM DEPRESSION; CONDITIONED EYELID RESPONSES; MEMORY TRACE;
NEURAL-NETWORK; ORGANIZATION; ACQUISITION; SIMULATION; CORTEX
AB The cerebellum plays an essential role in adaptive motor control. Once we are
able to build a cerebellar model that runs in realtime, meaning that a computer
simulation of 1 s in the simulated world completes within 1 s in the real world, the
cerebellar model can be used as a realtime adaptive neural controller for physical
hardware such as humanoid robots. In this paper, we introduce "Realtime Cerebellum
(RC)", an implementation on a graphics processing unit (GPU) of our large-scale spiking
network model of the cerebellum, which was originally built to study cerebellar
mechanisms for simultaneous gain and timing control and which acts as a general-purpose
supervised learning machine for spatiotemporal information in the manner of reservoir
computing. Owing to the massive parallel computing capability of a GPU, RC runs in
realtime while reproducing qualitatively the same simulation results for Pavlovian delay
eyeblink conditioning as the previous version. RC is then adopted as a realtime adaptive
controller of a humanoid robot, which is instructed to learn online the proper timing to
swing a bat so as to hit a flying ball. These results suggest that RC provides a means
to apply the computational power of the cerebellum as a versatile supervised learning
machine to engineering applications. (C) 2013 Elsevier Ltd. All rights reserved.
C1 [Yamazaki, Tadashi] RIKEN, Brain Sci Inst, Wako, Saitama 3510198, Japan.
[Igarashi, Jun] RIKEN, Computat Sci Res Program, Wako, Saitama 3510198, Japan.
RP Yamazaki, T (reprint author), Univ Electrocommun, Grad Sch Informat & Engn, 1-5-
1 Chofugaoka, Chofu, Tokyo 1828585, Japan.
EM nn12@neuralgorithm.org
FU KAKENHI [20700301, 23300055]
FX We would like to thank Mr. Mamoru Kubota at The University of
Electro-Communications for creating the swing motion of a robot. This
study was supported by KAKENHI (20700301, 23300055).
CR Antonelo EA, 2010, IEEE INT CONF ROBOT, P2959, DOI 10.1109/ROBOT.2010.5509212
BERTHIER NE, 1986, EXP BRAIN RES, V63, P341
Carrillo RR, 2008, BIOSYSTEMS, V94, P18, DOI 10.1016/j.biosystems.2008.05.008
Chandra R., 2000, PARALLEL PROGRAMMING
Christian KM, 2003, LEARN MEMORY, V10, P427, DOI 10.1101/lm.59603
Coesmans M, 2004, NEURON, V44, P691, DOI 10.1016/j.neuron.2004.10.031
Garcia KS, 1998, NEUROPHARMACOLOGY, V37, P471, DOI 10.1016/S0028-3908(98)00055-0
Goodman Dan F M, 2009, Front Neurosci, V3, P192, DOI 10.3389/neuro.01.026.2009
Hofstotter C, 2002, EUR J NEUROSCI, V16, P1361, DOI 10.1046/j.1460-
9568.2002.02182.x
Honda T, 2011, PLOS COMPUT BIOL, V7, DOI 10.1371/journal.pcbi.1002087
Igarashi J, 2011, NEURAL NETWORKS, V24, P950, DOI 10.1016/j.neunet.2011.06.008
Ito M, 2002, NAT REV NEUROSCI, V3, P896, DOI 10.1038/nrn962
Ito M, 2001, PHYSIOL REV, V81, P1143
Ito M., 2011, CEREBELLUM BRAIN IMP
Jaeger H., 2007, ECHO STATE NETWORKS
Jirenhed DA, 2007, J NEUROSCI, V27, P2493, DOI 10.1523/JNEUROSCI.4202-06.2007
KAWATO M, 1992, BIOL CYBERN, V68, P95, DOI 10.1007/BF00201431
Lev-Ram V, 2003, P NATL ACAD SCI USA, V100, P15989, DOI 10.1073/pnas.2636935100
Matsumoto M., 1998, ACM Transactions on Modeling and Computer Simulation, V8,
P3, DOI 10.1145/272991.272995
Mauk MD, 1997, LEARN MEMORY, V4, P130, DOI 10.1101/lm.4.1.130
Miikkulainen R., 2005, COMPUTATIONAL MAPS V
MIYAMOTO H, 1988, NEURAL NETWORKS, V1, P251, DOI 10.1016/0893-6080(88)90030-5
Nageswaran JM, 2009, NEURAL NETWORKS, V22, P791, DOI
10.1016/j.neunet.2009.06.028
NN3, 2007, NEUR FOR COMP, V3
[Anonymous], 2009, NVIDIAS NEXT GEN CUD
NVIDIA, 2011, CUDA PROGR GUID 4 0
Okamoto T, 2011, J NEUROSCI, V31, P8958, DOI 10.1523/JNEUROSCI.1151-11.2011
Pacheco P.S., 1996, PARALLEL PROGRAMMING
Patterson D. A., 2011, COMPUTER ORG DESIGN
PERRETT SP, 1993, J NEUROSCI, V13, P1708
Shibata T, 2001, NEURAL NETWORKS, V14, P201, DOI 10.1016/S0893-6080(00)00084-8
Shutoh F, 2006, NEUROSCIENCE, V139, P767, DOI 10.1016/j.neuroscience.2005.12.035
Skowronski MD, 2007, NEURAL NETWORKS, V20, P414, DOI
10.1016/j.neunet.2007.04.006
Yamazaki T, 2005, NEURAL COMPUT, V17, P1032, DOI 10.1162/0899766053491850
Yamazaki T, 2007, EUR J NEUROSCI, V26, P2279, DOI 10.1111/j.1460-
9568.2007.05837.x
Yamazaki T, 2007, NEURAL NETWORKS, V20, P290, DOI 10.1016/j.neunet.2007.04.004
Yamazaki T, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0033319
Yamazaki T, 2009, CEREBELLUM, V8, P423, DOI 10.1007/s12311-009-0115-7
NR 38
TC 28
Z9 28
U1 0
U2 19
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0893-6080
EI 1879-2782
J9 NEURAL NETWORKS
JI Neural Netw.
PD NOV
PY 2013
VL 47
SI SI
BP 103
EP 111
DI 10.1016/j.neunet.2013.01.019
PG 9
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA 231VP
UT WOS:000325446800014
PM 23434303
OA gold
DA 2018-01-22
ER
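
The record above attributes the realtime performance of the cerebellar model to the
GPU's ability to apply the same simple per-neuron update to a very large population in
parallel. Below is only a minimal sketch of that idea: a generic leaky integrate-and-fire
update in NumPy, with illustrative time constants and threshold; it is not the authors'
cerebellar model or their CUDA implementation.

```python
import numpy as np

def lif_step(v, i_syn, dt=1e-3, tau=20e-3,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """One Euler step of a leaky integrate-and-fire population.

    v     : membrane potentials (mV), one entry per neuron
    i_syn : synaptic drive for each neuron (same shape as v)
    Returns the updated potentials and a boolean spike mask.
    """
    v = v + (-(v - v_rest) + i_syn) * (dt / tau)   # leaky integration
    spikes = v >= v_thresh                         # threshold crossing
    v = np.where(spikes, v_reset, v)               # reset spiking neurons
    return v, spikes

# usage: advance 100,000 neurons by one 1 ms step under random drive
v = np.full(100_000, -65.0)
v, spikes = lif_step(v, np.random.uniform(0.0, 30.0, size=v.size))
```

Because every neuron is updated by the same element-wise arithmetic, the whole population
can be advanced in one GPU kernel launch per time step, which is what makes realtime
execution of large networks feasible.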

PT J
AU Shimmyo, S
Sato, T
Ohnishi, K
AF Shimmyo, Shuhei
Sato, Tomoya
Ohnishi, Kouhei
TI Biped Walking Pattern Generation by Using Preview Control Based on
Three-Mass Model
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Biped robot; preview control; three-mass model; walking pattern
generation; zero-moment point (ZMP)
ID HUMANOID ROBOT; GAIT; MANIPULATION; STABILITY; MOTION; FORCE; POINT;
MOTOR; ZMP
AB Biped walking robots have been widely researched because biped walking is
considered an effective way to locomote in human environments. There are many ways to
generate biped walking patterns, and it is widely known that the certainty of the robot
model and the immediate generation of the trajectory are very important. Preview control
is a useful way to generate walking patterns. In conventional approaches, the modeling
errors introduced by using a simple model are compensated by applying the preview
controller a second time; however, it is known that regenerating the trajectory in this
way takes considerable time. In this paper, biped walking pattern generation using
preview control based on a three-mass model is proposed. With the proposed method, the
modeling errors can be decreased without applying the preview controller again, which
means that the trajectory can be generated immediately. The validity of the proposed
method is confirmed by numerical examples and experimental results.
C1 [Shimmyo, Shuhei; Sato, Tomoya; Ohnishi, Kouhei] Keio Univ, Dept Syst Design
Engn, Yokohama, Kanagawa 2238522, Japan.
RP Shimmyo, S (reprint author), Power & Ind Syst Res & Dev Ctr, Tokyo 1838511,
Japan.
EM shuhei@sum.sd.keio.ac.jp; sat@sum.sd.keio.ac.jp; ohnishi@sd.keio.ac.jp
RI Ohnishi, Kouhei/A-6543-2011
CR Albert A, 2003, J INTELL ROBOT SYST, V36, P109, DOI 10.1023/A:1022600522613
EGAMI T, 1995, IEEE T IND ELECTRON, V42, P494, DOI 10.1109/41.464612
Erbatur K, 2009, IEEE T IND ELECTRON, V56, P835, DOI 10.1109/TIE.2008.2005150
Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
Harada K., 2004, P IEEE RAS INT C HUM, V2, P640, DOI DOI
10.1109/ICHR.2004.1442676
Harada K, 2007, IEEE-ASME T MECH, V12, P53, DOI 10.1109/TMECH.2006.886254
Harada K, 2006, IEEE T ROBOT, V22, P568, DOI 10.1109/TRO.2006.870649
Harada K, 2010, IEEE-ASME T MECH, V15, P694, DOI 10.1109/TMECH.2009.2032180
Hirukawa H, 2006, IEEE INT CONF ROBOT, P1976, DOI 10.1109/ROBOT.2006.1641995
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S., 2006, P IEEE RSJ INT C INT, P2993
Kajita S., 2005, HUMANOID ROBOT
Kubo R, 2009, IEEE T IND ELECTRON, V56, P1364, DOI 10.1109/TIE.2008.2006936
MATSUSHITA A, 1995, IEEE T IND ELECTRON, V42, P50, DOI 10.1109/41.345845
Morisawa M, 2005, IEEE INT CONF ROBOT, P2405
Morisawa M., 2003, P 29 ANN C IEEE IND, P490
Morisawa M., 2000, THESIS KEIO U TOKYO
Motoi N, 2007, IEEE T IND INFORM, V3, P154, DOI 10.1109/TII.2007.898469
Nagasaka K., 1999, P IEEE INT C SYST MA, V6, P908
NISHIKAWA N, 1999, IEEJ T IND APPL, V119, P1507
Nishiwaki K., 2002, P IEEE RSJ INT C INT, V3, P2684
Nishiwaki K., 2007, P IEEE INT C ROB AUT, P2667
OHASHI E, 2004, P 30 ANN C IEEE IND, P117
Ohashi E, 2007, IEEE T IND ELECTRON, V54, P1632, DOI 10.1109/TIE.2007.894728
Ohashi E, 2009, IEEE T IND ELECTRON, V56, P3964, DOI 10.1109/TIE.2009.2024098
Ohnishi K, 1996, IEEE-ASME T MECH, V1, P56, DOI 10.1109/3516.491410
Park JH, 1998, IEEE INT CONF ROBOT, P3528, DOI 10.1109/ROBOT.1998.680985
Perrin N., 2011, P IEEE INT C ROB AUT, P1270
Saida T., 2003, P IEEE INT C ROB AUT, V3, P3815
Sato T., 2009, P IEEE INT C IND TEC, P1
Sato T, 2011, IEEE T IND ELECTRON, V58, P1385, DOI 10.1109/TIE.2010.2050753
Sato T, 2011, IEEE T IND ELECTRON, V58, P376, DOI 10.1109/TIE.2010.2052535
Stasse O, 2009, IEEE T ROBOT, V25, P960, DOI 10.1109/TRO.2009.2020354
Suwanratchatamanee K, 2011, IEEE T IND ELECTRON, V58, P3174, DOI
10.1109/TIE.2009.2030217
Takenaka T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P1084, DOI 10.1109/IROS.2009.5354662
Takenaka T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P1601, DOI 10.1109/IROS.2009.5354522
Takenaka T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P1092, DOI 10.1109/IROS.2009.5354654
Takenaka T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P1594, DOI 10.1109/IROS.2009.5354542
Tsuji T., 2002, P 7 INT WORKSH ADV M, P478
Ugurlu B, 2010, IEEE T IND ELECTRON, V57, P1701, DOI 10.1109/TIE.2009.2032439
VERRELST B, 2006, P IEEE RAS INT C HUM, P117
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
NR 44
TC 32
Z9 35
U1 1
U2 23
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD NOV
PY 2013
VL 60
IS 11
BP 5137
EP 5147
DI 10.1109/TIE.2012.2221111
PG 11
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 161LQ
UT WOS:000320194800036
DA 2018-01-22
ER
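
For readers who want the abstract's three-mass idea in equation form, a generic
multi-mass zero-moment point expression is sketched below (standard textbook form for
point masses, with the ground taken as z = 0 and rotational inertia neglected; the
symbols are not the paper's notation):

```latex
p_x \;=\; \frac{\sum_{i=1}^{3} m_i\,(\ddot{z}_i + g)\,x_i \;-\; \sum_{i=1}^{3} m_i\,\ddot{x}_i\,z_i}
           {\sum_{i=1}^{3} m_i\,(\ddot{z}_i + g)}
```

With a single mass at constant height z_c this reduces to the cart-table model
p_x = x - (z_c / g)\ddot{x} tracked by the classical ZMP preview controller; keeping three
masses (e.g., the torso and the two legs) lowers the modeling error that, as the abstract
notes, conventional approaches remove by running the preview controller a second time.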

PT J
AU Schiebener, D
Morimoto, J
Asfour, T
Ude, A
AF Schiebener, David
Morimoto, Jun
Asfour, Tamim
Ude, Ales
TI Integrating visual perception and manipulation for autonomous learning
of object representations
SO ADAPTIVE BEHAVIOR
LA English
DT Article
DE Humanoid robotics; developmental robotics; object perception; active
vision
ID RECOGNITION; SEGMENTATION; INFANTS; ORIENTATION; VISION; ROBOTS; MOTION;
UNITY
AB Humans can effortlessly perceive an object they encounter for the first time in
a possibly cluttered scene and memorize its appearance for later recognition. Such
performance is still difficult to achieve with artificial vision systems because it
is not clear how to define the concept of objectness in its full generality. In
this paper we propose a paradigm that integrates the robot's manipulation and
sensing capabilities to detect a new, previously unknown object and learn its
visual appearance. By making use of the robot's manipulation capabilities and force
sensing, we introduce additional information that can be utilized to reliably
separate unknown objects from the background. Once an object has been identified,
the robot can continuously manipulate it to accumulate more information about it
and learn its complete visual appearance. We demonstrate the feasibility of the
proposed approach by applying it to the problem of autonomous learning of visual
representations for viewpoint-independent object recognition on a humanoid robot.
C1 [Schiebener, David; Ude, Ales] Jozef Stefan Inst, Ljubljana, Slovenia.
[Schiebener, David] Karlsruhe Inst Technol, D-76021 Karlsruhe, Germany.
[Morimoto, Jun] ATR Computat Neurosci Labs, Dept Brain Robot Interface, Kyoto,
Japan.
[Asfour, Tamim] Karlsruhe Inst Technol, Inst Anthropomat, D-76021 Karlsruhe,
Germany.
[Ude, Ales] Jozef Stefan Inst, Dept Automat Biocybernet & Robot, Humanoid &
Cognit Robot Lab, Ljubljana, Slovenia.
[Ude, Ales] ATR Computat Neurosci Labs, Kyoto, Japan.
RP Ude, A (reprint author), Jozef Stefan Inst, Ljubljana, Slovenia.
EM ales.ude@ijs.si
OI Ude, Ales/0000-0003-3677-3972
FU EU Seventh Framework Programme [270273]; Xperience; SRBPS; MEXT;
Ministry of Internal Affairs and Communications; MEXT KAKENHI
[23120004]; Strategic International Cooperative Program of JST; NICT
within JAPAN TRUST International Research Cooperation Program
FX This work was supported by the EU Seventh Framework Programme (grant
agreement number 270273), Xperience, SRBPS, MEXT, the Ministry of
Internal Affairs and Communications (contract "Novel and innovative R&D
making use of brain structures''), MEXT KAKENHI (grant number 23120004)
and by the Strategic International Cooperative Program of JST. A. Ude
would also like to thank NICT for its support within the JAPAN TRUST
International Research Cooperation Program.
CR Asfour T, 2006, IEEE-RAS INT C HUMAN, P169, DOI 10.1109/ICHR.2006.321380
Beder C, 2006, LECT NOTES COMPUT SC, V3951, P135
Browatzki B, 2012, IEEE INT CONF ROBOT, P2021, DOI 10.1109/ICRA.2012.6225218
Chang L, 2012, IEEE INT CONF ROBOT, P3875, DOI 10.1109/ICRA.2012.6224575
Goulette F., 2001, P VIS MOD VIS C
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Crammer K., 2001, J MACHINE LEARNING R, V2, P2001
Csurka G., 2004, P ECCV INT WORKSH ST
Feldman J, 2003, TRENDS COGN SCI, V7, P252, DOI 10.1016/S1364-6613(03)00111-6
FISCHLER MA, 1981, COMMUN ACM, V24, P381, DOI 10.1145/358669.358692
Fitzpatrick P, 2003, PHILOS T R SOC A, V361, P2165, DOI 10.1098/rsta.2003.1251
Fitzpatrick P, 2008, INFANT CHILD DEV, V17, P7, DOI 10.1002/icd.541
Foissotte T, 2010, INT J HUM ROBOT, V7, P407, DOI 10.1142/S0219843610002246
Forssen P., 2007, MAXIMALLY STABLE COL
Gibson J. J., 1979, ECOLOGICAL APPROACH
Gupta M, 2012, IEEE INT CONF ROBOT, P3883, DOI 10.1109/ICRA.2012.6224787
Harris C., 1988, COMBINED CORNER EDGE
HORN BKP, 1987, J OPT SOC AM A, V4, P629, DOI 10.1364/JOSAA.4.000629
Johnson SP, 1996, COGNITIVE DEV, V11, P161, DOI 10.1016/S0885-2014(96)90001-5
Juttner M, 2006, BEHAV BRAIN RES, V175, P420, DOI 10.1016/j.bbr.2006.09.005
Katz D, 2008, IEEE INT CONF ROBOT, P272, DOI 10.1109/ROBOT.2008.4543220
Kellman P. J., 1993, VISUAL PERCEPTION CO, P121
Kenney J., 2009, INT C ROB AUT ICRA, P1377
Kraebel KS, 2007, DEV PSYCHOBIOL, V49, P406, DOI 10.1002/dev.20222
Kraft D, 2008, INT J HUM ROBOT, V5, P247, DOI 10.1142/S021984360800139X
Krainin M, 2011, INT J ROBOT RES, V30, P1311, DOI 10.1177/0278364911403178
Kuzmic Eva Stergarsek, 2010, 2010 10th IEEE-RAS International Conference on
Humanoid Robots (Humanoids 2010), P371, DOI 10.1109/ICHR.2010.5686266
Li FF, 2006, IEEE T PATTERN ANAL, V28, P594, DOI 10.1109/TPAMI.2006.79
Li WH, 2011, INT J ROBOT RES, V30, P1124, DOI 10.1177/0278364910395417
Lowe D. G., 1999, Proceedings of the Seventh IEEE International Conference on
Computer Vision, P1150, DOI 10.1109/ICCV.1999.790410
Matas J, 2004, IMAGE VISION COMPUT, V22, P761, DOI 10.1016/j.imavis.2004.02.006
Metta G, 2003, ADAPT BEHAV, V11, P109, DOI 10.1177/10597123030112004
Moreels P, 2007, INT J COMPUT VISION, V73, P263, DOI 10.1007/s11263-006-9967-1
Nishimura M., 2009, F1000 BIOL REPORTS, V1
PALMER S, 1994, PSYCHON B REV, V1, P29, DOI 10.3758/BF03200760
Peissig JJ, 2007, ANNU REV PSYCHOL, V58, P75, DOI
10.1146/annurev.psych.58.102904.190114
Pelleg D., 2000, ICML, V17, P727
Ramisa A, 2008, LECT NOTES COMPUT SC, V5008, P353
Roth V, 2006, LECT NOTES COMPUT SC, V4174, P11
Schaal S, 2005, SPR TRA ADV ROBOT, V15, P561
Schiebener David, 2011, 2011 11th IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2011), P500, DOI 10.1109/Humanoids.2011.6100843
Shibata T, 2001, ADAPT BEHAV, V9, P189, DOI 10.1177/10597123010093005
Smith WC, 2003, COGNITIVE PSYCHOL, V46, P31, DOI 10.1016/S0010-0285(02)00501-7
SPELKE ES, 1990, COGNITIVE SCI, V14, P29, DOI 10.1207/s15516709cog1401_3
TSIKOS CJ, 1991, IEEE T ROBOTIC AUTOM, V7, P306, DOI 10.1109/70.88140
Ude A., 2009, INT C ADV ROB ICAR M
Ude A, 2008, INT J HUM ROBOT, V5, P267, DOI 10.1142/S0219843608001406
Vahrenkamp N, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P2464, DOI 10.1109/IROS.2009.5354625
Welke K, 2010, IEEE INT CONF ROBOT, P2012, DOI 10.1109/ROBOT.2010.5509328
NR 49
TC 14
Z9 14
U1 0
U2 10
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 1059-7123
EI 1741-2633
J9 ADAPT BEHAV
JI Adapt. Behav.
PD OCT
PY 2013
VL 21
IS 5
BP 328
EP 345
DI 10.1177/1059712313484502
PG 18
WC Computer Science, Artificial Intelligence; Psychology, Experimental;
Social Sciences, Interdisciplinary
SC Computer Science; Psychology; Social Sciences - Other Topics
GA 301GI
UT WOS:000330520400002
DA 2018-01-22
ER

PT J
AU Ho, VA
Makikawa, M
Hirai, S
AF Van Anh Ho
Makikawa, Masaaki
Hirai, Shinichi
TI Flexible Fabric Sensor Toward a Humanoid Robot's Skin: Fabrication,
Characterization, and Perceptions
SO IEEE SENSORS JOURNAL
LA English
DT Article
DE Fabric sensor; slip detection; localized displacement; wavelet; texture
recognition
ID TACTILE; TEXTURE
AB We have developed a fabric sensor knitted from tension-sensitive electro-conductive
yarns. Each yarn has an elastic core around which two separate, tension-sensitive
electro-conductive threads are wound, making the sensor inherently flexible and
stretchable and allowing it to conform to any complicated surface on a robot, acting as
a robotic skin. The pile-shaped surface of the sensor
enhances its ability to detect tangential traction, while also enabling it to sense
a normal load. Our aim is to use this sensor in applications involving relative
sliding between its surface and a touched object, such as contact recognition, slip
detection, and surface identification through a sliding motion. We carefully
analyzed the static and dynamic characteristics of this sensor while varying the
load and stretching force to fully understand its response and determine its degree
of flexibility and stretchability. We found that a discrete wavelet transformation
may be used to indicate stick/slip states while the sensor is sliding over
surfaces. This method was then used to detect slippage events acting on the
sensor's surface, and to decode textures in a classification test using an
artificial neural network. Because of its flexibility and sensitivity, this sensor
can be used widely as a robotic skin in humanoid robots.
C1 [Van Anh Ho; Makikawa, Masaaki; Hirai, Shinichi] Ritsumeikan Univ, Dept Robot,
Kusatsu 5250072, Japan.
RP Ho, VA (reprint author), Ritsumeikan Univ, Dept Robot, Kusatsu 5250072, Japan.
EM hoanhvan@gmail.com; makikawa@se.ritsumei.ac.jp; hirai@se.ritsumei.ac.jp
FU JSPS, Japan [2324604]
FX This work was supported in part by Grant-in-Aid for Scientific Research
under Grant 2324604, JSPS, Japan. The associate editor coordinating the
review of this paper and approving it for publication was Dr. Ravinder
S. Dahiya.
CR Alirezaei H, 2009, 3DUI : IEEE SYMPOSIUM ON 3D USER INTERFACES 2009,
PROCEEDINGS, P87, DOI 10.1109/3DUI.2009.4811210
Ho VA, 2011, IEEE T ROBOT, V27, P411, DOI 10.1109/TRO.2010.2103470
Buscher G., 2012, P IEEE INT C IROS OC
Dahiya RS, 2010, IEEE T ROBOT, V26, P1, DOI 10.1109/TRO.2009.2033627
Damian DD, 2010, IEEE INT C INT ROBOT, P904, DOI 10.1109/IROS.2010.5652094
De Rossi D, 2010, IEEE ENG MED BIOL, V29, P37, DOI 10.1109/MEMB.2010.936555
de Boissieu F., 2009, P ROB SYST SCI JUN
Engel J., 2003, P IEEE RSJ INT C ROB, V3, P2359
Giguere P, 2011, IEEE T ROBOT, V27, P534, DOI 10.1109/TRO.2011.2119910
Gray R., 1990, ENTROPY INFORM THEOR
Guo L, 2009, WORLD SUMMIT ON GENETIC AND EVOLUTIONARY COMPUTATION (GEC 09), P177
Hasegawa Y., 2007, P T INT SOL STAT SEN, P1453
Ho V. A., 2012, P IEEE INT CASE AUG, P461
Ho V. A., 2011, P ROB SCI SYST JUN
Ho V. A., 2013, P ROB SCI SYST JUN
Hosoda K, 2006, ROBOT AUTON SYST, V54, P104, DOI 10.1016/j.robot.2005.09.019
Jamali N, 2011, IEEE T ROBOT, V27, P508, DOI 10.1109/TRO.2011.2127110
Kidono K., 2004, P 22 ANN C ROB SOC J
Lorussi F, 2013, IEEE SENS J, V13, P217, DOI 10.1109/JSEN.2012.2211099
Lumelsky V., 2001, IEEE SENS J, V1, P37
Mayer J., 2006, P IEEE INT S WEAR CO, P69
Mindlin R., 1949, ASME J APPL MECH, V16, P259
Mitchell T. M., 1997, MACHINE LEARNING
Mittendorfer P, 2011, IEEE T ROBOT, V27, P401, DOI 10.1109/TRO.2011.2106330
Morley J. W., 1998, NEURAL APECTS TACTIL
Mukaibo Y, 2005, IEEE INT CONF ROBOT, P2565
Nitta Corp, 2013, I SCAN PRESS DISTR S
Oddo CM, 2011, IEEE T ROBOT, V27, P522, DOI 10.1109/TRO.2011.2116930
Pacelli M., 2001, P FIBR TEXT FUT AUG
Papakostas T., 2002, P IEEE SENS, V2, P1152
Paradiso R, 2003, ENG MED BIOL SOC ANN, P283
Pezzementi Z, 2011, IEEE T ROBOT, V27, P473, DOI 10.1109/TRO.2011.2125350
Phan S, 2011, IEEE INT C INT ROBOT, P2992, DOI 10.1109/IROS.2011.6048844
Schmitz A, 2011, IEEE T ROBOT, V27, P389, DOI 10.1109/TRO.2011.2132930
SON JS, 1995, IROS '95 - 1995 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT
ROBOTS AND SYSTEMS: HUMAN ROBOT INTERACTION AND COOPERATIVE ROBOTS, PROCEEDINGS,
VOL 2, P96, DOI 10.1109/IROS.1995.526145
Tawil DS, 2011, IEEE T ROBOT, V27, P425, DOI 10.1109/TRO.2011.2125310
Teshigawara S, 2010, IEEE INT CONF ROBOT, P4867, DOI 10.1109/ROBOT.2010.5509288
Timoshenko S.P., 1970, THEORY ELASTICITY
Weeks M., 2007, DIGITAL SIGNAL PROCE
WESTLING G, 1987, EXP BRAIN RES, V66, P128
NR 40
TC 8
Z9 8
U1 3
U2 45
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1530-437X
EI 1558-1748
J9 IEEE SENS J
JI IEEE Sens. J.
PD OCT
PY 2013
VL 13
IS 10
BP 4065
EP 4080
DI 10.1109/JSEN.2013.2272336
PG 16
WC Engineering, Electrical & Electronic; Instruments & Instrumentation;
Physics, Applied
SC Engineering; Instruments & Instrumentation; Physics
GA 219HR
UT WOS:000324497400011
DA 2018-01-22
ER
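
The abstract above reports that detail coefficients of a discrete wavelet transform
indicate stick/slip states while the fabric sensor slides over a surface. The following
is only a minimal sketch of that general signal-processing idea, assuming a Haar wavelet,
a fixed window length, and an arbitrary energy threshold; it does not reproduce the
authors' processing pipeline or their neural-network texture classifier.

```python
import numpy as np

def haar_detail(signal):
    """One-level Haar DWT: return the detail (high-pass) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:            # drop a trailing sample so pairs line up
        s = s[:-1]
    return (s[0::2] - s[1::2]) / np.sqrt(2.0)

def detect_slip(force_trace, window=64, threshold=0.05):
    """Return start indices of windows whose detail-coefficient energy
    exceeds the threshold (candidate stick-to-slip transitions)."""
    events = []
    for start in range(0, len(force_trace) - window + 1, window):
        d = haar_detail(force_trace[start:start + window])
        if np.sqrt(np.mean(d ** 2)) > threshold:   # RMS of detail coefficients
            events.append(start)
    return events

# usage: a quiet trace with a burst of high-frequency activity in the middle
trace = np.concatenate([np.zeros(256),
                        0.3 * np.random.randn(64),
                        np.zeros(256)])
print(detect_slip(trace))   # flags the window covering the burst
```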

PT J
AU Perrin, N
Stasse, O
Lamiraux, F
Yoshida, E
AF Perrin, Nicolas
Stasse, Olivier
Lamiraux, Florent
Yoshida, Eiichi
TI Humanoid motion generation and swept volumes: theoretical bounds for
safe steps
SO ADVANCED ROBOTICS
LA English
DT Article
DE humanoid robots; biped locomotion; collision checking; sensitivity
analysis; footstep planning
AB In the context of humanoid robot footstep planning based on a continuous action
set, we conduct an analysis of the sensitivity of a walking pattern generator.
Given a variation of an input vector, we calculate a bound on the variation of the
volume swept by the robot lower body during the corresponding actions (steps).
Since the input vector depends on real parameters, there is an infinite number of
possible steps, but the calculated bound permits a sound and safe use of only a
finite number of swept volumes to account for all the possible motions. After
numerical evaluations, we discuss potential applications.
C1 [Perrin, Nicolas] Ist Italiano Tecnol, Dept Adv Robot, I-16163 Genoa, Italy.
[Stasse, Olivier; Lamiraux, Florent] Univ Toulouse UPS, CNRS LAAS, INSA,
INP,ISAE, F-31077 Toulouse, France.
[Yoshida, Eiichi] UMI3218 CRT, CNRS AIST Joint Robot Lab, Tsukuba, Ibaraki,
Japan.
RP Perrin, N (reprint author), Ist Italiano Tecnol, Dept Adv Robot, Via Morego 30,
I-16163 Genoa, Italy.
EM nicolas.perrin@iit.it
RI Yoshida, Eiichi/M-3756-2016
OI Yoshida, Eiichi/0000-0002-3077-6964
FU RBLINK Project [ANR-08-JCJC-0075-01]
FX This work was partially supported by a grant from the RBLINK Project,
Contract ANR-08-JCJC-0075-01.
CR Ayaz Y., 2006, IEEE RSJ INT C INT R
Beatty M. F., 1986, PRINCIPLES ENG MECH
Bourgeot J-M, 2002, IEEE INT C INT ROB S, P2509
CHESNUTT J, 2005, INT C ROB AUT BARC S, P631
CHESTNUTT J, 2003, IEEE INT C HUM ROB H
Chestnutt J, 2010, IEEE INT C HUM ROB H
Chestnutt J, 2006, IEEE INT CONF ROBOT, P860, DOI 10.1109/ROBOT.2006.1641817
Chestnutt J, 2007, IEEE-RAS INT C HUMAN, P196, DOI 10.1109/ICHR.2007.4813868
Elmogy M, 2009, IEEE RSJ INT C INT R
Gutmann JS, 2005, 19TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
(IJCAI-05), P1232
Hasegawa T, 2003, 5 IEEE INT S ASS TAS
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kim YJ, 2003, 8 ACM S SOL MOD APPL, P11
Kuffner JJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P500, DOI
10.1109/IROS.2001.973406
Perrin N, 2012, IEEE T ROBOT, V28, P427, DOI 10.1109/TRO.2011.2172152
Perrin N, 2011, IEEE INT CONF ROBOT, P1270
Schwarzer F, 2002, 5 WORKSH ALG FDN ROB
Siciliano B, 2008, SPRINGER HDB ROBOTIC
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
YOSHIDA E, 2005, IEEE RAS INT C HUM R, P1
NR 20
TC 0
Z9 0
U1 0
U2 7
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PD OCT 1
PY 2013
VL 27
IS 14
BP 1045
EP 1058
DI 10.1080/01691864.2013.805468
PG 14
WC Robotics
SC Robotics
GA 200RZ
UT WOS:000323087900001
DA 2018-01-22
ER

PT J
AU Lengagne, S
Vaillant, J
Yoshida, E
Kheddar, A
AF Lengagne, Sebastien
Vaillant, Joris
Yoshida, Eiichi
Kheddar, Abderrahmane
TI Generation of whole-body optimal dynamic multi-contact motions
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE multi-contact motions; optimization; balance; Taylor series
approximation; time-interval discretization; humanoid robots
ID HUMANOID ROBOT; POINT; MANIPULATORS; ALGORITHMS; MOVEMENT
AB We propose a method to plan optimal whole-body dynamic motion in multi-contact
non-gaited transitions. Using a B-spline time parameterization for the active
joints, we turn the motion-planning problem into a semi-infinite programming
formulation that is solved by nonlinear optimization techniques. Our main
contribution lies in producing constraint-satisfaction guaranteed motions for any
time grid. Indeed, we use Taylor series expansion to approximate the dynamic and
kinematic models over fixed successive time intervals, transforming the problem
(constraints and cost functions) into time polynomials whose coefficients are functions
of the optimization variables. The evaluation of the constraints then turns into the
computation of extrema (over each time interval), which are given to the solver. We also
account for collision and self-collision constraints, which do not have a closed-form
expression over time. We address the problem of balance within the optimization and
demonstrate that generating whole-body multi-contact dynamic motion for complex tasks is
possible and can be tractable, although still time consuming. We thoroughly discuss the
planning of a sitting motion with the HRP-2 humanoid robot and assess our method on
several other complex scenarios.
C1 [Lengagne, Sebastien; Vaillant, Joris; Yoshida, Eiichi; Kheddar, Abderrahmane]
CNRS AIST Joint Robot Lab JRL, CRT UMI3218, Tsukuba, Ibaraki, Japan.
[Lengagne, Sebastien] Karlsruhe Inst Technol, Inst Anthropomat, Humanoids &
Intelligence Syst Lab, D-76131 Karlsruhe, Germany.
[Vaillant, Joris; Kheddar, Abderrahmane] Univ Montpellier 2, CNRS, LIRMM,
Interact Digital Human Grp, Montpellier, France.
RP Lengagne, S (reprint author), Karlsruhe Inst Technol, Inst Anthropomat,
Humanoids & Intelligence Syst Lab, Adenauerring 2, D-76131 Karlsruhe, Germany.
EM lengagne@gmail.com
RI Yoshida, Eiichi/M-3756-2016
OI Yoshida, Eiichi/0000-0002-3077-6964
FU German Research Foundation (DFG: Deutsche Forschungsgemeinschaft); Japan
Society for the Promotion of Science (JSPS) [P09809, 22300071];
RoboHow.Cog
FX This work is partially supported by grants from the German Research
Foundation (DFG: Deutsche Forschungsgemeinschaft), from the Japan
Society for the Promotion of Science (JSPS; Grant-in-Aid for JSPS
Fellows P09809 and for Scientific Research (B), 22300071, 2010), and
from the RoboHow.Cog FP7 (see http://www.robohow.eu).
CR Arisumi H, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P668, DOI
10.1109/IROS.2008.4651195
Benallegue M., 2009, IEEE INT C ROB AUT, P483
Berz M., 1998, Reliable Computing, V4, P83, DOI 10.1023/A:1009958918582
BOBROW JE, 1988, IEEE T ROBOTIC AUTOM, V4, P443, DOI 10.1109/56.811
Bouyarmane K., 2009, IEEE INT C ROB AUT
Bouyarmane K, 2012, ADV ROBOTICS, V26, P1099, DOI 10.1080/01691864.2012.686345
Breteler MDK, 2001, BIOL CYBERN, V85, P65
Cohen O, 2012, IEEE RAS EMBS INT C
de Alwis T, 2004, P 9 AS TECHN C MATH, P88
De Boor C., 1978, PRATICAL GUIDE SPLIN, V27
Diehl M, 2006, LECT NOTES CONTR INF, V340, P65
Escande A, 2007, IEEE RAS 7 INT C HUM
Escande A, 2008, SPRINGER TRACTS ADV, V54, P293
Escande A., 2009, IEEE RSJ INT C INT R
Escande A, 2006, 2006 IEEE/RSJ International Conference on Intelligent Robots
and Systems, Vols 1-12, P2974, DOI 10.1109/IROS.2006.282154
Gill P.R., 1982, PRACTICAL OPTIMIZATI
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
Harada K, 2006, IEEE T ROBOT, V22, P568, DOI 10.1109/TRO.2006.870649
Hauser K, 2005, IEEE-RAS INT C HUMAN, P7
Hauser K, 2010, INT J ROBOT RES, V29, P897, DOI 10.1177/0278364909352098
HETTICH R, 1993, SIAM REV, V35, P380, DOI 10.1137/1035089
Hirukawa H, 2006, IEEE INT CONF ROBOT, P1976, DOI 10.1109/ROBOT.2006.1641995
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Kajita S, 2005, IEEE INT CONF ROBOT, P616
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Khalil W., 2002, MODELING IDENTIFICAT
Lee SH, 2005, IEEE T ROBOT, V21, P657, DOI 10.1109/TRO.2004.842336
LEE SH, 2012, J AUTONOMOUS ROBOTS, V33, P399
Lee Y, 2012, IEEE RSJ INT C INT R
Lengagne S, 2010, IEEE RAS INT C HUM R
Lengagne S., 2011, IEEE INT C ROB BIOM
Lengagne S, 2012, IEEE RAS EMBS INT C
Lengagne S., 2011, WORKSH HUM SERV ROB
Lengagne S., 2010, IEEE RSJ INT C INT R
Lengagne S, 2011, IEEE T ROBOT, V27, P1095, DOI 10.1109/TRO.2011.2162998
Miossec S, 2006, 2006 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS,
VOLS 1-3, P299, DOI 10.1109/ROBIO.2006.340170
Mombaur KD, 2005, ZAMM-Z ANGEW MATH ME, V85, P499, DOI 10.1002/zamm.200310190
PARK FC, 1995, INT J ROBOT RES, V14, P609, DOI 10.1177/027836499501400606
Piazzi A, 2000, IEEE T IND ELECTRON, V47, P140, DOI 10.1109/41.824136
Piazzi A, 1998, INT J CONTROL, V71, P631, DOI 10.1080/002071798221713
Reemtsen R., 1998, NONCONVEX OPTIMIZATI
Righetti L, 2011, IEEE RAS INT C HUM R, P318
Schultz G, 2010, IEEE-ASME T MECH, V15, P783, DOI 10.1109/TMECH.2009.2035112
Sentis L, 2010, IEEE T ROBOT, V26, P483, DOI 10.1109/TRO.2010.2043757
Steinbach MC, 1997, TECHNICAL REPORT
Stephens BJ, 2010, IEEE INT C INT ROBOT, P1248, DOI 10.1109/IROS.2010.5648837
UNO Y, 1989, BIOL CYBERN, V61, P89
VONSTRYK O, 1993, INT S NUM M, P129
von Stryk O., 1998, ZAMM-Z ANGEW MATH ME, V78, P1117
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Wachter A, 2006, MATH PROGRAM, V106, P25, DOI 10.1007/s10107-004-0559-y
Zhang XY, 2007, ACM T GRAPHIC, V26, DOI 10.1145/1239451.1239466
NR 53
TC 36
Z9 36
U1 1
U2 13
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD AUG
PY 2013
VL 32
IS 9-10
SI SI
BP 1104
EP 1119
DI 10.1177/0278364913478990
PG 16
WC Robotics
SC Robotics
GA 217ZZ
UT WOS:000324403500008
DA 2018-01-22
ER
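
To make the abstract's trajectory representation concrete, a generic form of the B-spline
parameterization and of the interval-wise constraint treatment is sketched below
(standard notation assumed for illustration; the paper's exact formulation, including how
the Taylor remainder is handled, is not reproduced here):

```latex
q_j(t) \;=\; \sum_{i=1}^{N} p_{j,i}\, B_{i,k}(t)
\qquad\text{(joint $j$, with control points $p_{j,i}$ as optimization variables),}
```

and on each fixed interval $[t_a, t_b]$ a constraint $g(q(t)) \le 0$ for all $t$ is
replaced by

```latex
\max_{t \in [t_a,\, t_b]} \; \sum_{n=0}^{d} \frac{g^{(n)}(t_a)}{n!}\,(t - t_a)^{n} \;\le\; 0,
```

so the solver only receives extrema of low-degree time polynomials whose coefficients
depend on the control points, which is what allows constraint satisfaction to be enforced
independently of any particular time grid.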

PT J
AU Miura, K
Kanehiro, F
Kaneko, K
Kajita, S
Yokoi, K
AF Miura, Kanako
Kanehiro, Fumio
Kaneko, Kenji
Kajita, Shuuji
Yokoi, Kazuhito
TI Slip-Turn for Biped Robots
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Friction; humanoid; slip; turn
ID MANIPULATION; LOCOMOTION; FRICTION; WALKING
AB This paper presents a method for generating a turning motion in a humanoid robot
by allowing the feet of the robot to slip on the ground. As humans, we exploit the
fact that our feet can slip on the ground, and allowing humanoid robots to realize
this same motion is a worthwhile study. In this paper, we propose the hypothesis
that a turning motion is caused by the effect of minimizing the power generated by
floor friction. A model of rotation from this friction is then described based on
our hypothesis. The proposed model suggests that only the trajectory and shape of
the robot's feet determine the amount of rotation from a slip, and that the
friction coefficient between the floor and the soles of the robot, as well as the
velocity of the feet, do not affect the resultant angle. Verification is conducted
through an experiment with a humanoid robot known as HRP-2. Next, to obtain the
foot motion necessary to realize the desired rotational angle, the inverse problem
is solved by confining the trajectory of the center of the foot to an arc shape.
This solution is verified through an experiment with another humanoid robot, HRP-
4C.
C1 [Miura, Kanako; Kanehiro, Fumio; Kaneko, Kenji; Kajita, Shuuji; Yokoi, Kazuhito]
Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst, Tsukuba, Ibaraki
3058568, Japan.
RP Miura, K (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res
Inst, Tsukuba, Ibaraki 3058568, Japan.
EM kanako.miura@aist.go.jp; f-kanehiro@aist.go.jp; k.kaneko@aist.go.jp;
s.kajita@aist.go.jp; Kazuhito.Yokoi@aist.go.jp
RI Kanehiro, Fumio/L-8660-2016; KANEKO, Kenji/M-5360-2016; Kajita,
Shuuji/M-5010-2016; Yokoi, Kazuhito/K-2046-2012
OI Kanehiro, Fumio/0000-0002-0277-3467; KANEKO, Kenji/0000-0002-1888-8787;
Kajita, Shuuji/0000-0001-8188-2209; Yokoi, Kazuhito/0000-0003-3942-2027
CR Boone GN, 1997, AUTON ROBOT, V4, P259, DOI 10.1023/A:1008891909459
BOONE GN, 1995, IROS '95 - 1995 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT
ROBOTS AND SYSTEMS: HUMAN ROBOT INTERACTION AND COOPERATIVE ROBOTS, PROCEEDINGS,
VOL 3, P158, DOI 10.1109/IROS.1995.525878
Brady RA, 2000, J BIOMECH, V33, P803, DOI 10.1016/S0021-9290(00)00037-3
Hanson JP, 1999, ERGONOMICS, V42, P1619, DOI 10.1080/001401399184712
Harada K, 2006, J DYN SYST-T ASME, V128, P422, DOI 10.1115/1.2199857
Hashimoto K., 2011, P IEEE INT C ROB AUT, P2041
Kajita S, 2004, P IEEE RSJ INT C INT, P3546
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kaneko K., 2005, P IEEE RSJ INT C INT, P1457
Momoi T., 2006, P IEEE PESC, p560/1
Koeda M., 2011, P 8 AS CONTR C, P1346
Koeda M., 2007, P 25 ANN C ROB SOC J, p3H15
Koeda M., 2009, P 12 INT C CLIMB WAL, P1007
Koeda M., 2011, P IEEE INT C ROB AUT, P593
Lim HO, 2008, P ROY SOC A-MATH PHY, V464, P273, DOI 10.1098/rspa.2007.1908
LYNCH KM, 1993, P IEEE RSJ INT C INT, P186
Bartels T., 2010, IND TRIBOLOGY TRIBOS
Miura K., 2010, P 13 INT C CLIMB WAL, P761
MIURA K, 2012, P IEEE INT C ROB AUT, P3527
Miura K, 2010, IEEE INT CONF ROBOT, P4249, DOI 10.1109/ROBOT.2010.5509520
Miura K, 2008, IEEE-RAS INT C HUMAN, P279, DOI 10.1109/ICHR.2008.4755965
Nakagawa Y., 2004, THESIS TOKAI U KUMAM
Ott C., 2011, P IEEE RAS INT C HUM, P26
Park JH, 2001, IEEE INT CONF ROBOT, P4134, DOI 10.1109/ROBOT.2001.933264
REDFERN MS, 1994, ERGONOMICS, V37, P511, DOI 10.1080/00140139408963667
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Yoshida E, 2008, IEEE INT CONF ROBOT, P3181
Yoshida E, 2010, AUTON ROBOT, V28, P77, DOI 10.1007/s10514-009-9143-x
Zhu C., 2006, P IEEE RSJ INT C INT, P5762
Nishikawa M., 2010, Japanese Patent, Patent No. [4 508 681, 4508681]
Takahashi H., 1999, Japanese Patent, Patent No. [2 911 985, 2911985]
NR 31
TC 7
Z9 7
U1 1
U2 32
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2013
VL 29
IS 4
BP 875
EP 887
DI 10.1109/TRO.2013.2257574
PG 13
WC Robotics
SC Robotics
GA 197FZ
UT WOS:000322836600006
DA 2018-01-22
ER

PT J
AU Satoh, S
Fujimoto, K
Hyon, SH
AF Satoh, Satoshi
Fujimoto, Kenji
Hyon, Sang-Ho
TI Gait generation via unified learning optimal control of Hamiltonian
systems
SO ROBOTICA
LA English
DT Article
DE Gait generation; Biped robots; Repetitive control; Iterative learning
control; Hamiltonian systems
ID PASSIVE DYNAMIC WALKING; HUMANOID ROBOT
AB This paper proposes a repetitive-control-type optimal gait generation framework
based on learning control and parameter tuning. We propose a learning optimal control
method for Hamiltonian systems that unifies iterative learning control (ILC) and
iterative feedback tuning (IFT). It allows one to simultaneously obtain an optimal
feedforward input and an optimal tuning parameter for a plant system, minimizing a
given cost function. In the proposed method, a virtual constraint imposed by a
potential energy prevents the biped robot from falling. The strength of this constraint
is automatically mitigated by the IFT part of the method as trajectory learning by the
ILC part progresses.
C1 [Satoh, Satoshi] Hiroshima Univ, Fac Engn, Div Mech Syst & Appl Mech,
Higashihiroshima 7398527, Japan.
[Fujimoto, Kenji] Kyoto Univ, Grad Sch Engn, Dept Aerosp Engn, Sakyo Ku, Kyoto
6068501, Japan.
[Hyon, Sang-Ho] Ritsumeikan Univ, Dept Robot, Kusatsu, Shiga 5258577, Japan.
RP Satoh, S (reprint author), Hiroshima Univ, Fac Engn, Div Mech Syst & Appl Mech,
1-4-1 Kagamiyama, Higashihiroshima 7398527, Japan.
EM s.satoh@ieee.org
RI satoh, satoshi/D-8422-2012
OI satoh, satoshi/0000-0003-4017-5123
FU JSPS [22860041]
FX This work was supported in part by JSPS Grant-in-Aid for Research
Activity Start-up (No. 22860041). The authors thank Dr Fumihiko Asano
and Dr Masaki Yamakita for their helpful comments and discussions on
this work.
CR ARIMOTO S, 1984, J ROBOTIC SYST, V1, P123, DOI 10.1002/rob.4620010203
Asano F, 2005, IEEE T ROBOT, V21, P754, DOI 10.1109/TRO.2005.847610
Asano F, 2004, IEEE T ROBOTIC AUTOM, V20, P565, DOI 10.1109/TRA.2004.824685
Bruyne F. D., 1997, P 36 IEEE C DEC CONT, V4, P3749
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Crouch P. E., 1987, LECT NOTES CONTROL I, V101
Endo G, 2008, INT J ROBOT RES, V27, P213, DOI 10.1177/0278364907084980
Fujimoto K, 2003, IEEE T AUTOMAT CONTR, V48, P1756, DOI 10.1109/TAC.2003.817908
Fujimoto K, 2001, SYST CONTROL LETT, V42, P217, DOI 10.1016/S0167-6911(00)00091-
8
Fujimoto K, 2008, P 17 IFAC WORLD C, P15678
Garcia M, 1998, J BIOMECH ENG-T ASME, V120, P281, DOI 10.1115/1.2798313
Goswami A, 1997, AUTON ROBOT, V4, P273, DOI 10.1023/A:1008844026298
Gregorio P, 1997, IEEE T SYST MAN CY B, V27, P626, DOI 10.1109/3477.604106
Grizzle JW, 2001, IEEE T AUTOMAT CONTR, V46, P51, DOI 10.1109/9.898695
HARA S, 1988, IEEE T AUTOMAT CONTR, V33, P659, DOI 10.1109/9.1274
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hjalmarsson H, 2002, INT J ADAPT CONTROL, V16, P373, DOI 10.1002/acs.714
Hyon S.-H., 2005, P IEEE INT C ROB AUT, P1455
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kappen HJ, 2007, AIP CONF PROC, V887, P149
Maschke B.M., 1992, P IFAC S NOLCOS BORD, P282
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
Morimoto J., 2002, P 5 INT C CLIMB WALK, P453
Osuka K., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE International
Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065),
P3052, DOI 10.1109/ROBOT.2000.846491
Sano A., 2003, P IEEE INT C ROB AUT, P2478
Satoh S., 2008, P 17 IFAC WORLD C, P1729
Satoh S., 2011, J ROBOT SOC JAPAN, V29, P90
Satoh S., 2009, P ICROS SICE INT JOI, P1244
Satoh S., 2010, THESIS NAGOYA U AICH
Satoh S., 2011, P 18 IFAC WORLD C, P6912
Satoh S, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P3426, DOI 10.1109/IROS.2008.4650860
Satoh S, 2008, IEEE DECIS CONTR P, P4951, DOI 10.1109/CDC.2008.4738733
Satoh S, 2010, IEEE DECIS CONTR P, P1423, DOI 10.1109/CDC.2010.5718033
Spong M. W., 1999, Proceedings of the 14th World Congress. International
Federation of Automatic Control, P19
Takanishi A., 1985, Proceedings of '85 International Conference on Advanced
Robotics, P459
Tedrake R., 2004, P IEEE INT C INT ROB, P2849
Theodorou E., 2010, J MACH LEARN RES, V11, P3153
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
NR 38
TC 2
Z9 3
U1 1
U2 12
PU CAMBRIDGE UNIV PRESS
PI NEW YORK
PA 32 AVENUE OF THE AMERICAS, NEW YORK, NY 10013-2473 USA
SN 0263-5747
J9 ROBOTICA
JI Robotica
PD AUG
PY 2013
VL 31
BP 717
EP 732
DI 10.1017/S0263574712000756
PN 5
PG 16
WC Robotics
SC Robotics
GA 186IR
UT WOS:000322037100004
DA 2018-01-22
ER
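
For orientation, the iterative learning control (ILC) component named in the abstract can
be contrasted with the classical fixed-gain form; the simple P-type update below is the
textbook ILC law, shown only as a reference point, and is not the paper's learning
optimal control law for Hamiltonian systems:

```latex
u_{k+1}(t) \;=\; u_k(t) \;+\; \Gamma\, e_k(t),
```

where $u_k$ is the feedforward input applied on trial $k$ and $e_k$ the resulting
tracking error. In the framework summarized above, the feedforward input and a feedback
tuning parameter are instead updated jointly so as to minimize a given cost function, and
the gain of the fall-preventing virtual potential constraint is automatically relaxed as
trajectory learning progresses.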

PT J
AU Obata, S
AF Obata, Shuji
TI A Muscle Motion Solenoid Actuator
SO ELECTRICAL ENGINEERING IN JAPAN
LA English
DT Article
DE solenoid; actuator; humanoid robot; development; theoretical calculation
AB It is one of our dreams to mechanically restore lost body capabilities to people
with physical impairments. Realistic humanoid robots composed of such machines require
muscle-motion actuators that are controlled entirely through pulling actions. In
particular, antagonistic pairs of bi-articular muscles are very important in animal
motion. A system of actuators is proposed that uses the electromagnetic force of
solenoids, with a stroke length of over 10 cm and a strength of about 20 N, which are
needed to move a real human arm. The devised actuators build on recently developed
electromagnetic materials; traditional materials cannot offer such a possibility. The
composite actuators are controlled by a high-capability computer and software to perform
genuine motions. (c) 2013 Wiley Periodicals, Inc. Electr Eng Jpn, 184(2): 10-19, 2013;
Published online in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/eej.22369
C1 Tokyo Denki Univ, Tokyo, Japan.
RP Obata, S (reprint author), Tokyo Denki Univ, Tokyo, Japan.
CR Geeplus Europe Limited, TUB SOL GEN INF
Kajita S., 2005, HUMANOID ROBOTS
Kawashima K, 2004, IEEE INT CONF ROBOT, P4937, DOI 10.1109/ROBOT.2004.1302500
Kumamoto M, 2006, ARTIFICIAL MUSCLE AC, P30
Matsui H, 2011, 14 M IEEJ TOK SECT, P7
Obata S., 2010, INT S ISAB2010, P159
Obata S., 2008, MECH 7 FRANC JAP C F, P138
Obata S, 2010, INT S MECHATRONICS 2
Yatchev I., 2010, WORLD ACAD SCI ENG T, V66, P1591
[Anonymous], 2010, ADV ROBOTICS, V24
NR 10
TC 0
Z9 0
U1 0
U2 16
PU WILEY-BLACKWELL
PI HOBOKEN
PA 111 RIVER ST, HOBOKEN 07030-5774, NJ USA
SN 0424-7760
EI 1520-6416
J9 ELECTR ENG JPN
JI Electr. Eng. Jpn.
PD JUL 30
PY 2013
VL 184
IS 2
BP 10
EP 19
DI 10.1002/eej.22369
PG 10
WC Engineering, Electrical & Electronic
SC Engineering
GA 132BI
UT WOS:000318042300002
DA 2018-01-22
ER

PT J
AU Sung, C
Kagawa, T
Uno, Y
AF Sung, ChangHyun
Kagawa, Takahiro
Uno, Yoji
TI Whole-Body Motion Planning for Humanoid Robots by Specifying Via-Points
Regular Paper
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Decentralized Control; High-Order Neural Networks; Extended Kalman
Filter; Backstepping
ID GENERATION; MODEL
AB We present a framework for whole-body motion planning for humanoid robots. Motion
planning under various constraints is essential to accomplishing a task. In this
research, we propose a motion planning method that accommodates the various conditions
required to achieve the task. We specify via-points to handle the conditions for target
achievement under the various constraints. Together with constraints including task
accomplishment, the via-point representation plays a crucial role in the optimization
process of our method. Furthermore, the via-points, as optimization parameters, are
related to physical conditions of the task. We applied this method to generate a kicking
motion for the humanoid robot HOAP-3. We confirmed that the robot was able to complete
the task of kicking a ball over an obstacle into a goal, even when the location of the
ball was changed. These results show that the proposed motion planning method using the
via-point representation can increase the articulation of the motion.
C1 [Sung, ChangHyun; Kagawa, Takahiro; Uno, Yoji] Nagoya Univ, Dept Mech Sci &
Engn, Grad Sch Engn, Nagoya, Aichi 4648601, Japan.
RP Sung, C (reprint author), Nagoya Univ, Dept Mech Sci & Engn, Grad Sch Engn,
Nagoya, Aichi 4648601, Japan.
EM s_changhyun@nuem.nagoya-u.ac.jp
FU [21300092]; [23560526]
FX The authors would like to thank Associate Prof. Kouichi Taji for his
helpful suggestions. This study was supported by a Grant-in-Aid for
Scientific Research (B) No.21300092 and (C) No.23560526.
CR Choi JY, 2005, IEEE INT CONF ROBOT, P2834
FLASH T, 1985, J NEUROSCI, V5, P1688
Guarino Lo Bianco C, 2002, INT J CONTROL, V75, P967, DOI
10.1080/00207170210156161
HETTICH R, 1993, SIAM REV, V35, P380, DOI 10.1137/1035089
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kudoh S., 2002, INT C INT ROB SYST I, P2563
LEE ETY, 1982, COMPUTING, V29, P365, DOI 10.1007/BF02246763
LEE SH, 2009, HUMANOID ROBOTS, P167
Lengagne S, 2011, IEEE T ROBOT, V27, P1095, DOI 10.1109/TRO.2011.2162998
Lengagne S, 2010, IEEE INT C INT ROBOT, P698, DOI 10.1109/IROS.2010.5649233
Miossec S., 2006, IEEE INT C ROB BIOM, P17
Miyamoto H, 1996, NEURAL NETWORKS, V9, P1281, DOI 10.1016/S0893-6080(96)00043-3
Murray RM, 1994, MATH INTRO ROBOTIC M
Piao SH, 2011, INT J ADV ROBOT SYST, V8, P22
Piazzi A, 2000, IEEE T IND ELECTRON, V47, P140, DOI 10.1109/41.824136
Sugiyama T, 2002, INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, VOLS I AND
II, PROCEEDINGS, P1404, DOI 10.1109/CIE.2002.1186269
Sung C-H., 2011, IEEE INT C UB ROB AM, P362
UNO Y, 1989, BIOL CYBERN, V61, P89
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Xu Y, 2010, LECT NOTES ARTIF INT, V6359, P392
NR 22
TC 4
Z9 4
U1 0
U2 10
PU SAGE PUBLICATIONS INC
PI THOUSAND OAKS
PA 2455 TELLER RD, THOUSAND OAKS, CA 91320 USA
SN 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD JUL 26
PY 2013
VL 10
AR 300
DI 10.5772/56747
PG 12
WC Robotics
SC Robotics
GA 190AI
UT WOS:000322309000001
OA gold
DA 2018-01-22
ER
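
As a concrete reference point for the via-point representation in the abstract above, one
classical way to generate a smooth segment between two via-points is the minimum-jerk
polynomial of Flash and Hogan (1985), which appears in the record's reference list (shown
here as a generic illustration; the paper's own trajectory representation and
optimization are more general):

```latex
x(t) \;=\; x_0 + (x_f - x_0)\left(10\tau^{3} - 15\tau^{4} + 6\tau^{5}\right),
\qquad \tau = t/T \in [0, 1],
```

which starts and ends with zero velocity and acceleration. Stitching such segments
through optimized via-points is one simple way to obtain whole-body joint trajectories
whose boundary and intermediate conditions encode the task constraints (ball position,
obstacle clearance, and so on).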

PT J
AU Shimoda, S
Yoshihara, Y
Kimura, H
AF Shimoda, Shingo
Yoshihara, Yuki
Kimura, Hidenori
TI Adaptability of Tacit Learning in Bipedal Locomotion
SO IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT
LA English
DT Article
DE Biological control; bipedal walking; compound control; unsupervised
learning
ID COGNITIVE DEVELOPMENTAL ROBOTICS; MODEL
AB The capability of adapting to unknown environmental situations is one of the
most salient features of biological regulations. This capability is ascribed to the
learning mechanisms of biological regulatory systems that are totally different
from the current artificial machine-learning paradigm. We consider that all
computations in biological regulatory systems result from the spatial and temporal
integration of simple and homogeneous computational media, such as the activities of
neurons in the brain and protein-protein interactions in intracellular regulation.
Adaptation is the outcome of the local activities of the distributed computational
media. To investigate the learning mechanism behind this computational scheme, we
proposed a learning method that embodies these features of biological systems, termed
tacit learning. In this paper, we elaborate this notion further and apply it to the
bipedal locomotion of a 36-DOF humanoid robot in order to discuss the adaptation
capability of tacit learning compared with that of conventional control architectures
and that of human beings. Experiments on walking revealed a
remarkably high adaptation capability of tacit learning in terms of gait
generation, power consumption and robustness.
C1 [Shimoda, Shingo; Yoshihara, Yuki; Kimura, Hidenori] RIKEN BSI Toyota Collaborat
Ctr, Moriyama Ku, Nagoya, Aichi 4630003, Japan.
RP Shimoda, S (reprint author), RIKEN BSI Toyota Collaborat Ctr, Moriyama Ku,
Nagoya, Aichi 4630003, Japan.
EM shimoda@brain.riken.jp
FU Toyota Motor Company
FX The authors would like to thank the Toyota Motor Company for their
technical and financial assistance.
CR Asada M, 2001, ROBOT AUTON SYST, V37, P185, DOI 10.1016/S0921-8890(01)00157-9
Asada M, 2009, IEEE T AUTON MENT DE, V1, P12, DOI 10.1109/TAMD.2009.2021702
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Brooks R. A., IEEE J ROBOT AUTOM, V2, P12
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Doya K, 2000, NEURAL COMPUT, V12, P219, DOI 10.1162/089976600300015961
Hebb D. O., 1949, ORG BEHAV
Hikita M., 2008, P INT C DEV LEARN
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Holmberg H, 1997, J NEUROSCI, V17, P2071
Ikemoto T., 2008, P IEEE RSJ INT C HUM
ITO M, 1982, NEUROSCI LETT, V33, P253, DOI 10.1016/0304-3940(82)90380-9
Janeway Jr C. A, 2001, IMMUNOBIOLOGY IMMUNE
Juang JG, 2000, IEEE T SYST MAN CY B, V30, P594, DOI 10.1109/3477.865178
KAWATO M, 1987, BIOL CYBERN, V57, P169, DOI 10.1007/BF00364149
Kimura H, 2007, J THEOR BIOL, V248, P590, DOI 10.1016/j.jtbi.2007.06.017
Kun A, 1996, IEEE INT CONF ROBOT, P240, DOI 10.1109/ROBOT.1996.503784
Kuniyoshi Y, 2006, BIOL CYBERN, V95, P589, DOI 10.1007/s00422-006-0127-z
Matsubara J. Morimoto, 2006, ROBOT AUTON SYST, V54, P911
McCulloch Warren S., 1943, BULL MATH BIOPHYS, V5, P115, DOI 10.1007/BF02478259
Minato T., 2007, P IEEE RSJ INT C HUM
Miura A., 2010, P 2010 IEEE INT C RO, P5076
Shimoda Shingo, 2008, SICE Journal of Control, Measurement, and System
Integration, V1, P275
Shimoda S., 2010, P 2010 IEEE RSJ INT
Shimoda S, 2010, IEEE T SYST MAN CY B, V40, P77, DOI 10.1109/TSMCB.2009.2014470
Tanaka RJ, 2008, J THEOR BIOL, V251, P363, DOI 10.1016/j.jtbi.2007.11.023
Tanaka RJ, 2006, BIOPHYS J, V91, P1235, DOI 10.1529/biophysj.106.081828
Wang HF, 2009, IEEE T SYST MAN CY B, V39, P1348, DOI 10.1109/TSMCB.2009.2015281
Weng JY, 2001, SCIENCE, V291, P599, DOI 10.1126/science.291.5504.599
ZADEH LA, 1973, IEEE T SYST MAN CYB, VSMC3, P28, DOI 10.1109/TSMC.1973.5408575
Zehr EP, 2006, J APPL PHYSIOL, V101, P1783, DOI 10.1152/japplphysiol.00540.2006
NR 31
TC 9
Z9 9
U1 0
U2 8
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1943-0604
J9 IEEE T AUTON MENT DE
JI IEEE Trans. Auton. Ment. Dev.
PD JUN
PY 2013
VL 5
IS 2
BP 152
EP 161
DI 10.1109/TAMD.2013.2248007
PG 10
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA 166TJ
UT WOS:000320580500004
DA 2018-01-22
ER

PT J
AU Matsubara, T
Shinohara, D
Kidode, M
AF Matsubara, Takamitsu
Shinohara, Daisuke
Kidode, Masatsugu
TI Reinforcement learning of a motor skill for wearing a T-shirt using
topology coordinates
SO ADVANCED ROBOTICS
LA English
DT Article
DE motor skill learning; T-shirt wearing; topology coordinates;
reinforcement learning
ID POLICY GRADIENT-METHOD; BIPED LOCOMOTION; HUMANOID ROBOT; ACQUISITION;
TASKS
AB This article focuses on learning motor skills for anthropomorphic robots that
must interact with non-rigid materials to perform such tasks as wearing clothes,
turning socks inside out, and applying bandages. We propose a novel reinforcement
learning framework for learning motor skills that interact with non-rigid
materials. Our learning framework focuses on the topological relationship between
the configuration of the robot and the non-rigid material based on the
consideration that most details of the material (e.g. wrinkles) are not important
for performing the motor tasks. This focus allows us to define the task performance
and provide reward signals based on a low-dimensional variable, i.e. topology
coordinates, in a real environment using reliable sensors. We constructed an
experimental setting with an anthropomorphic dual-arm robot and a tailor-made T-
shirt for it. To demonstrate the feasibility of our framework, a robot performed a
T-shirt wearing task, whose goal was to put both of its arms into the corresponding
sleeves of the T-shirt. The robot acquired sequential movements that put both of
its arms into the T-shirt.
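As a rough illustration of the low-dimensional topology coordinates mentioned in this abstract, the sketch below computes a discretized Gauss linking integral between two 3-D polylines (for instance, the arm's centre line and the rim of a sleeve); the midpoint discretization, the function name and its use as a reward signal are assumptions for this example, not details taken from the paper.

    import numpy as np

    def winding_coordinate(curve_a, curve_b):
        # Discretized Gauss linking integral between two 3-D polylines
        # (midpoint rule).  curve_a: (N, 3) array, curve_b: (M, 3) array;
        # the curves are assumed not to intersect.  The value grows as one
        # curve winds around the other, giving a single scalar that could
        # serve as a topology-style reward signal.
        seg_a = np.diff(curve_a, axis=0)            # segment vectors of curve A
        seg_b = np.diff(curve_b, axis=0)            # segment vectors of curve B
        mid_a = 0.5 * (curve_a[:-1] + curve_a[1:])  # segment midpoints of A
        mid_b = 0.5 * (curve_b[:-1] + curve_b[1:])  # segment midpoints of B

        total = 0.0
        for da, ra in zip(seg_a, mid_a):
            diff = ra - mid_b                       # from B-midpoints to this A-midpoint
            dist3 = np.linalg.norm(diff, axis=1) ** 3
            cross = np.cross(da, seg_b)             # da x db for every B segment
            total += np.sum(np.einsum('ij,ij->i', cross, diff) / dist3)
        return total / (4.0 * np.pi)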
C1 [Matsubara, Takamitsu; Shinohara, Daisuke; Kidode, Masatsugu] Nara Inst Sci &
Technol, Grad Sch Informat Sci, Nara 6300192, Japan.
RP Matsubara, T (reprint author), Nara Inst Sci & Technol, Grad Sch Informat Sci,
8916-5 Takayama Cho, Nara 6300192, Japan.
EM takam-m@is.naist.jp
CR Balaguer B, 2011, IEEE INT C INT ROBOT, P1405, DOI 10.1109/IROS.2011.6048669
Bentivegna DC, 2004, ROBOT AUTON SYST, V47, P163, DOI
10.1016/j.robot.2004.03.010
Endo G, 2008, INT J ROBOT RES, V27, P213, DOI 10.1177/0278364907084980
Ho ESL, 2009, COMPUT GRAPH FORUM, V28, P299, DOI 10.1111/j.1467-
8659.2009.01369.x
KITA Y, 2009, P IEEE INT C ROB AUT, P1220
Konda VR, 2003, SIAM J CONTROL OPTIM, V42, P1143, DOI 10.1137/S0363012901385691
Maitin-Shepard J, 2010, IEEE INT CONF ROBOT, P2308, DOI
10.1109/ROBOT.2010.5509439
Matsubara T, 2006, ROBOT AUTON SYST, V54, P911, DOI 10.1016/j.robot.2006.05.012
Matsubara T, 2008, ADV ROBOTICS, V22, P1125, DOI 10.1163/156855308X324785
Miyamoto H, 1996, NEURAL NETWORKS, V9, P1281, DOI 10.1016/S0893-6080(96)00043-3
Morimoto J, 2001, ROBOT AUTON SYST, V36, P37, DOI 10.1016/S0921-8890(01)00113-0
Nakaoka S, 2004, IEEE INT CONF ROBOT, P610, DOI 10.1109/ROBOT.2004.1307216
Peters J, 2008, NEURAL NETWORKS, V21, P682, DOI 10.1016/j.neunet.2008.02.003
Pohl W, 1968, J MATH MECH, V17, P875
Sutton RS, 2000, ADV NEUR IN, V12, P1057
Sutton RS, 1998, REINFORCEMENT LEARNI
TEDRAKE R, 2004, P IEEE INT C INT ROB, V3, P2849
Ueda J, 2004, ADV ROBOTICS, V18, P101, DOI 10.1163/156855304322753326
WADA Y, 1995, BIOL CYBERN, V73, P3, DOI 10.1007/BF00199051
Yamazaki Kimitoshi, 2009, P IAPR C MACH VIS AP, P366
NR 20
TC 6
Z9 6
U1 1
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PD MAY 1
PY 2013
VL 27
IS 7
SI SI
BP 513
EP 524
DI 10.1080/01691864.2013.777012
PG 12
WC Robotics
SC Robotics
GA 122BA
UT WOS:000317288700004
DA 2018-01-22
ER

PT J
AU Hashimoto, K
Takezaki, Y
Lim, HO
Takanishi, A
AF Hashimoto, Kenji
Takezaki, Yuki
Lim, Hun-Ok
Takanishi, Atsuo
TI Walking stabilization based on gait analysis for biped humanoid robot
SO ADVANCED ROBOTICS
LA English
DT Article
DE humanoid; biped robot; gait analysis; walking stabilization
ID REALIZATION; SURFACE; BALANCE
AB This study describes a biped walking stabilization based on gait analysis for a
humanoid robot. So far, we have developed a humanoid robot as a human motion
simulator which can quantitatively evaluate welfare and rehabilitation instruments
instead of human subjects. In our past research, however, although the robot's
walking motion looked human-like, the walking stabilization control was not based
on gait analysis. To use a humanoid robot as a human motion simulator, not only the
mechanisms but also the stabilizer should be designed on the basis of human data.
There are, of course, many studies on gait analysis in the field of neuroscience,
but most of them are not modeled concretely enough to be implemented on humanoid
robots. Therefore, we first conducted a gait analysis in this study and obtained
the following two findings: (i) the foot-landing point lies on the line joining the
stance leg and the projected point of the center of mass on the ground, and (ii)
the distance between steps is modified to keep the mechanical energy at landing
within a certain value. A walking stabilization control is designed based on this
gait analysis. The proposed control is verified through experiments
with a human-sized humanoid robot WABIAN-2R.
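The two findings above can be read as a simple landing rule; the sketch below is an assumed formalization (the function name, the energy measure and the constants are illustrative and not taken from the paper): the swing foot lands on the stance-foot-to-projected-CoM line, and the step is shortened whenever the mechanical energy carried into landing would exceed a limit.

    import numpy as np

    def plan_landing(stance_foot, com_pos, com_vel,
                     mass=60.0, g=9.81, nominal_step=0.30, energy_limit=800.0):
        # Finding (i): the landing point lies on the line joining the stance
        # foot and the ground projection of the center of mass (CoM).
        com_ground = np.array([com_pos[0], com_pos[1], 0.0])
        direction = com_ground - stance_foot
        direction = direction / np.linalg.norm(direction)

        # Finding (ii): shorten the step so the mechanical energy at landing
        # (kinetic plus potential energy of the CoM, an assumed measure)
        # stays within the limit.
        energy = 0.5 * mass * float(np.dot(com_vel, com_vel)) + mass * g * com_pos[2]
        step = nominal_step * min(1.0, energy_limit / energy)
        return stance_foot + step * direction

    # Example: stance foot at the origin, CoM 0.8 m high and slightly ahead.
    foot = plan_landing(np.zeros(3), np.array([0.10, 0.05, 0.80]),
                        np.array([0.4, 0.0, 0.0]))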
C1 [Hashimoto, Kenji; Takezaki, Yuki] Waseda Univ, Grad Sch Creat Sci & Engn,
Shinjuku Ku, Tokyo 1620044, Japan.
[Lim, Hun-Ok] Kanagawa Univ, Dept Mech Engn, Kanagawa Ku, Yokohama, Kanagawa
2218686, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Shinjuku Ku, Tokyo
1628480, Japan.
[Lim, Hun-Ok; Takanishi, Atsuo] Waseda Univ, HRI, Shinjuku Ku, Tokyo 1628480,
Japan.
RP Hashimoto, K (reprint author), Waseda Univ, Grad Sch Creat Sci & Engn, Shinjuku
Ku, 41-304,17 Kikui Cho, Tokyo 1620044, Japan.
EM k-hashimoto@takanishi.mech.waseda.ac.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766
FU RoboSoM project from the European FP7 program [248366]; MEXT/JSPS
KAKENHI [24360099]; Global COE, MEXT, Japan; SolidWorks Japan K.K.;
DYDEN Corporation
FX This study was conducted as part of the Research Institute for Science
and Engineering, Waseda University, and as part of the humanoid project
at the Humanoid Robotics Institute, Waseda University. It was also
supported in part by RoboSoM project from the European FP7 program
(Grant agreement No. 248366), MEXT/JSPS KAKENHI Grant Number (24360099),
Global COE Program 'Global Robot Academia', MEXT, Japan, SolidWorks
Japan K. K., and DYDEN Corporation, whom we thank for their financial
and technical support. The high-performance physical modeling and
simulation software MapleSim used in our research was provided by
Cybernet Systems Co., Ltd. (Vendor: Waterloo Maple Inc.).
CR Abdallah M, 2005, IEEE INT CONF ROBOT, P1996
Borghese NA, 1996, J PHYSIOL-LONDON, V494, P863, DOI
10.1113/jphysiol.1996.sp021539
Cho BK, 2008, IEEE-RAS INT C HUMAN, P299, DOI 10.1109/ICHR.2008.4755968
Guihard M, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2587, DOI 10.1109/IRDS.2002.1041660
Hashimoto K., 2010, P 2010 IEEE RSJ INT, P2206
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
HORAK FB, 1986, J NEUROPHYSIOL, V55, P1369
Hyon S. -H., 2009, P IEEE INT C ROB AUT, P1549
Kajita S., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P789, DOI 10.1109/IROS.1990.262497
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S, 2010, P IEEE RSJ INT C INT, P4489, DOI DOI 10.1109/IROS.2010.5651082
Kanamiya Y, 2010, IEEE INT CONF ROBOT, P3446, DOI 10.1109/ROBOT.2010.5509785
Kang HJ, 2010, IEEE INT CONF ROBOT, P5167, DOI 10.1109/ROBOT.2010.5509348
Kapandji I. A., 1988, PHYSL JOINTS LOWER L, V2
Kim JH, 2004, IEEE INT CONF ROBOT, P623
Kondo H., 2008, P 2 IEEE RAS EMBS IN, P724
Kouchi M., 2005, AIST RES INFORM DATA
Morisawa M, 2010, IEEE INT C INT ROBOT, P3150, DOI 10.1109/IROS.2010.5651595
Nishiwaki K., 2010, P IEEE INT C ROB AUT, P4230
Ogura Y., 2006, P 1 IEEE RAS EMBS IN, P835
Ogura Y., 2006, P IEEE RSJ INT C INT, P3976
Ogura Y, 2006, IEEE INT CONF ROBOT, P76
Perry J, 1992, GAIT ANAL NORMAL PAT
POZZO T, 1990, EXP BRAIN RES, V82, P97
Pratt JE, 2007, IEEE INT CONF ROBOT, P4653, DOI 10.1109/ROBOT.2007.364196
SHUMWAY-COOK A, 1989, Seminars in Hearing, V10, P196
Stephens B. J., 2010, P 2010 IEEE RAS INT, P52
Sugahara Y, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P595
Sugihara T, 2012, IEEE INT C INT ROBOT, P1892, DOI 10.1109/IROS.2012.6385699
Takanishi A., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P795, DOI 10.1109/IROS.1990.262498
NR 30
TC 9
Z9 9
U1 1
U2 30
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD MAY 1
PY 2013
VL 27
IS 7
SI SI
BP 541
EP 551
DI 10.1080/01691864.2013.777015
PG 11
WC Robotics
SC Robotics
GA 122BA
UT WOS:000317288700006
DA 2018-01-22
ER

PT J
AU Escande, A
Kheddar, A
Miossec, S
AF Escande, Adrien
Kheddar, Abderrahmane
Miossec, Sylvain
TI Planning contact points for humanoid robots
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Multi-contact planning; Humanoid robot; Posture optimization
ID GENERATION
AB We present a planner for underactuated hyper-redundant robots, such as humanoid
robots, for which the movement can only be initiated by taking contacts with the
environment. We synthesize our previous work on the subject and go further into
details to give an in-depth description of the contact planning problem and the
mechanisms of our contact planner. We highlight the structure of the problem with a
simple example, present the contact space and the heuristics we use for planning,
and explain thoroughly the implementation of the different parts of the planner.
Finally we give examples of planning results for complex scenarios with the
humanoid robot HRP-2. (C) 2013 Elsevier B.V. All rights reserved.
C1 [Escande, Adrien; Kheddar, Abderrahmane] CNRS AIST Joint Robot Lab, CRT UMI3218,
Tsukuba, Ibaraki, Japan.
[Kheddar, Abderrahmane] CNRS UM2 LIRMM, Montpellier, France.
[Miossec, Sylvain] Univ Orleans, PRISME, IUT Bourges, F-45067 Orleans, France.
RP Escande, A (reprint author), CNRS AIST Joint Robot Lab, CRT UMI3218, Tsukuba,
Ibaraki, Japan.
CR Bidaud P., 2008, ROMANSY 08
BONNANS F, 2002, NUMERICAL OPTIMIZATI
Bouyarmane K., 2011, IEEE INT C ROB AUT
Bouyarmane K., 2011, IEEE RSJ INT C INT R
Bouyarmane K., 2009, IEEE INT C ROB AUT
Bouyarmane K., 2010, IEEE RAS INT C HUM R, P8, DOI [10.1109/ICHR.2010.5686317,
DOI 10.1109/ICHR.2010.5686317]
Bouyarmane K., 2011, IEEE RAS INT C HUM R
BRETL T, 2005, THESIS STANFORD U
Bretl T, 2008, IEEE T ROBOT, V24, P794, DOI 10.1109/TRO.2008.2001360
Coros S, 2010, ACM T GRAPHIC, V29, DOI 10.1145/1778765.1781156
Cortes J., 2003, THESIS
Duindam V, 2009, SPRINGER TRAC ADV RO, V53, P1
Kheddar A., 2007, IEEE RAS C HUM ROB P
Escande A., 2006, IEEE RSJ INT C ROB I, P2974
Escande A., 2008, INT S EXP ROB ATH GR
ESCANDE A, 2009, IFAC 9 INT S ROB CON, P259
Escande A, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P435, DOI 10.1109/IROS.2009.5354371
Fukuda K., 1996, Combinatorics and Computer Science. 8th Franco-Japanese and 4th
Franco-Chinese Conference. Selected Papers, P91
Garsault S., 2008, NONGAITED DYNAMIC MO
Hauser K., 2007, INT S ROB RES
Hauser K., 2006, WORKSH ALG FDN ROB
Hauser K., 2005, IEEE RSJ INT C HUM R, P7
Herdt A., 2010, IEEE RSJ INT C INT R, P190
Herdt A, 2010, ADV ROBOTICS, V24, P719, DOI 10.1163/016918610X493552
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kalisiak M, 2001, J VISUAL COMP ANIMAT, V12, P117, DOI 10.1002/vis.250
Kanehiro F, 2005, IEEE INT CONF ROBOT, P1072
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Latombe J.-C., 1991, ROBOT MOTION PLANNIN
Lawrence C., 1997, USERS GUIDE CFSQP VE
Lefebvre O., 2005, IEEE INT C ROB AUT
Lengagne S, 2010, IEEE INT C INT ROBOT, P698, DOI 10.1109/IROS.2010.5649233
Liu Minjie, 2011, IEEE INT C ROB AUT
Hasegawa Y., 2010, IEEE RSJ INT C INT R, P166
MIOSSEC S, 2006, DEV SOFTWARE MOTION
Mordatch I, 2010, ACM T GRAPHICS, V29
Popovic Z., 2009, ACM T GRAPHICS SIGGR, V28
Micaelli A., 2011, INT S DIG HUM MOD LY
Samson C., 1991, ROBOT CONTROL TASK F
Slovich M., 2011, IEEE RAS INT C HUM R
Simeon T, 2004, INT J ROBOT RES, V23, P729, DOI 10.1177/0278364904045471
Stilman M., 2007, IEEE RSJ INT C ROB I
Takenaka T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P1084, DOI 10.1109/IROS.2009.5354662
Takenaka T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P1601, DOI 10.1109/IROS.2009.5354522
Takenaka T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P1092, DOI 10.1109/IROS.2009.5354654
Takenaka T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P1594, DOI 10.1109/IROS.2009.5354542
VANDENBERGEN G, 2004, MORGAN KAUFMANN SERI
Yokoi K., 2006, IEEE RAS INT C HUM R, P117
Wieber P.-B., 2002, 3 IARP INT WORKSH HU, P53
Wu JC, 2010, ACM T GRAPHIC, V29, P72
Yang J., 2004, 10 AIAA ISSMO MULT A
YIN K, 2007, ACM T GRAPHICS SIGGR, V26
Sanada H., 2009, IEEE INT C ROB AUT
NR 53
TC 29
Z9 30
U1 0
U2 10
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD MAY
PY 2013
VL 61
IS 5
BP 428
EP 442
DI 10.1016/j.robot.2013.01.008
PG 15
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 125RK
UT WOS:000317558800002
DA 2018-01-22
ER

PT J
AU Yamaguchi, A
Takamatsu, J
Ogasawara, T
AF Yamaguchi, Akihiko
Takamatsu, Jun
Ogasawara, Tsukasa
TI DCOB: Action space for reinforcement learning of high DoF robots
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Reinforcement learning; Action space; Motion learning; Humanoid robot;
Crawling
ID APPROXIMATION; ALGORITHM
AB Reinforcement learning (RL) for robot control is an important technology for
future robots since it enables us to design a robot's behavior using the reward
function. However, RL for high degree-of-freedom robot control is still an open
issue. This paper proposes a discrete action space, DCOB, which is generated from
the basis functions (BFs) given to approximate a value function. Its remarkable
feature is that reducing the number of BFs, which enables the robot to learn the
value function quickly, also reduces the size of DCOB and thereby improves the
learning speed. In addition, a method called WF-DCOB is proposed to enhance
performance, in which wire-fitting is utilized to search for continuous actions
around each discrete action of DCOB. We apply the proposed methods to motion learning tasks of a
simulated humanoid robot and a real spider robot. The experimental results
demonstrate outstanding performance.
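A toy sketch of the DCOB construction described above, under the assumption that each discrete action means "move toward one basis-function center for one of a few fixed intervals"; the names, intervals and dynamics are illustrative, not the paper's implementation.

    import numpy as np

    def make_dcob_actions(bf_centers, intervals=(0.1, 0.2, 0.4)):
        # The discrete action set is generated directly from the basis-function
        # (BF) centers used for value-function approximation, so reducing the
        # number of BFs automatically reduces the size of the action set.
        return [(np.asarray(c, dtype=float), dt) for c in bf_centers for dt in intervals]

    def apply_action(state, action, gain=1.0):
        # Illustrative dynamics: step the state toward the chosen BF center.
        target, dt = action
        return state + gain * dt * (target - state)

    # Example: 5 BF centers in a 2-D state space give a 15-element action set.
    centers = np.random.default_rng(0).uniform(-1.0, 1.0, size=(5, 2))
    actions = make_dcob_actions(centers)
    new_state = apply_action(np.zeros(2), actions[0])

WF-DCOB would then refine a chosen discrete action by searching a small continuous neighbourhood around it with wire-fitting, which this sketch does not show.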
C1 [Yamaguchi, Akihiko; Takamatsu, Jun; Ogasawara, Tsukasa] Nara Inst Sci &
Technol, Grad Sch Informat Sci, Nara 6300192, Japan.
RP Yamaguchi, A (reprint author), Nara Inst Sci & Technol, Grad Sch Informat Sci,
8916-5 Takayama, Nara 6300192, Japan.
EM ay@akiyam.sakura.ne.jp
OI Yamaguchi, Akihiko/0000-0001-9060-3478
FU Japan Society for the Promotion of Science [22.9030]
FX Part of this work was supported by a Grant-in-Aid for JSPS, Japan
Society for the Promotion of Science, Fellows (22.9030).
CR Asada M, 1996, IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING
WITH DYNAMIC WORLDS, VOLS 1-3, P1502, DOI 10.1109/IROS.1996.569012
Baird L.C., 1993, WLTR931147 WRIGHT PA
BARRON AR, 1993, IEEE T INFORM THEORY, V39, P930, DOI 10.1109/18.256500
Doya K, 2002, NEURAL COMPUT, V14, P1347, DOI 10.1162/089976602753712972
Gaskett C., 2000, IEEE RSJ INT C INT R
Ijspeert A. J., 2003, ADV NEURAL INFORM PR, V15, P1547
KIMURA H, 2001, P 40 IEEE C DEC CONT
Kirchner F, 1998, ROBOT AUTON SYST, V25, P253, DOI 10.1016/S0921-8890(98)00054-2
Kober J, 2009, IEEE INT CONF ROBOT, P2509
Kondo T, 2004, ROBOT AUTON SYST, V46, P111, DOI 10.1016/j.robot.2003.11.006
Loch J., 1998, P 15 INT C MACH LEAR, P323
Matsubara T, 2007, IEEE INT CONF ROBOT, P2688, DOI 10.1109/ROBOT.2007.363871
Mcgovern A., 2001, 18 INT C MACH LEARN, P361
MENACHE I, 2002, ECML 02, P295
Miyamoto H, 2004, NEURAL NETWORKS, V17, P299, DOI 10.1016/j.neunet.2003.11.004
Moore AW, 1995, MACH LEARN, V21, P199, DOI 10.1023/A:1022656217772
Morimoto J, 2001, ROBOT AUTON SYST, V36, P37, DOI 10.1016/S0921-8890(01)00113-0
Morimoto J., 1998, IEEE RSJ INT C INT R, P1721
Nakamura Y, 2007, NEURAL NETWORKS, V20, P723, DOI 10.1016/j.neunet.2007.01.002
Peng J., 1994, INT C MACH LEARN, P226
Peters J., 2003, IEEE RAS INT C HUM R
Sato M, 2000, NEURAL COMPUT, V12, P407, DOI 10.1162/089976600300015853
Sedgewick R, 2011, ALGORITHMS
Stolle M., 2004, THESIS MCGILL U
Sutton RS, 1999, ARTIF INTELL, V112, P181, DOI 10.1016/S0004-3702(99)00052-1
Sutton RS, 1998, REINFORCEMENT LEARNI
Takahashi Y., 2003, P SICE ANN C 2003, P2937
Tham C. K., 1994, 11 INT C MACH LEARN, P309
Theodorou E, 2010, IEEE INT CONF ROBOT, P2397, DOI 10.1109/ROBOT.2010.5509336
Tsitsiklis JN, 1996, MACH LEARN, V22, P59, DOI 10.1007/BF00114724
Tsitsiklis JN, 1997, IEEE T AUTOMAT CONTR, V42, P674, DOI 10.1109/9.580874
Uchibe E., 2004, INT C SIM AD BEH AN, P287
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yamaguchi A., 2011, THESIS NARA I SCI TE
Zhang JW, 2004, ROBOT AUTON SYST, V47, P117, DOI 10.1016/j.robot.2004.03.006
NR 35
TC 6
Z9 7
U1 0
U2 9
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD MAY
PY 2013
VL 34
IS 4
BP 327
EP 346
DI 10.1007/s10514-013-9328-1
PG 20
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA 123XB
UT WOS:000317424000005
DA 2018-01-22
ER

PT J
AU Nakanishi, Y
Ohta, S
Shirai, T
Asano, Y
Kozuki, T
Kakehashi, Y
Mizoguchi, H
Kurotobi, T
Motegi, Y
Sasabuchi, K
Urata, J
Okada, K
Mizuuchi, I
Inaba, M
AF Nakanishi, Yuto
Ohta, Shigeki
Shirai, Takuma
Asano, Yuki
Kozuki, Toyotaka
Kakehashi, Yuriko
Mizoguchi, Hironori
Kurotobi, Tomoko
Motegi, Yotaro
Sasabuchi, Kazuhiro
Urata, Junichi
Okada, Kei
Mizuuchi, Ikuo
Inaba, Masayuki
TI Design Approach of Biologically-Inspired Musculoskeletal Humanoids
Invited Paper
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Musculo-Skeletal Humanoid; Bio-Inspired Robotics; Body Design
Methodology; Behavior Intelligence
ID WALKING; DRIVEN
AB In order to realize more natural and varied motions like those of humans,
humanlike musculoskeletal tendon-driven humanoids have been studied. It is
especially challenging to design a musculoskeletal body structure which consists of
complicated bones, redundant, powerful and flexible muscles, and a large number of
distributed sensors. It is equally challenging to realize the humanlike
intelligence needed to manage such a complicated musculoskeletal body structure.
This paper sums up the life-sized musculoskeletal humanoids Kenta, Kotaro, Kenzoh
and Kenshiro, which we have developed so far, and describes key technologies for
developing and controlling these robots.
C1 [Nakanishi, Yuto; Ohta, Shigeki; Shirai, Takuma; Asano, Yuki; Kozuki, Toyotaka;
Kakehashi, Yuriko; Mizoguchi, Hironori; Kurotobi, Tomoko; Motegi, Yotaro;
Sasabuchi, Kazuhiro; Urata, Junichi; Okada, Kei; Inaba, Masayuki] Univ Tokyo, Dept
Mechanoinformat, Tokyo 1138654, Japan.
[Mizuuchi, Ikuo] Tokyo Univ Agr & Technol, Dept Mech Syst Engn, Fuchu, Tokyo
183, Japan.
RP Nakanishi, Y (reprint author), Univ Tokyo, Dept Mechanoinformat, Tokyo 1138654,
Japan.
EM nakanish@jsk.t.u-tokyo.ac.jp
RI Mizuuchi, Ikuo/B-9946-2013
CR Appolinaire C., 2011, BIOINSPIRED CONDYLAR, P4042
Bischoff Rainer, 2010, INT S ROB I IN PRESS
Hongo Kazuo, 2009, P 18 IEEE INT S ROB, P897
Hongo Kazuo, 2007, P 25 ANN C ROB SOC J, p3H26
Hongo K, 2008, IEEE-RAS INT C HUMAN, P16, DOI 10.1109/ICHR.2008.4755925
Inaba M, 2003, SPRINGER TRAC ADV RO, V6, P113
Inaba M, 1993, P 1993 INT S ROB RES
Inaba M., 1997, 8 INT S ROB RES, P155
Ryosuke KAGEYAMA, 2000, ROBOMECH 00, V2A1-54-068
Kanehiro Fumio, 1997, P ANN C ROB SOC JAP, P1023
Kaneko K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2431, DOI 10.1109/IRDS.2002.1041632
Kaneko K, 2011, IEEE INT C INT ROBOT, P4400, DOI 10.1109/IROS.2011.6048074
Kapandji I. A., 1986, PHYSL JOINTS
Masahiko Osada, 2010, INT C HUM R IN PRESS
Akihiko MIYADERA, 2005, P 23 ANN C ROB SOC J, p1E35
Mizuuchi I, 2003, IEEE INT CONF ROBOT, P1940, DOI 10.1109/ROBOT.2003.1241878
Mizuuchi I, 2003, ADV ROBOTICS, V17, P179, DOI 10.1163/156855303321165123
Mizuuchi I, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2527, DOI 10.1109/IRDS.2002.1041649
MIZUUCHI I, 2005, P IEEE RAS INT C HUM, P339
Mizuuchi Ikuo, 2006, P 2006 6 IEEE RAS IN, P12
MIZUUCHI I, 2004, P IEEE INT C INT ROB, P828
Mizuuchi I., 2007, P INT C HUM ROB HUM, P12
Mizuuchi Ikuo, 1999, P 1999 JSME C ROB ME
Mizuuchi I, 2006, IEEE-RAS INT C HUMAN, P176, DOI 10.1109/ICHR.2006.321381
Nakanishi Yuto, 2007, P 2007 IEEE RSJ INT, P3623
Nakanishi Yuto, 2010, P 11 SICE SYST INT D, p1G2
Nakanishi Yuto, 2008, P 2008 IEEE RSJ INT, P205
NAKANISHI Y, 2005, P INT C HUM ROB HUM
Yuto NAKANISHI, 2008, ROBOMEC 08, V6, p2A1
Nakanishi Y, 2012, IEEE INT C INT ROBOT, P1815, DOI 10.1109/IROS.2012.6386220
Nakanishi Y, 2010, IEEE INT CONF ROBOT, P1727, DOI 10.1109/ROBOT.2010.5509437
Narioka K, 2008, ADV ROBOTICS, V22, P1107, DOI 10.1163/156855308X324811
Rusu R. B., 2009, INT C ADV ROB ICAR M
SODEYAMA Y, 2005, P 2005 IEEE RSJ INT, P1077
SODEYAMA Y, 2007, P 2007 IEEE RSJ INT, P3629
Urata Junichi, 2008, P 2008 IEEE RSJ INT, P2047
URATA J, 2006, P 2006 IEEE INT C RO
Yoshida Shigenori, 2002, ROBOMEC 02, p2P2
NR 38
TC 10
Z9 10
U1 0
U2 17
PU SAGE PUBLICATIONS INC
PI THOUSAND OAKS
PA 2455 TELLER RD, THOUSAND OAKS, CA 91320 USA
SN 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD APR 29
PY 2013
VL 10
AR 216
DI 10.5772/55443
PG 13
WC Robotics
SC Robotics
GA 134OC
UT WOS:000318219900001
OA gold
DA 2018-01-22
ER

PT J
AU Limon, RC
Zannatha, JMI
Rodriguez, MAA
AF Limon, Rafael Cisneros
Ibarra Zannatha, Juan Manuel
Armada Rodriguez, Manuel Angel
TI Inverse Kinematics of a Humanoid Robot with Non-Spherical Hip: A Hybrid
Algorithm Approach
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Humanoid Robots; Inverse Kinematics; Non-Spherical Hip
ID MANIPULATORS
AB This paper describes an approach to solving the inverse kinematics problem of
humanoid robots whose construction shows a small but non-negligible offset at the
hip, which prevents any purely analytical solution from being developed. Since a
purely numerical solution is not feasible due to variable efficiency problems, the
proposed approach first neglects the presence of the offset in order to obtain an
approximate "solution" by means of an analytical algorithm based on screw theory,
and then uses it as the initial condition of a numerical refinement procedure based
on the Levenberg-Marquardt algorithm. In this way, few iterations are needed for
any specified attitude, making it possible to implement the algorithm for real-time
applications. As a way to show the algorithm's implementation, one case study,
represented by the SILO2 humanoid robot, is considered throughout the paper.
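The two-stage flow described above can be sketched generically with SciPy's Levenberg-Marquardt solver; the toy forward kinematics and closed-form seed below are placeholders standing in for the screw-theory solution of the real leg with its hip offset, so only the overall hybrid structure mirrors the abstract.

    import numpy as np
    from scipy.optimize import least_squares

    L1, L2 = 0.30, 0.30                          # placeholder link lengths

    def fk(q):
        # Placeholder forward kinematics (stand-in for the real leg model).
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        z = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, z, q[2]])            # third value mimics an orientation

    def analytic_seed(target):
        # Closed-form "solution" that neglects the offset (placeholder for the
        # screw-theory step); it only needs to be close enough to converge.
        x, z, yaw = target
        r2 = x * x + z * z
        c2 = np.clip((r2 - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
        q2 = np.arccos(c2)
        q1 = np.arctan2(z, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
        return np.array([q1, q2, yaw])

    def hybrid_ik(target):
        # Stage 1: analytical seed.  Stage 2: Levenberg-Marquardt refinement.
        return least_squares(lambda q: fk(q) - target, analytic_seed(target),
                             method='lm').x

    q = hybrid_ik(np.array([0.35, 0.25, 0.10]))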
C1 [Limon, Rafael Cisneros] Univ Tsukuba, Tsukuba, Ibaraki, Japan.
[Limon, Rafael Cisneros] CNRS AIST, JRL, UMI3218 CRT, Tsukuba, Ibaraki, Japan.
[Ibarra Zannatha, Juan Manuel] CINVESTAV, Dept Automat Control, Mexico City
14000, DF, Mexico.
[Armada Rodriguez, Manuel Angel] CSIC UPM, Ctr Automat & Robot CAR, Madrid,
Spain.
RP Limon, RC (reprint author), Univ Tsukuba, Tsukuba, Ibaraki, Japan.
EM rafael.cisneros@aist.go.jp
RI Cisneros Limon, Rafael/M-5363-2016
OI Cisneros Limon, Rafael/0000-0002-0850-070X
CR Al-Faiz M. Z., 2011, INT J ROBOT AUTOM, V2, P256
Ali MA, 2010, IEEE INT C INT ROBOT, P704, DOI 10.1109/IROS.2010.5649842
Arbulu Mario, 2005, P 5 IEEE RAS INT C H, P271
Armada M., 2002, P IARP WORKSH HUM FR, P37
Bingul Z., 2005, P ICSC C COMP INT ME
Buss S. R., 2004, TECHNICAL REPORT
Muller Cajar R., 2007, P 22 INT IM VIS COMP, P181
Canutescu AA, 2003, PROTEIN SCI, V12, P963, DOI 10.1110/ps.0242703
DELOPE J, 2003, LNCS, V2687, P177
Montes Franceschi Hector, 2006, THESIS U COMPLUTENSE
Ibarra Juan Manuel, 2009, P 19 IEEE INT C EL C, P111
MANOCHA D, 1994, IEEE T ROBOTIC AUTOM, V10, P648, DOI 10.1109/70.326569
Mistryl Michael, 2009, P 8 IEEE RAS INT C H, P22
Murray R. M., 1994, MATH INTRO ROBOTIC M
Parikh PJ, 2005, IEEE T ROBOT, V21, P18, DOI 10.1109/TRO.2004.833801
Pashkevich A, 1997, CONTROL ENG PRACT, V5, P1443, DOI 10.1016/S0967-
0661(97)00142-1
Pieper D.L., 1968, THESIS STANFORD U CA
Pitt J., 2008, P 8 IEEE RAS INT C H, P681
Singh GK, 2010, IEEE INT C INT ROBOT, P2976, DOI 10.1109/IROS.2010.5649095
SPONG MW, 1989, ROBOT DYNAMICS CONTR, pCH3
Sugihara T, 2011, IEEE T ROBOT, V27, P984, DOI 10.1109/TRO.2011.2148230
TEVATIA G, 2000, P IEEE INT C ROB AUT, V1, P294, DOI DOI
10.1109/ROBOT.2000.844073
Tolani D, 2000, GRAPH MODELS, V62, P353, DOI 10.1006/gmod.2000.0528
WANG LCT, 1991, IEEE T ROBOTIC AUTOM, V7, P489, DOI 10.1109/70.86079
Williams R. L. II, 1999, International Journal of Robotics & Automation, V14,
P1
NR 25
TC 3
Z9 3
U1 1
U2 14
PU INTECH EUROPE
PI RIJEKA
PA JANEZA TRDINE 9, RIJEKA, 51000, CROATIA
SN 1729-8806
EI 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD APR 25
PY 2013
VL 10
AR 213
DI 10.5772/55464
PG 13
WC Robotics
SC Robotics
GA 132KN
UT WOS:000318066600002
OA gold
DA 2018-01-22
ER

PT J
AU Saab, L
Ramos, OE
Keith, F
Mansard, N
Soueres, P
Fourquet, JY
AF Saab, Layale
Ramos, Oscar E.
Keith, Francois
Mansard, Nicolas
Soueres, Philippe
Fourquet, Jean-Yves
TI Dynamic Whole-Body Motion Generation Under Rigid Contacts and Other
Unilateral Constraints
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Contact modeling; dynamics; force control; humanoid robotics; redundant
robots
ID OPERATIONAL SPACE FORMULATION; ZERO-MOMENT POINT; ROBOT MANIPULATORS;
HUMANOID ROBOT; FORCE CONTROL; FRAMEWORK; TASKS
AB The most widely used technique for generating whole-body motions on a humanoid
robot accounting for various tasks and constraints is inverse kinematics. Based on
the task-function approach, this class of methods enables the coordination of robot
movements to execute several tasks in parallel and account for the sensor feedback
in real time, thanks to the low computation cost. To some extent, it also enables
us to deal with some of the robot's constraints (e.g., joint limits or visibility)
and manage the quasi-static balance of the robot. In order to fully use the whole
range of possible motions, this paper proposes extending the task-function approach
to handle the full dynamics of the robot's multibody system along with any
constraint written as an equality or inequality of the state and control variables. The
definition of multiple objectives is made possible by ordering them inside a strict
hierarchy. Several models of contact with the environment can be implemented in the
framework. We propose a reduced formulation of the multiple rigid planar contact
that keeps a low computation cost. The efficiency of this approach is illustrated
by presenting several multicontact dynamic motions in simulation and on the real
HRP-2 robot.
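For context, a strict hierarchy of task functions is classically resolved at the kinematic level with recursive null-space projections, as in the sketch below; the paper's contribution extends this idea to the full multibody dynamics with equality and inequality constraints, which the sketch does not attempt to reproduce.

    import numpy as np

    def prioritized_velocities(tasks, n_dof, damping=1e-6):
        # tasks: list of (J, xdot) pairs ordered from highest to lowest priority,
        # each task being the equality J @ qdot = xdot.  Lower-priority tasks are
        # resolved only inside the null space left by the higher-priority ones.
        qdot = np.zeros(n_dof)
        proj = np.eye(n_dof)                      # projector onto the remaining null space
        for jac, xdot in tasks:
            jac_p = jac @ proj
            # damped pseudo-inverse for robustness near singular configurations
            pinv = jac_p.T @ np.linalg.inv(jac_p @ jac_p.T
                                           + damping * np.eye(jac.shape[0]))
            qdot = qdot + pinv @ (xdot - jac @ qdot)
            proj = proj - pinv @ jac_p
        return qdot

    # Example: a 3-D primary task and a 2-D secondary task on a 7-DoF chain.
    rng = np.random.default_rng(1)
    tasks = [(rng.standard_normal((3, 7)), np.array([0.10, 0.00, -0.05])),
             (rng.standard_normal((2, 7)), np.array([0.02, 0.02]))]
    qdot = prioritized_velocities(tasks, n_dof=7)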
C1 [Saab, Layale; Ramos, Oscar E.] Univ Toulouse, CNRS, Lab Anal & Architecture
Syst, F-31400 Toulouse, France.
[Keith, Francois] Lab Informat Robot & Microelect Montpellier, F-34095
Montpellier, France.
[Keith, Francois] CNRS AIST Joint Robot Lab, CRT UMI3218, Tsukuba, Ibaraki,
Japan.
[Fourquet, Jean-Yves] Univ Toulouse, Lab Genie Prod, Ecole Natl Ingn, F-65016
Tarbes, France.
RP Saab, L (reprint author), EOS Innovat, F-91000 Evry, France.
EM layalesaab@gmail.com; oscarefrain@gmail.com; francois.keith@lirmm.fr;
nicolas.mansard@laas.fr; soueres@laas.fr; fourquet@enit.fr
CR Audin M., 2002, GEOMETRY, P29
Baerlocher P, 2004, VISUAL COMPUT, V20, P402, DOI 10.1007/s00371-004-0244-4
Bauchau OA, 2008, J COMPUT NONLIN DYN, V3, DOI 10.1115/1.2397690
Ben-Israel A., 2003, CMS BOOKS MATH
Berenson D, 2011, INT J ROBOT RES, V30, P1435, DOI 10.1177/0278364910396389
Bonnet V, 2008, IEEE-RAS INT C HUMAN, P61, DOI 10.1109/ICHR.2008.4755932
Bouyarmane K., 2011, P IEEE INT C ROB AUT, P5246
Bouyarmane K., 2009, P IEEE INT C ROB AUT, P1165
Boyd S., 2004, CONVEX OPTIMIZATION
Boyd SP, 2007, IEEE T ROBOT, V23, P1117, DOI 10.1109/TRO.2007.910774
Bruyninckx H., 2000, ICRA 00 IEEE INT C R, V3, P2563
CHAN TF, 1995, IEEE T ROBOTIC AUTOM, V11, P286, DOI 10.1109/70.370511
Chang PH, 2012, IEEE T ROBOT, V28, P773, DOI 10.1109/TRO.2012.2187397
Chaumette F, 2007, IEEE ROBOT AUTOM MAG, V14, P109, DOI 10.1109/MRA.2007.339609
Chevallereau C, 1988, IEEE INT C ROB AUT U, V3, P37
Chiaverini S, 1997, IEEE T ROBOTIC AUTOM, V13, P398, DOI 10.1109/70.585902
Collette C., 2007, IEEE RAS INT C HUM R
Dalibard S., 2009, P IEEE RAS INT C HUM, P355
DEO AS, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P434, DOI 10.1109/ROBOT.1992.220301
DOTY KL, 1993, INT J ROBOT RES, V12, P1, DOI 10.1177/027836499301200101
Escande A., 2008, INT S EXP ROB ATH GR
Escande A., 2010, P IEEE INT C ROB AUT, P3733
Escande A, 2007, IEEE-RAS INT C HUMAN, P188, DOI 10.1109/ICHR.2007.4813867
ESPIAU B, 1992, IEEE T ROBOTIC AUTOM, V8, P313, DOI 10.1109/70.143350
Evrard P, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P15, DOI 10.1109/ROMAN.2008.4600636
Featherstone R., 2008, RIGID BODY DYNAMICS
Ferrari C., 1992, P IEEE INT C ROB AUT, V3, P2290, DOI DOI
10.1109/R0B0T.1992.219918
Fletcher R., 1971, IMA J APPL MATH, V7, P76
Golub G. H., 1996, MATRIX COMPUTATIONS
Guizzo E., 2010, IEEE SPECTR AUT 1213
Hanafusa H., 1981, IFAC 8 TRIENN WORLD, V4, P1927
Harada K, 2006, IEEE T ROBOT, V22, P568, DOI 10.1109/TRO.2006.870649
Herdt A, 2010, ADV ROBOTICS, V24, P719, DOI 10.1163/016918610X493552
Hirukawa H, 2006, IEEE INT CONF ROBOT, P1976, DOI 10.1109/ROBOT.2006.1641995
Hofmann A., 2009, P IEEE INT C ROB AUT, P4423
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S., 2009, INTRO COMMANDE ROBOT
Kanoun O., 2009, P IEEE INT C ROB AUT, P2939
Kavraki LE, 1998, IEEE T ROBOTIC AUTOM, V14, P166, DOI 10.1109/70.660866
KHATIB O, 1986, INT J ROBOT RES, V5, P90, DOI 10.1177/027836498600500106
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
Khatib O., 2004, INT J HUM ROBOT, V1, P29, DOI [10.1142/S0219843604000058, DOI
10.1142/S0219843604000058]
Khatib O, 2008, SPRINGER TRAC ADV RO, V44, P303
Kuffner JJ, 2000, P 2000 IEEE INT C RO, V2, P995, DOI DOI
10.1109/R0B0T.2000.844730
Laulusa A., 2008, J COMPUT NONLIN DYN, V3
LIEGEOIS A, 1977, IEEE T SYST MAN CYB, V7, P868
Ma Y, 2004, INVITATION 3 D VISIO, V26
Maes C., 2010, THESIS STANFORD U ST
Mansard N., 2009, P IEEE INT C ADV ROB, P1
Mansard N., 2012, P IEEE INT C ROB AUT, P4943
Mansard N, 2007, IEEE T ROBOT, V23, P60, DOI 10.1109/TRO.2006.889487
Mansard N, 2009, IEEE T ROBOT, V25, P670, DOI 10.1109/TRO.2009.2020345
Marchand E., 1998, IEEE RSJ INT C INT R
Moore E.H., 1920, B AM MATH SOC, V26, P394
NAKAMURA Y, 1986, J DYN SYST-T ASME, V108, P163, DOI 10.1115/1.3143764
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
Nakanishi J, 2008, INT J ROBOT RES, V27, P737, DOI 10.1177/0278364908091463
Ott C., 2010, P IEEE RAS INT C HUM, P167
Pang JS, 2000, Z ANGEW MATH MECH, V80, P643, DOI 10.1002/1521-
4001(200010)80:10<643::AID-ZAMM643>3.0.CO;2-E
Park J., 2006, THESIS STANFORD U ST
Park J, 2006, INT J ROBOT RES, V25, P575, DOI 10.1177/0278364906065385
Park J, 2006, IEEE INT CONF ROBOT, P1963, DOI 10.1109/ROBOT.2006.1641993
Peters J, 2008, AUTON ROBOT, V24, P1, DOI 10.1007/s10514-007-9051-x
Raunhardt D, 2007, IEEE INT CONF ROBOT, P4414, DOI 10.1109/ROBOT.2007.364159
ROSEN JB, 1960, SIAM J APPL MATH, V8, P181, DOI DOI 10.1137/0108011
Saab L, 2011, IEEE INT C INT ROBOT, P4127, DOI 10.1109/IROS.2011.6048402
Saab L., 2011, P IEEE INT C ROB AUT, P1091
Samson C., 1991, ROBOT CONTROL TASK F
Sardain P, 2004, IEEE T SYST MAN CY A, V34, P630, DOI 10.1109/TSMCA.2004.832811
Sentis L., 2007, THESIS STANFORD U ST
Sentis L, 2010, IEEE T ROBOT, V26, P483, DOI 10.1109/TRO.2010.2043757
Siciliano B, 1991, P IEEE INT C ADV ROB, V2, P1211, DOI DOI
10.1109/ICAR.1991.240390
Sugihara T, 2009, IEEE T ROBOT, V25, P658, DOI 10.1109/TRO.2008.2012336
UDWADIA FE, 1992, P ROY SOC LOND A MAT, V439, P407, DOI 10.1098/rspa.1992.0158
Varkonyi P., 2012, P IEEE INT C ROB AUT, P63
VUKOBRAT.M, 1969, IEEE T BIO-MED ENG, VBM16, P1, DOI 10.1109/TBME.1969.4502596
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Youcef-Toumi K., 1988, Proceedings of the 1988 American Control Conference, P904
NR 78
TC 60
Z9 60
U1 1
U2 25
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD APR
PY 2013
VL 29
IS 2
BP 346
EP 362
DI 10.1109/TRO.2012.2234351
PG 17
WC Robotics
SC Robotics
GA 124US
UT WOS:000317493900005
DA 2018-01-22
ER

PT J
AU Morishita, T
Tojo, O
AF Morishita, Takeshi
Tojo, Osamu
TI Integer inverse kinematics method using Fuzzy logic
SO INTELLIGENT SERVICE ROBOTICS
LA English
DT Article
DE Inverse kinematics; Integer algorithm; Reduction control; Fuzzy model;
Multijoint control model
AB In this paper, we propose an integer inverse kinematics method for multijoint
robot control. The method reduces computational overheads and leads to the
development of a simple control system as the use of fuzzy logic enables linguistic
modeling of the joint angle. A small humanoid robot is used to confirm via
experiment that the method produces the same cycling movements in the robot as
those in a human. In addition, we achieve fast information sharing by implementing
the all-integer control algorithm in a low-cost, low-power microprocessor.
Moreover, we evaluate the ability of this method for trajectory generation and
confirm that target trajectories are reproduced well. The computational results of
the general inverse kinematics model are compared to those of the integer inverse
kinematics model and similar outputs are demonstrated. We show that the integer
inverse kinematics model simplifies the control process.
C1 [Morishita, Takeshi; Tojo, Osamu] Toin Univ Yokohama, Dept Robot, Yokohama,
Kanagawa 2258502, Japan.
RP Morishita, T (reprint author), Toin Univ Yokohama, Dept Robot, Yokohama,
Kanagawa 2258502, Japan.
EM morishita@cc.toin.ac.jp; tm18i03s@ust.toin.ac.jp
FU [21500844]
FX This work was partly supported by a Grant-in-Aid for Scientific Research
(C) (21500844).
CR Alavandar S, 2008, INT J COMPUT COMMUN, V3, P224, DOI 10.15837/ijccc.2008.3.2391
Banga V. K., 2011, INT J PHYS SCI, V6, P204
Her MG, 2002, INT J ADV MANUF TECH, V20, P375
KIM SW, 1993, IROS 93 : PROCEEDINGS OF THE 1993 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOL 1-3, P904, DOI
10.1109/IROS.1993.583249
KUMBLA KK, 1994, PROCEEDINGS OF THE THIRD IEEE CONFERENCE ON FUZZY SYSTEMS -
IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE, VOLS I-III, P518, DOI
10.1109/FUZZY.1994.343731
Ohkubo Y, 2004, P 2004 IEEE RSJ INT, V4, P3553, DOI [10.1109/IROS.2004.1389966,
DOI 10.1109/IROS.2004.1389966]
Piltan F, 2011, INT J ENG, V5, P399
Sugeno M., 1988, FUZZY CONTROL
Tojo O, 2010, P 7 INT C UB ROB AMB, P233
Tojo O, 2009, P 6 INT C UB ROB AMB, P402
UMEDA K, 2005, J ROBOTICS SOC JAPAN, V23, P112
Urata J, 2006, P 23 ANN C ROB SOC J, V23, p3E31
NR 12
TC 2
Z9 3
U1 0
U2 0
PU SPRINGER HEIDELBERG
PI HEIDELBERG
PA TIERGARTENSTRASSE 17, D-69121 HEIDELBERG, GERMANY
SN 1861-2776
EI 1861-2784
J9 INTEL SERV ROBOT
JI Intell. Serv. Robot.
PD APR
PY 2013
VL 6
IS 2
BP 101
EP 108
DI 10.1007/s11370-012-0115-1
PG 8
WC Robotics
SC Robotics
GA AI4QF
UT WOS:000336849200004
OA gold
DA 2018-01-22
ER

PT J
AU Jung, JY
Kanda, T
Kim, MS
AF Jung, Jinyung
Kanda, Takayuki
Kim, Myung-Suk
TI Guidelines for Contextual Motion Design of a Humanoid Robot
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Contextual motion design; Guideline; Humanoid robot
ID SOCIAL INTERACTIONS; EXPRESSION; ANIMATION
AB The motion of a humanoid robot is one of the most intuitive communication
channels for human-robot interaction. Previous studies have presented related
knowledge to generate speech-based motions of virtual agents on screens. However,
physical humanoid robots share time and space with people, and thus, numerous
speechless situations occur where the robot cannot be hidden from users. Therefore,
we must understand the appropriate roles of motion design for a humanoid robot in
many different situations. We compiled the target knowledge into motion-design
guidelines based on the iterative findings of design case studies and a literature
review. The guidelines are largely separated into two main roles for speech-based
and speechless situations, and the latter can be further subdivided into idling,
observing, listening, expecting, and mood-setting, all of which are well
distributed across different levels of intention. A series of experiments proved that
our guidelines help create preferable motion designs of a humanoid robot. This
study provides researchers with a balanced perspective between speech-based and
speechless situations, and thus they can design the motions of a humanoid robot to
satisfy users in more acceptable and pleasurable ways.
C1 [Jung, Jinyung; Kim, Myung-Suk] Korea Adv Inst Sci & Technol, Dept Ind Design,
Taejon 305701, South Korea.
[Kanda, Takayuki] Adv Telecommun Inst Int, Intelligent Robot & Commun Lab,
Seika, Kyoto 6190288, Japan.
RP Jung, JY (reprint author), Korea Adv Inst Sci & Technol, Dept Ind Design,
Guseong Dong 373-1, Taejon 305701, South Korea.
EM monagi@kaist.ac.kr
RI Kim, Myung-suk/C-1904-2011; Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
FU Internship Program of Brain Korea 21; KAKENHI [21118008]
FX This work was supported by the Internship Program of Brain Korea 21 and
in part supported by a Grant-in Aid for Scientific Research in Japan,
KAKENHI (21118008).
CR Argyle M., 1975, BODILY COMMUNICATION
Boudreau D, 2002, ASU RES MAGAZINE SUM, P40
Breazeal C, 2000, ADAPT BEHAV, V8, P49, DOI 10.1177/105971230000800104
Breazeal C, 2004, IEEE T SYST MAN CY C, V34, P181, DOI 10.1109/TSMCC.2004.826268
Breazeal C, 2005, P IEEE RSJ INT C INT, P708, DOI DOI 10.1109/IROS.2005.1545011
Breemen V, 2004, P SHAP HUM ROB INT W
Brooks A. G., 2006, AUTON ROBOT, V22, P55, DOI [10.1007/s10514-006-9005-8, DOI
10.1007/S10514-006-9005-8]
Cassell J, 2001, COMP GRAPH, P477
Cassell J., 1994, P SIGGRAPH 94, P413
Duffy BR, 2003, ROBOT AUTON SYST, V42, P177, DOI 10.1016/S0921-8890(02)00374-3
Egges A, 2004, 12TH PACIFIC CONFERENCE ON COMPUTER GRAPHICS AND APPLICATIONS,
PROCEEDINGS, P121, DOI 10.1109/PCCGA.2004.1348342
Ellenberg R, 2009, P INT C HYBR INF TEC, P222
Foner L, 1997, 9301 AG GROUP MIT ME, P199
Fong T, 2003, ROBOT AUTON SYST, V42, P143, DOI 10.1016/S0921-8890(02)00372-X
Goffman E, 1963, BEHAV PUBLIC SPACE
HALL Edward T., 1966, HIDDEN DIMENSION
Hasanuzzaman M, 2006, IND ROBOT, V33, P37, DOI 10.1108/01439910610638216
Imai M, 2003, IEEE T IND ELECTRON, V50, P636, DOI 10.1109/TIE.2003.814769
Kendon A., 1988, SIGN LANGUAGES ABORI
Kipp M, 2007, LECT NOTES ARTIF INT, V4722, P15
Knapp M. L., 1997, NONVERBAL COMMUNICAT
Kopp S, 2000, FR ART INT, V54, P663
Kozima H, 2001, ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P377, DOI
10.1109/ROMAN.2001.981933
Lee J., 2009, TECHNICAL REPORT
Lee J, 2008, 1 FUNCT MARK LANG WO
Lee J, 2006, LECT NOTES ARTIF INT, V4133, P243
Machotka P, 1982, ARTICULATE BODY
Mali A, 2002, TECHNICAL REPORT
Matsusaka Y, 2008, LECT NOTES ARTIF INT, V4930, P109
Dunham P., 1995, JOINT ATTENTION ITS
Nakadai K., 2001, P INT JOINT C ART IN, P1425
Nehaniv C., 2005, P AISB 05 S ROB COMP, P74
Ogasawara Y, 2005, P S CONV INF SUPP SO, P42
Pelachaud C, 2005, P 13 ANN ACM INT C M, P683
Poyatos F., 1976, MAN WORDS THEORY MET
Ribeiro T., 2012, P 7 ANN ACM IEEE INT, P383
Saerbeck M, 2007, P 16 IEEE INT S ROB, P386
Sakamoto D, 2005, INT J HUM-COMPUT ST, V62, P247, DOI
10.1016/j.ijhcs.2004.11.001
Salvini P, 2010, INT J SOC ROBOT, V2, P451, DOI 10.1007/s12369-010-0079-2
Suzuki K, 2002, J ROBOTICS MECHATRON, V14, P497
THOMAS F, 1981, ILLUSION LIFE
Wolf TV, 2006, P SIGCHI C HUM FACT, P521, DOI DOI 10.1145/1124772.1124853
Young JE, 2011, INT J SOC ROBOT, V3, P53, DOI 10.1007/s12369-010-0081-8
NR 43
TC 4
Z9 4
U1 0
U2 3
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD APR
PY 2013
VL 5
IS 2
BP 153
EP 169
DI 10.1007/s12369-012-0175-6
PG 17
WC Robotics
SC Robotics
GA AE1VM
UT WOS:000333759000001
DA 2018-01-22
ER

PT J
AU Shen, FR
Ouyang, QB
Kasai, W
Hasegawa, O
AF Shen, Furao
Ouyang, Qiubao
Kasai, Wataru
Hasegawa, Osamu
TI A general associative memory based on self-organizing incremental neural
network
SO NEUROCOMPUTING
LA English
DT Article
DE General associative memory; Incremental learning; Temporal sequence;
Real-value data; Many-to-many association
ID VECTOR QUANTIZATION; RECOGNITION; DESIGN; MODEL
AB This paper proposes a general associative memory (GAM) system that combines the
functions of other typical associative memory (AM) systems. The GAM is a network
consisting of three layers: an input layer, a memory layer, and an associative
layer. The input layer accepts key vectors, response vectors, and the associative
relationships between these vectors. The memory layer stores the input vectors
incrementally to corresponding classes. The associative layer builds associative
relationships between classes. The GAM can store and recall binary or non-binary
information, learn key vectors and response vectors incrementally, realize many-to-
many associations with no predefined conditions, store and recall both static and
temporal sequence information, and recall information from incomplete or noise-
polluted inputs. Experiments using binary data, real-value data, and temporal
sequences show that GAM is an efficient system. The AM experiments using a humanoid
robot demonstrate that GAM can accommodate real tasks and build associations
between patterns with different dimensions. (C) 2012 Elsevier B.V. All rights
reserved.
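A minimal toy rendering of the three-layer structure summarized above; the nearest-prototype rule, the averaging update and the distance threshold are assumptions made so the sketch runs, not the incremental network actually used in the paper.

    import numpy as np

    class ToyGAM:
        # Input layer: accepts key/response vector pairs.
        # Memory layer: stores one prototype per class, updated incrementally.
        # Associative layer: stores (key class, response class) links, so
        # many-to-many associations and different dimensionalities are allowed.
        def __init__(self, new_class_threshold=1.0):
            self.key_classes, self.resp_classes = [], []
            self.links = set()
            self.threshold = new_class_threshold

        def _store(self, store, vec):
            # Assign vec to the nearest existing class, or open a new class.
            vec = np.asarray(vec, dtype=float)
            if store:
                dists = [np.linalg.norm(vec - p) for p in store]
                idx = int(np.argmin(dists))
                if dists[idx] < self.threshold:
                    store[idx] = 0.5 * (store[idx] + vec)   # incremental update
                    return idx
            store.append(vec)
            return len(store) - 1

        def learn(self, key_vec, response_vec):
            k = self._store(self.key_classes, key_vec)
            r = self._store(self.resp_classes, response_vec)
            self.links.add((k, r))

        def recall(self, key_vec):
            # Assumes at least one pair has been learned.
            vec = np.asarray(key_vec, dtype=float)
            k = int(np.argmin([np.linalg.norm(vec - p) for p in self.key_classes]))
            return [self.resp_classes[r] for kk, r in self.links if kk == k]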
C1 [Shen, Furao; Ouyang, Qiubao] Nanjing Univ, Natl Key Lab Novel Software Technol,
Nanjing, Peoples R China.
[Kasai, Wataru; Hasegawa, Osamu] Tokyo Inst Technol, Imaging Sci & Engn Lab,
Tokyo, Japan.
RP Shen, FR (reprint author), Nanjing Univ, Natl Key Lab Novel Software Technol,
Nanjing, Peoples R China.
EM frshen@nju.edu.cn; njuxingkong@gmail.com; kasai.w.aa@m.titech.ac.jp;
hasegawa@isl.titech.ac.jp
RI Shen, Furao/O-6206-2015
FU 973 Program [2010CB327903]; National Natural Science Foundation of China
[60975047]; Jiangsu NSF [BK2009080, BK2011567]
FX This work was supported in part by the 973 Program 2010CB327903, the
Fund of the National Natural Science Foundation of China 60975047, and
Jiangsu NSF grant BK2009080, BK2011567.
CR Casali D, 2006, IEEE T NEURAL NETWOR, V17, P1165, DOI 10.1109/TNN.2006.877539
Barreto G. de A., 2000, P IEEE INNS ENNS INT
FRENCH RM, 1991, PROGRAM OF THE THIRTEENTH ANNUAL CONFERENCE OF THE COGNITIVE
SCIENCE SOCIETY, P173
Fritzke B., 1995, Advances in Neural Information Processing Systems 7, P625
Georghiades AS, 2001, IEEE T PATTERN ANAL, V23, P643, DOI 10.1109/34.927464
Hagiwara M., 1990, INT JOINT C NEUR NET, V1, P3
HARNAD S, 1990, PHYSICA D, V42, P335, DOI 10.1016/0167-2789(90)90087-6
Hattori M, 1996, NEUROCOMPUTING, V12, P1, DOI 10.1016/0925-2312(95)00022-4
Haykin S., 1994, NEURAL NETWORKS COMP
HOPFIELD JJ, 1982, P NATL ACAD SCI-BIOL, V79, P2554, DOI 10.1073/pnas.79.8.2554
Ikeda N, 2001, NEURAL NETWORKS, V14, P1189, DOI 10.1016/S0893-6080(01)00089-2
Itoh K, 2005, NEURAL NETWORKS, V18, P666, DOI 10.1016/j.neunet.2005.06.021
KOHONEN T, 1982, BIOL CYBERN, V43, P59, DOI 10.1007/BF00337288
Kohonon T., 1984, SELF ORG ASS MEMORY
KOSKO B, 1988, IEEE T SYST MAN CYB, V18, P49, DOI 10.1109/21.87054
LINDE Y, 1980, IEEE T COMMUN, V28, P84, DOI 10.1109/TCOM.1980.1094577
LLOYD SP, 1982, IEEE T INFORM THEORY, V28, P2
MacQueen J., 1967, P 5 BERK S MATH STAT, V1, P281, DOI DOI 10.1234/12345678
MARTINETZ TM, 1993, IEEE T NEURAL NETWOR, V4, P558, DOI 10.1109/72.238311
OH H, 1994, IEEE T NEURAL NETWOR, V5, P576, DOI 10.1109/72.298227
Ozturk MC, 2007, NEURAL NETWORKS, V20, P377, DOI 10.1016/j.neunet.2007.04.012
Pfeifer R., 2001, UNDERSTANDING INTELL
Rizzuto DS, 2001, NEURAL COMPUT, V13, P2075, DOI 10.1162/089976601750399317
Sakurai N, 2002, IEEE IJCNN, P950, DOI 10.1109/IJCNN.2002.1005603
Shen F, 2006, NEURAL NETWORKS, V19, P694, DOI 10.1016/j.neunet.2005.05.001
Furao Shen, 2007, NEURAL NETWORKS, V20, P893
Shen FR, 2006, NEURAL NETWORKS, V19, P90, DOI 10.1016/j.neunet.2005.04.006
Sudo A, 2009, IEEE T NEURAL NETWOR, V20, P964, DOI 10.1109/TNN.2009.2014374
Sussner P, 2006, IEEE T NEURAL NETWOR, V17, P559, DOI 10.1109/TNN.2006.873280
Yamada T., 1999, P 1999 INT JOINT C N, P1920
Zhang B-L., 2006, J MULTIMEDIA, V1, P1
NR 31
TC 10
Z9 14
U1 2
U2 11
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0925-2312
J9 NEUROCOMPUTING
JI Neurocomputing
PD MAR 15
PY 2013
VL 104
BP 57
EP 71
DI 10.1016/j.neucom.2012.10.003
PG 15
WC Computer Science, Artificial Intelligence
SC Computer Science
GA 106RX
UT WOS:000316163500006
DA 2018-01-22
ER

PT J
AU Liu, CR
Ishi, CT
Ishiguro, H
Hagita, N
AF Liu, Chaoran
Ishi, Carlos T.
Ishiguro, Hiroshi
Hagita, Norihiro
TI GENERATION OF NODDING, HEAD TILTING AND GAZING FOR HUMAN-ROBOT SPEECH
INTERACTION
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Head motion; dialogue acts; gazing; motion generation
ID JOINT ATTENTION; MOTION; EXPRESSION; ANIMATION; DIRECTION
AB Head motion occurs naturally and in synchrony with speech during human dialogue
communication, and may carry paralinguistic information, such as intentions,
attitudes and emotions. Therefore, natural-looking head motion by a robot is
important for smooth human-robot interaction. Based on rules inferred from analyses
of the relationship between head motion and dialogue acts, this paper proposes a
model for generating head tilting and nodding, and evaluates the model using three
types of humanoid robot (a very human-like android, "Geminoid F", a typical
humanoid robot with fewer facial degrees of freedom, "Robovie R2", and a robot with
a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective
scores shows that the proposed model including head tilting and nodding can
generate head motion with increased naturalness compared to nodding only or
directly mapping people's original motions without gaze information. We also find
that an upward motion of a robot's face can be used by robots which do not have a
mouth in order to provide the appearance that utterance is taking place. Finally,
we conduct an experiment in which participants act as visitors to an information
desk attended by robots. As a result, we verify that our generation model
performs as well as directly mapping people's original motions with gaze
information in terms of perceived naturalness.
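As a toy illustration of the kind of rule-based generation described above, the mapping below from dialogue-act tags to nod and tilt commands is an assumption made for the example; the paper derives its rules from analyses of recorded human dialogue rather than from a hand-written table.

    # Toy rule table: dialogue act -> nod amplitude and head tilt in degrees.
    # The specific tags and angles are illustrative assumptions.
    HEAD_MOTION_RULES = {
        "affirmation": {"nod_deg": 15, "tilt_deg": 0},
        "backchannel": {"nod_deg": 8,  "tilt_deg": 0},
        "question":    {"nod_deg": 0,  "tilt_deg": 10},
        "thinking":    {"nod_deg": 0,  "tilt_deg": 15},
        "statement":   {"nod_deg": 5,  "tilt_deg": 0},
    }

    def generate_head_motion(dialogue_act, gazing_at_partner, has_mouth=True):
        # Return a head-motion command for one utterance segment.
        motion = dict(HEAD_MOTION_RULES.get(dialogue_act,
                                            {"nod_deg": 0, "tilt_deg": 0}))
        if not gazing_at_partner:
            motion["tilt_deg"] = 0        # suppress tilts when gaze is averted
        if not has_mouth:
            motion["pitch_up_deg"] = 5    # upward face motion signals that the robot is speaking
        return motion

    print(generate_head_motion("question", gazing_at_partner=True, has_mouth=False))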
C1 [Liu, Chaoran; Ishi, Carlos T.; Hagita, Norihiro] ATR Intelligent Robot & Commun
Labs, Kyoto 6190288, Japan.
[Ishiguro, Hiroshi] ATR Hiroshi Ishiguro Labs, Kyoto 6190288, Japan.
RP Liu, CR (reprint author), ATR Intelligent Robot & Commun Labs, Kyoto 6190288,
Japan.
EM chaoran.liu@atr.jp; carlos@atr.jp; ishiguro@sys.es.osaka-u.ac.jp;
hagita@atr.jp
FU JST CREST
FX This work was supported by JST CREST.
CR Beskow J, 2006, INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN
LANGUAGE PROCESSING, VOLS 1-5, P1272
Brooks AG, 2007, AUTON ROBOT, V22, P55, DOI 10.1007/s10514-006-9005-8
Busso C, 2007, IEEE T AUDIO SPEECH, V15, P1075, DOI 10.1109/TASL.2006.885910
DEBOER MM, 1979, CHILD DEV, V50, P1215, DOI 10.1111/j.1467-8624.1979.tb02487.x
Foster ME, 2007, LANG RESOUR EVAL, V41, P305, DOI 10.1007/s10579-007-9055-3
GRAF HP, 2002, IEEE C AUT FAC GEST, P381
Ishi CT, 2006, INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN
LANGUAGE PROCESSING, VOLS 1-5, P2006
ISHI CT, 2007, IEEE RSJ INT C INT R, P548
Ishi C. T., 2011, 11 INT C AUD VIS SPE, P131
Ishi C. T., 2010, ACM IEEE INT C HUM R, P293
Iwano Y, 1996, ICSLP 96 - FOURTH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE
PROCESSING, PROCEEDINGS, VOLS 1-4, P2167, DOI 10.1109/ICSLP.1996.607233
Kaplan F, 2006, INTERACT STUD, V7, P135, DOI 10.1075/is.7.2.04kap
Langton SRH, 2000, TRENDS COGN SCI, V4, P50, DOI 10.1016/S1364-6613(99)01436-9
Morency LP, 2007, ARTIF INTELL, V171, P568, DOI 10.1016/j.artint.2007.04.003
Munhall KG, 2004, PSYCHOL SCI, V15, P133, DOI 10.1111/j.0963-
7214.2004.01502010.x
Nagai Y, 2006, ADV ROBOTICS, V20, P1165, DOI 10.1163/156855306778522497
Sargin M. E., 2006, IEEE INT C MULT EXP, P893
Sidner C. L., 2006, ACM SIGCHI SIGART C, P290
Yehia HC, 2002, J PHONETICS, V30, P555, DOI 10.1006/jpho.2002.0165
NR 19
TC 3
Z9 3
U1 0
U2 9
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2013
VL 10
IS 1
AR 1350009
DI 10.1142/S0219843613500096
PG 19
WC Robotics
SC Robotics
GA 122IK
UT WOS:000317311600006
DA 2018-01-22
ER

PT J
AU Ratsamee, P
Mae, Y
Ohara, K
Takubo, T
Arai, T
AF Ratsamee, Photchara
Mae, Yasushi
Ohara, Kenichi
Takubo, Tomohito
Arai, Tatsuo
TI HUMAN-ROBOT COLLISION AVOIDANCE USING A MODIFIED SOCIAL FORCE MODEL WITH
BODY POSE AND FACE ORIENTATION
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Smooth collision avoidance; modified social force model; face
orientation estimation
ID PERSONAL-SPACE
AB The ability of robots to understand human characteristics and to make themselves
socially accepted by humans is an important issue if smooth collision avoidance
between humans and robots is to be achieved. For smooth collision avoidance, a
robot should understand not only physical components, such as human position, but
also social components, such as body pose, face orientation and proxemics (personal
space during motion). We integrated these components into a modified social force
model (MSFM), which allows robots to predict human motion and perform smooth
collision avoidance. In the modified model, the short-term intended direction is
described by body pose, and a supplementary force related to face orientation is
added for intention estimation. Face orientation is also the best indication of the
direction of personal space during motion, which was verified in preliminary
experiments. Our approach was implemented and tested on a real humanoid robot in a
situation in which a human is confronted with the robot in an indoor environment.
Experimental results showed that better human motion tracking was achieved with
body pose and face orientation tracking. Provided with the face orientation as an
indication of the intended direction, and observing the laws of proxemics in a
human-like manner, the robot was able to perform avoidance motions that were more
human-like compared to the original social force model (SFM) in a face-to-face
confrontation.
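The force model referenced above can be sketched as the classic exponential social-force repulsion plus an extra term aligned with the observed face orientation; the coefficients and the exact form of the supplementary force below are assumptions, since the abstract does not specify them.

    import numpy as np

    def msfm_force(robot_pos, human_pos, human_face_dir,
                   a=2.0, b=0.35, k_face=1.0):
        # Repulsive force on the robot from one human (2-D sketch).
        # First term: Helbing-style repulsion decaying exponentially with distance.
        # Second term: a supplementary force along the human's face orientation,
        # standing in for the face-orientation term of the modified model
        # (exact form assumed for this illustration).
        diff = np.asarray(robot_pos, dtype=float) - np.asarray(human_pos, dtype=float)
        dist = np.linalg.norm(diff)
        repulsion = a * np.exp(-dist / b) * (diff / dist)

        face_dir = np.asarray(human_face_dir, dtype=float)
        face_dir = face_dir / np.linalg.norm(face_dir)
        # Push the robot harder when it stands inside the forward-extended
        # personal space in front of the human's face.
        frontness = max(0.0, float(np.dot(face_dir, diff / dist)))
        supplementary = k_face * frontness * np.exp(-dist / b) * face_dir
        return repulsion + supplementary

    force = msfm_force(np.array([1.0, 0.2]), np.array([0.0, 0.0]), np.array([1.0, 0.0]))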
C1 [Ratsamee, Photchara; Mae, Yasushi; Ohara, Kenichi; Arai, Tatsuo] Osaka Univ,
Grad Sch Engn Sci, Div Syst Sci & Appl Informat, Suita, Osaka 565, Japan.
[Takubo, Tomohito] Osaka City Univ, Grad Sch Engn, Dept Phys Elect & Informat,
Sumiyoshi Ku, Osaka 5588585, Japan.
RP Ratsamee, P (reprint author), Osaka Univ, Grad Sch Engn Sci, Div Syst Sci & Appl
Informat, Suita, Osaka 565, Japan.
EM ratsamee@arailab.sakura.ne.jp; mae@sys.es.osaka-u.ac.jp;
k-oohara@arai-lab.sys.es.osaka-u.ac.jp; takubo@info.eng.osaka-cu.ac.jp;
arai@sys.es.osaka-u.ac.jp
FU [23500242]
FX This work was supported in part by the Grant-in-Aid for Scientific
Research (C) 23500242.
CR ARGYLE M, 1965, SOCIOMETRY, V28, P289, DOI 10.2307/2786027
Argyle M, 1988, BODILY COMMUNICATION
Bar-Shalom Y., 1995, MULTITARGET MULTISEN
BORENSTEIN J, 1989, IEEE T SYST MAN CYB, V19, P1179, DOI 10.1109/21.44033
Deb K, 2002, IEEE T EVOLUT COMPUT, V6, P182, DOI 10.1109/4235.996017
DEMENTHON DF, 1995, INT J COMPUT VISION, V15, P123, DOI 10.1007/BF01450852
Ess A, 2009, IEEE T PATTERN ANAL, V31, P1831, DOI 10.1109/TPAMI.2009.109
Gerin-Lajoie M, 2005, MOTOR CONTROL, V9, P242, DOI 10.1123/mcj.9.3.242
HALL Edward T., 1966, HIDDEN DIMENSION
HELBING D, 1995, PHYS REV E, V51, P4282, DOI 10.1103/PhysRevE.51.4282
Kaya N, 1999, J ENVIRON PSYCHOL, V19, P183, DOI 10.1006/jevp.1999.0125
Lam C. P., 2011, IEEE T ROBOT, V27, P1
Luber M., 2011, IEEE RSJ INT C INT R
Luber M, 2010, IEEE INT CONF ROBOT, P464, DOI 10.1109/ROBOT.2010.5509779
Muller J., 2008, P INT C COGN SYST KA
Pellegrini S, 2009, IEEE I CONF COMP VIS, P261, DOI 10.1109/ICCV.2009.5459260
Ratsamee P., 2011, P M IM REC UND MIRU2, P290
Ratsamee P, 2012, P INT C HUM ROB INT, P215
Ratsamee P., 2012, P INT C MECH AUT CHE
Ruddarraju R., 2003, P 5 INT C MULT INT, P227
Scandolo L., 2011, IEEE INT C ROB AUT I, P809
Shi J., 1994, P CVPR 1994, V1, P593, DOI DOI 10.1109/CVPR.1994.323794
Shotton J, 2011, PROC CVPR IEEE, P1297, DOI 10.1109/CVPR.2011.5995316
Sisbot EA, 2007, IEEE T ROBOT, V23, P874, DOI 10.1109/TRO.2007.904911
Spinello L., 2011, P IEEE RSJ INT C INT
Takayama L, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P5495, DOI 10.1109/IROS.2009.5354145
Tamura Y, 2010, IEEE INT C INT ROBOT, P3887, DOI 10.1109/IROS.2010.5649673
Viola P, 2004, INT J COMPUT VISION, V57, P137, DOI
10.1023/B:VISI.0000013087.49260.fb
Xiao J, 2003, INT J IMAG SYST TECH, V13, P85, DOI 10.1002/ima.10048
NR 29
TC 10
Z9 10
U1 1
U2 12
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2013
VL 10
IS 1
AR 1350008
DI 10.1142/S0219843613500084
PG 24
WC Robotics
SC Robotics
GA 122IK
UT WOS:000317311600005
DA 2018-01-22
ER

PT J
AU Trovato, G
Zecca, M
Kishi, T
Endo, N
Hashimoto, K
Takanishi, A
AF Trovato, Gabriele
Zecca, Massimiliano
Kishi, Tatsuhiro
Endo, Nobutsuna
Hashimoto, Kenji
Takanishi, Atsuo
TI GENERATION OF HUMANOID ROBOT'S FACIAL EXPRESSIONS FOR CONTEXT-AWARE
COMMUNICATION
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Facial expressions; humanoid robotics; emotion recognition
ID EMOTION; DESIGN; VOICE; HEAD; FACE
AB Communication between humans and robots is a very important aspect in the field
of Humanoid Robotics. For a natural interaction, robots capable of nonverbal
communication must be developed. However, despite the most recent efforts, robots
still can show only limited expression capabilities. The purpose of this work is to
create a facial expression generator that can be applied to the 24 DoF head of the
humanoid robot KOBIAN-R. In this manuscript, we present a system that, based on
relevant studies of human communication and facial anatomy, can produce thousands of
combinations of facial and neck movements. The wide range of expressions covers not
only primary emotions, but also complex or blended ones, as well as communication
acts that are not strictly categorized as emotions. Results showed that the
recognition rate of expressions produced by this system is comparable to the rate
of recognition of the most common facial expressions. Context-based recognition,
which is especially important in case of more complex communication acts, was also
evaluated. Results proved that the produced robotic expressions can alter the meaning
of a sentence in the same way as human expressions do. We conclude that our system
can successfully improve the communication abilities of KOBIAN-R, making it capable
of complex interaction in the future.
C1 [Trovato, Gabriele] Waseda Univ, Grad Sch Sci & Engn, Shinjuku Ku, Tokyo
1620044, Japan.
[Zecca, Massimiliano; Hashimoto, Kenji] Waseda Univ, Fac Sci & Engn, Tokyo
1620044, Japan.
[Zecca, Massimiliano; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Tokyo
1620044, Japan.
[Kishi, Tatsuhiro; Endo, Nobutsuna] Waseda Univ, Grad Sch Sci & Engn, Tokyo
1620044, Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Tokyo 1620044, Japan.
RP Trovato, G (reprint author), Waseda Univ, Grad Sch Sci & Engn, Shinjuku Ku, 41-
304,17 Kikui Cho, Tokyo 1620044, Japan.
EM contact@takanishi.mech.waseda.ac.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766; Zecca,
Massimiliano/0000-0003-4741-4334
FU RoboSoM project from the European FP7 program [248366]; Global COE
Program "Global Robot Academia" from the Ministry of Education, Culture,
Sports, Science and Technology of Japan; SolidWorks Japan K.K.; NiKKi
Fron Co.; Chukoh Chemical Industries; DYDEN Corporation
FX This study was conducted as part of the Research Institute for Science
and Engineering, Waseda University, and as part of the humanoid project
at the Humanoid Robotics Institute, Waseda University. It was also
supported in part by RoboSoM project from the European FP7 program
(Grant agreement No.: 248366), Global COE Program "Global Robot
Academia" from the Ministry of Education, Culture, Sports, Science and
Technology of Japan, SolidWorks Japan K.K., NiKKi Fron Co., Chukoh
Chemical Industries, and DYDEN Corporation, whom we thank for their
financial and technical support.
CR Argyle M., 1998, BODILY COMMUNICATION
Argyle M., 1980, PERSON PERSON WAYS C
Aviezer H, 2008, PSYCHOL SCI, V19, P724, DOI 10.1111/j.1467-9280.2008.02148.x
Bates J., 1994, COMMUN ACM SPECIAL I
Beck A., 2010, P 3 INT WORKSH AFF I
Beira R, 2006, IEEE INT CONF ROBOT, P94, DOI 10.1109/ROBOT.2006.1641167
Bickmore T.W., 2003, THESIS MIT
Bimler D. L., 2009, SPAN J PSYCHOL, V9, P19
Birdwhistell R., 1970, KINESICS AND CONTEXT
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Cassell J., 1994, P SIGGRAPH
CASSELL J, 2002, P IM 02 MONT CARL
de Gelder B, 2011, HDB FACE PERCEPTION, P535
Dolan RJ, 2001, P NATL ACAD SCI USA, V98, P10006, DOI 10.1073/pnas.171288598
Douglas-Cowie E., HUMAINE PROJECT MIDT
Duck S, 1998, HUMAN RELATIONSHIPS
Ekman P, 2002, FACIAL ACTION CODING
Endo N., 2011, J ROBOT MECHATRONICS, V23
Fussell S. R., 2002, VERBAL COMMUNICATION
Hager J. C., 1983, J SOCIAL PSYCHOPHYSI
Harper Robert G., 1978, NONVERBAL COMMUNICAT
ITOH K, 2006, CISM COUR L, P255
Kishi T., 2012, P 19 CISMIFTOMM S RO
Knapp Mark, 1980, ESSENTIALS NONVERBAL
KNUDSEN HR, 1983, J NONVERBAL BEHAV, V7, P202, DOI 10.1007/BF00986266
Kobayashi H., 2002, IEEE INT S MICR HUM
Marcos S, 2011, IEEE INT CONF ROBOT, P1199
Marcos S, 2010, INTERACT COMPUT, V22, P176, DOI 10.1016/j.intcom.2009.12.002
Massaro DW, 1996, PSYCHON B REV, V3, P215, DOI 10.3758/BF03212421
MEHRABIA.A, 1969, J CONSULT CLIN PSYCH, V33, P330, DOI 10.1037/h0027576
MEHRABIAN A, 1967, J PERS SOC PSYCHOL, V6, P109, DOI 10.1037/h0024532
Mehrabian A., 1971, SILENT MESSAGES
Miwa H., 2002, IEEE RSJ INT C INT R, P2443
Ogura Y, 2006, IEEE INT CONF ROBOT, P76
Oh J. H., 2006, P IEEE RSJ INT C INT
PATTERSON ML, 1986, J NONVERBAL BEHAV, V10, P41, DOI 10.1007/BF00987204
Pelachaud C, 2002, J VISUAL COMP ANIMAT, V13, P301, DOI 10.1002/vis.299
Plutchik R, 2002, EMOTIONS LIFE PERSPE
Poggi I., 2000, EMBODIED CONVERSATIO
Poggi I, 2006, PAROLE CORPO
Poggi I., 1999, CSNLP 8 COGNITIVE SC, P67
Poggi I, 2001, MULTIMODALITY HUMAN
POGGI I, 2002, LANGUAGE VISION MUSI, P133
Poutois G., 2005, CORTEX, V41, P49
Raudys S, 1998, PATTERN RECOGN LETT, V19, P385, DOI 10.1016/S0167-8655(98)00016-
6
Ribeiro T., 2012, P 7 ANN ACM IEEE INT
Saldien J, 2010, INT J SOC ROBOT, V2, P377, DOI 10.1007/s12369-010-0067-6
Smith C. A., 1997, PSYCHOL FACIAL EXPRE, P229, DOI
[10.1017/CBO9780511659911.012, DOI 10.1017/CBO9780511659911.012]
Stoiber N, 2010, COMPUT ANIMAT VIRT W, V21, P39, DOI 10.1002/cav.331
Trovato G., 2012, INT C SOC ROB CHENGD
Trovato G., 2012, IEEE RAS INT C HUM R
Vinayagamoorthy Vinoba, 2006, EUR C STAT ART REP
Weitz S., 1974, NONVERBAL COMMUNICAT
NR 53
TC 2
Z9 2
U1 0
U2 6
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2013
VL 10
IS 1
AR 1350013
DI 10.1142/S0219843613500138
PG 23
WC Robotics
SC Robotics
GA 122IK
UT WOS:000317311600009
DA 2018-01-22
ER

PT J
AU Ha, I
Tamura, Y
Asama, H
AF Ha, Inyong
Tamura, Yusuke
Asama, Hajime
TI Development of open platform humanoid robot DARwIn-OP
SO ADVANCED ROBOTICS
LA English
DT Article
DE open platform; humanoid; DARwIn-OP
AB To develop an appropriate research platform, this paper presents a design method
for a humanoid that has a network-based modular structure and a standard PC
architecture. Based on the proposed method, we developed DARwIn-OP, which meets the
requirements for an open humanoid platform: an expandable system structure, high
performance, simple maintenance, a familiar development environment, and an affordable
price. Not only the hardware but also the software aspects of the open humanoid
platform are discussed in this paper.
C1 [Ha, Inyong; Tamura, Yusuke; Asama, Hajime] Univ Tokyo, Dept Precis Engn, Bunkyo
Ku, Tokyo 1138656, Japan.
RP Ha, I (reprint author), Univ Tokyo, Dept Precis Engn, Bunkyo Ku, 7-3-1 Hongo,
Tokyo 1138656, Japan.
EM hainyong@robot.t.u-tokyo.ac.jp
OI Tamura, Yusuke/0000-0003-2981-7578
CR Cyberbotics Ltd, WEB ROB SIM OV
Fujitsu Laboratories, HIST ROB FUJ LAB
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
HIROSE M, 2001, P ADV SCI I 2001, P1
Ishida T, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P297
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kaneko K, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2471, DOI 10.1109/IROS.2008.4650604
Kaneko K, 2011, IEEE INT C INT ROBOT, P4400, DOI 10.1109/IROS.2011.6048074
Kato I, 1973, BIOMECHANICS, P173
KUROKI Y, 2001, P 2001 INT MICR HUM, P3, DOI DOI 10.1109/MHS.2001.965213
ROBOTIS Co. Ltd, ROBOTIS ROB ACT DYN
Sugano S., 1987, Proceedings of the 1987 IEEE International Conference on
Robotics and Automation (Cat. No.87CH2413-3), P90
NR 12
TC 8
Z9 8
U1 0
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD FEB 1
PY 2013
VL 27
IS 3
BP 223
EP 232
DI 10.1080/01691864.2012.754079
PG 10
WC Robotics
SC Robotics
GA 278ZK
UT WOS:000328928200006
DA 2018-01-22
ER

PT J
AU Ijspeert, AJ
Nakanishi, J
Hoffmann, H
Pastor, P
Schaal, S
AF Ijspeert, Auke Jan
Nakanishi, Jun
Hoffmann, Heiko
Pastor, Peter
Schaal, Stefan
TI Dynamical Movement Primitives: Learning Attractor Models for Motor
Behaviors
SO NEURAL COMPUTATION
LA English
DT Article
ID MULTIJOINT MOVEMENT; OBSTACLE AVOIDANCE; PATTERN GENERATION; HUMANOID
ROBOTS; ARM STIFFNESS; COORDINATION; SYSTEMS; OSCILLATORS; LOCOMOTION;
IMITATION
AB Nonlinear dynamical systems have been used in many disciplines to model complex
behaviors, including biological motor control, robotics, perception, economics,
traffic prediction, and neuroscience. While often the unexpected emergent behavior
of nonlinear systems is the focus of investigations, it is of equal importance to
create goal-directed behavior (e.g., stable locomotion from a system of coupled
oscillators under perceptual guidance). Modeling goal-directed behavior with
nonlinear systems is, however, rather difficult due to the parameter sensitivity of
these systems, their complex phase transitions in response to subtle parameter
changes, and the difficulty of analyzing and predicting their long-term behavior;
intuition and time-consuming parameter tuning play a major role. This letter
presents and reviews dynamical movement primitives, a line of research for modeling
attractor behaviors of autonomous nonlinear dynamical systems with the help of
statistical learning techniques. The essence of our approach is to start with a
simple dynamical system, such as a set of linear differential equations, and
transform those into a weakly nonlinear system with prescribed attractor dynamics
by means of a learnable autonomous forcing term. Both point attractors and limit
cycle attractors of almost arbitrary complexity can be generated. We explain the
design principle of our approach and evaluate its properties in several example
applications in motor control and robotics.
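   As a concrete pointer for readers, the following is a minimal illustrative sketch (in
Python) of the point-attractor formulation reviewed above: a linear spring-damper
transformation system driven by a phase-dependent, learnable forcing term. The gains,
basis-function placement, and weights are assumptions for illustration, not the
authors' values.

   import numpy as np

   def rollout_dmp(y0, g, weights, centers, widths, tau=1.0, dt=0.001, T=1.0,
                   alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
       """Integrate one discrete DMP from start y0 toward goal g (Euler integration)."""
       y, z, x = y0, 0.0, 1.0                      # position, scaled velocity, phase
       traj = []
       for _ in range(int(T / dt)):
           psi = np.exp(-widths * (x - centers) ** 2)           # Gaussian basis activations
           f = x * (g - y0) * np.dot(psi, weights) / (psi.sum() + 1e-10)  # forcing term
           z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)  # transformation system
           y += dt / tau * z
           x += dt / tau * (-alpha_x * x)                          # canonical system decays
           traj.append(y)
       return np.array(traj)

   # With zero weights the forcing term vanishes and the DMP reduces to a critically
   # damped spring converging to the goal; learned weights shape the transient.
   centers = np.linspace(1.0, 0.01, 10)
   path = rollout_dmp(y0=0.0, g=1.0, weights=np.zeros(10), centers=centers,
                      widths=np.full(10, 50.0))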
C1 [Ijspeert, Auke Jan] Ecole Polytech Fed Lausanne, CH-1015 Lausanne, Switzerland.
[Nakanishi, Jun] Univ Edinburgh, Sch Informat, Edinburgh EH8 9AB, Midlothian,
Scotland.
[Hoffmann, Heiko; Pastor, Peter; Schaal, Stefan] Univ So Calif, Los Angeles, CA
90089 USA.
[Schaal, Stefan] Max Planck Inst Intelligent Syst, D-72076 Tubingen, Germany.
[Schaal, Stefan] ATR Computat Neurosci Labs, Kyoto 6190288, Japan.
RP Ijspeert, AJ (reprint author), Ecole Polytech Fed Lausanne, CH-1015 Lausanne,
Switzerland.
EM auke.ijspeert@epfl.ch; jun.nakanishi@ed.ac.uk; heikohof@usc.edu;
pastorsa@usc.edu; sschaal@usc.edu
OI Hoffmann, Heiko/0000-0003-3817-5626
FU European Commission [AMARSI FP7-ICT-248311]; National Science Foundation
[ECS-0326095, IIS-0535282, CNS-0619937, IIS-0917318, CBET-0922784,
EECS-0926052]; DARPA program on Learning Locomotion; Okawa Foundation;
ATR Computational Neuroscience Laboratories; German Research Foundation
(DFG) [HO-3771-1]
FX This research was supported in part by the European Commission grant
AMARSI FP7-ICT-248311 and by National Science Foundation grants
ECS-0326095, IIS-0535282, CNS-0619937, IIS-0917318, CBET-0922784,
EECS-0926052, the DARPA program on Learning Locomotion, the Okawa
Foundation, and the ATR Computational Neuroscience Laboratories. H.H.
was funded by a grant of the German Research Foundation (DFG) HO-3771-1.
CR Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Bernstein N., 1967, CONTROL REGULATION M
BILLARD A, 2001, ROBOTICS AUTONOMOUS, V941, P1
Billard A., 2008, HDB ROBOTICS
Bishop C. M., 2006, PATTERN RECOGNITION
Buchli J, 2006, BIOL CYBERN, V95, P645, DOI 10.1007/s00422-006-0128-y
Buhler M., 1990, P IEEE INT C ROB AUT, P845
BULLOCK D, 1989, VOLITIONAL ACTION, P253
Burridge RR, 1999, INT J ROBOT RES, V18, P534, DOI 10.1177/02783649922066385
Chevallereau C, 2005, INT J ROBOT RES, V24, P431, DOI 10.1177/0278364905054929
Dayan P, 2001, THEORETICAL NEUROSCI
DIJKSTRA TMH, 1994, BIOL CYBERN, V71, P489, DOI 10.1007/s004220050108
Fajen BR, 2003, J EXP PSYCHOL HUMAN, V29, P343, DOI 10.1037/0096-1523.29.2.343
FLASH T, 1985, J NEUROSCI, V5, P1688
Flash T, 2001, CURR OPIN NEUROBIOL, V11, P655, DOI 10.1016/S0959-4388(01)00265-3
Friedland Bernard, 1986, CONTROL SYSTEM DESIG
Gams A, 2009, AUTON ROBOT, V27, P3, DOI 10.1007/s10514-009-9118-y
Getting P.A., 1988, NEURAL CONTROL RHYTH, P101
GISZTER SF, 1993, J NEUROSCI, V13, P467
Gomi H, 1997, BIOL CYBERN, V76, P163, DOI 10.1007/s004220050329
Gomi H, 1996, SCIENCE, V272, P117, DOI 10.1126/science.272.5258.117
Gribovskaya E, 2011, INT J ROBOT RES, V30, P80, DOI 10.1177/0278364910376251
Grillner S., 1981, HDB PHYSL 1, VI, P1179
Guckenheimer J., 1983, NONLINEAR OSCILLATOR
HAKEN H, 1990, NEURAL NETWORKS, V3, P395, DOI 10.1016/0893-6080(90)90022-D
Hatsopoulos NG, 1996, J MOTOR BEHAV, V28, P3, DOI 10.1080/00222895.1996.9941728
Hoffmann H., 2009, ICRA, P2587, DOI DOI 10.1109/R0B0T.2009.5152423
HOLLERBACH JM, 1984, T ASME, V106, P139
Ijspeert A. J., 2003, ADV NEURAL INFORM PR, V15, P1547
Ijspeert AJ, 2001, BIOL CYBERN, V84, P331, DOI 10.1007/s004220000211
Ijspeert AJ, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P958, DOI 10.1109/IRDS.2002.1041514
Ijspeert AJ, 1999, ADAPT BEHAV, V7, P151, DOI 10.1177/105971239900700202
Ijspeert A. J., 2002, IEEE INT C ROB AUT, P1398
Ijspeert AJ, 2008, NEURAL NETWORKS, V21, P642, DOI 10.1016/j.neunet.2008.03.014
Jackson E. A., 1989, PERSPECTIVES NONLINE
Jaeger H, 2004, SCIENCE, V304, P78, DOI 10.1126/science.1091277
Joshi P, 2005, NEURAL COMPUT, V17, P1715, DOI 10.1162/0899766054026684
Kawato M, 1996, ADV MOTOR LEARNING C, P225
KELSO JAS, 1988, PHYS LETT A, V134, P8, DOI 10.1016/0375-9601(88)90537-3
Kelso JAS, 1995, DYNAMIC PATTERNS SEL
Khansari-Zadeh SM, 2010, IEEE INT C INT ROBOT, P2676, DOI
10.1109/IROS.2010.5651259
KHATIB O, 1986, INT J ROBOT RES, V5, P90, DOI 10.1177/027836498600500106
Klavins E, 2001, IEEE INT CONF ROBOT, P4200, DOI 10.1109/ROBOT.2001.933274
Kober Jens, 2009, P IEEE INT C ROB AUT, P2112
Koditschek D. E., 1987, P IEEE INT C ROB AUT, P211
Kugler P. N., 1987, INFORM NATURAL LAW S
Kulvicius T, 2012, IEEE T ROBOT, V28, P145, DOI 10.1109/TRO.2011.2163863
Latash ML, 1993, CONTROL HUMAN MOVEME
Li PY, 1999, IEEE T ROBOTIC AUTOM, V15, P751, DOI 10.1109/70.782030
Lohmiller W, 1998, AUTOMATICA, V34, P683, DOI 10.1016/S0005-1098(98)00019-3
Maass W, 2002, NEURAL COMPUT, V14, P2531, DOI 10.1162/089976602760407955
MATTHEWS PC, 1991, PHYSICA D, V52, P293, DOI 10.1016/0167-2789(91)90129-W
McCrea DA, 2008, BRAIN RES REV, V57, P134, DOI 10.1016/j.brainresrev.2007.08.006
Miyamoto H, 1996, NEURAL NETWORKS, V9, P1281, DOI 10.1016/S0893-6080(96)00043-3
Mussa-Ivaldi FA, 1999, CURR OPIN NEUROBIOL, V9, P713, DOI 10.1016/S0959-
4388(99)00029-X
MussaIvaldi FA, 1997, 1997 IEEE INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL
INTELLIGENCE IN ROBOTICS AND AUTOMATION - CIRA '97, PROCEEDINGS, P84, DOI
10.1109/CIRA.1997.613842
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Okada M, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1410, DOI 10.1109/ROBOT.2002.1014741
Paine RW, 2004, NEURAL NETWORKS, V17, P1291, DOI 10.1016/j.neunet.2004.08.005
Pastor P., 2009, IEEE INT C ROB AUT, P763
Perk B. E., 2006, ARXIVCS0609140V2CSRO
Peters J, 2008, NEURAL NETWORKS, V21, P682, DOI 10.1016/j.neunet.2008.02.003
Pongas D., 2005, P IEEE INT C INT ROB, P2911
Righetti L, 2006, PHYSICA D, V216, P269, DOI [10.1016/j.physd.2006.02.009,
10.1016/j.phvsd.2006.02.009]
RIMON E, 1992, IEEE T ROBOTIC AUTOM, V8, P501, DOI 10.1109/70.163777
RIZZI AA, 1994, IEEE INT CONF ROBOT, P2935, DOI 10.1109/ROBOT.1994.350893
Rizzolatti G, 1998, TRENDS NEUROSCI, V21, P188, DOI 10.1016/S0166-2236(98)01260-
0
SAKOE H, 1978, IEEE T ACOUST SPEECH, V26, P43, DOI 10.1109/TASSP.1978.1163055
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 2004, NAT NEUROSCI, V7, P1137, DOI 10.1038/nn1322
Schaal S, 1998, NEURAL COMPUT, V10, P2047, DOI 10.1162/089976698300016963
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
SCHAAL S, 1998, P 3 INT C COMP INT N, P48
SCHAAL S, 1994, ADV NEURAL INFORMATI, V6, P160
Schaal S, 2007, PROG BRAIN RES, V165, P425, DOI 10.1016/S0079-6123(06)65027-9
SCHONER G, 1990, BIOL CYBERN, V63, P257, DOI 10.1007/BF00203449
SCHONER G, 1988, SCIENCE, V239, P1513, DOI 10.1126/science.3281253
SchOner G., 2001, P 9 INT S INT ROB SY
Sciavicco L., 2000, MODELLING CONTROL RO
Scott A., 2005, ENCY NONLINEAR SCI
Slotine J.J.E., 1991, APPL NONLINEAR CONTR
Sternad D, 1996, J MOTOR BEHAV, V28, P255, DOI 10.1080/00222895.1996.9941750
Strogatz S.H., 1994, NONLINEAR DYNAMICS C
Swinnen SP, 2004, J MOTOR BEHAV, V36, P394, DOI 10.1080/00222895.2004.11008005
TAGA G, 1991, BIOL CYBERN, V65, P147, DOI 10.1007/BF00198086
Thelen E., 1994, DYNAMICAL SYSTEMS AP
Theodorou E., 2010, J MACHINE LEARNING R, P3137
Todorov E, 2004, NAT NEUROSCI, V7, P907, DOI 10.1038/nn1309
Tsuji T, 2002, IEEE T SYST MAN CY C, V32, P426, DOI 10.1109/TSMCC.2002.807273
TURVEY MT, 1990, AM PSYCHOL, V45, P938, DOI 10.1037/0003-066X.45.8.938
Ude A, 2010, IEEE T ROBOT, V26, P800, DOI 10.1109/TRO.2010.2065430
Wada Y, 2004, NEURAL NETWORKS, V17, P353, DOI 10.1016/j.neunet.2003.11.009
Wolpert DM, 1997, TRENDS COGN SCI, V1, P209, DOI 10.1016/S1364-6613(97)01070-X
Wyffels F, 2009, AT-EQUAL 2009: 2009 ECSIS SYMPOSIUM ON ADVANCED TECHNOLOGIES
FOR ENHANCED QUALITY OF LIFE: LAB-RS AND ARTIPED 2009, P118, DOI 10.1109/AT-
EQUAL.2009.32
NR 94
TC 237
Z9 245
U1 11
U2 77
PU MIT PRESS
PI CAMBRIDGE
PA ONE ROGERS ST, CAMBRIDGE, MA 02142-1209 USA
SN 0899-7667
EI 1530-888X
J9 NEURAL COMPUT
JI Neural Comput.
PD FEB
PY 2013
VL 25
IS 2
BP 328
EP 373
DI 10.1162/NECO_a_00393
PG 46
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA 068ZB
UT WOS:000313403600002
PM 23148415
DA 2018-01-22
ER

PT J
AU Nenchev, DN
AF Nenchev, D. N.
TI Reaction Null Space of a multibody system with applications in robotics
SO MECHANICAL SCIENCES
LA English
DT Article
ID REDUNDANCY RESOLUTION; MANIPULATOR SYSTEMS; OPTIMIZATION; VIBRATIONS;
RECOVERY; DESIGN; IMPACT; MOTION; MODEL
AB This paper provides an overview of implementation examples based on the Reaction
Null Space formalism, developed initially to tackle the problem of satellite-base
disturbance of a free-floating space robot when the robot arm is activated. The method
has been applied over the years to other unfixed-base systems, e.g., flexible-base and
macro/mini robot systems, as well as to the balance control problem of humanoid
robots. The paper also includes the most recent results on complete dynamical
decoupling of the end-link of a fixed-base robot, wherein the end-link is regarded as
the unfixed base. This interpretation is shown to be useful with regard to motion/force
control scenarios. Respective implementation results are provided.
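   For orientation, the core relation behind the Reaction Null Space can be sketched in
generic notation (the symbols below are a sketch and may differ from the paper's): base
and manipulator momenta are coupled through an inertia matrix, and arm velocities
projected onto the null space of that coupling produce no base reaction.

   \begin{align}
     \mathbf{H}_{b}\,\boldsymbol{v}_{b} + \mathbf{H}_{bm}\,\dot{\boldsymbol{q}}_{m} &= \boldsymbol{L}
       && \text{(coupled momentum of base and manipulator)} \\
     \mathbf{H}_{bm}\,\dot{\boldsymbol{q}}_{m} &= \mathbf{0}
       && \text{(condition for reactionless arm motion)} \\
     \dot{\boldsymbol{q}}_{m} &= \left(\mathbf{E} - \mathbf{H}_{bm}^{+}\mathbf{H}_{bm}\right)\boldsymbol{\xi}
       && \text{(projection of an arbitrary } \boldsymbol{\xi} \text{ onto the reaction null space)}
   \end{align}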
C1 Tokyo City Univ, Grad Sch Engn, Setagaya Ku, Tokyo 1588557, Japan.
RP Nenchev, DN (reprint author), Tokyo City Univ, Grad Sch Engn, Setagaya Ku,
Tamazutsumi 1-28-1, Tokyo 1588557, Japan.
EM nenchev@tcu.ac.jp
CR Abiko S, 2010, ADV ROBOTICS, V24, P1099, DOI 10.1163/016918610X501264
BOOK WJ, 1989, AMER CONTR CONF CONF, P1377
Borst C, 2009, 2009 IEEE INT C ROB, P1597, DOI DOI 10.1109/ROBOT.2009.5152586
Briot S, 2012, MECH MACH THEORY, V57, P1, DOI
10.1016/j.mechmachtheory.2012.06.006
Burdick J., 1989, 1989 IEEE INT C ROB, P264, DOI [10.1109/ROBOT.1989.99999, DOI
10.1109/ROBOT.1989.99999]
Cheng D., 2005, 2005 IEEE RSJ INT C, P29
Cocuzza S, 2010, ACTA ASTRONAUT, V67, P285, DOI 10.1016/j.actaastro.2009.05.007
Coleshill E, 2009, ACTA ASTRONAUT, V64, P869, DOI
10.1016/j.actaastro.2008.11.011
Cong PC, 2010, J AEROSPACE ENG, V23, P117, DOI 10.1061/(ASCE)AS.1943-
5525.0000016
Craig J. J., 2004, INTRO ROBOTICS MECH
Dimitrov D., 2004, 2004 IEEE RSJ INT C, V4, P3333, DOI
[10.1109/IROS.2004.1389931, 2004, DOI 10.1109/IROS.2004.1389931]
Elfizy AT, 2005, INT J MACH TOOL MANU, V45, P153, DOI
10.1016/j.ijmachtools.2004.07.008
Fattah A, 2005, ROBOTICA, V23, P75, DOI 10.1017/S0263574704000670
Featherstone R, 1997, INT J ROBOT RES, V16, P168, DOI 10.1177/027836499701600203
Featherstone R., 2008, RIGID BODY DYNAMICS, DOI [10.1007/978-0-387-74315-8, DOI
10.1007/978-0-387-74315-8]
Finat J., 2002, 11 IEEE INT WORKSH R, P350, DOI [10.1109/ROMAN.2002.1045647, DOI
10.1109/ROMAN.2002.1045647]
Gorce P, 1999, IEEE T SYST MAN CY A, V29, P616, DOI 10.1109/3468.798065
Gouo A, 2000, ADV ROBOTICS, V13, P617
HANSON ML, 1995, J ROBOTIC SYST, V12, P767, DOI 10.1002/rob.4620121106
Hara N., 2010, 10 INT S ART INT ROB, P214
Hara N, 2012, IEEE INT CONF ROBOT, P299, DOI 10.1109/ICRA.2012.6224627
Hyon S., 2009, 2009 IEEE INT C ROB, P1549, DOI DOI 10.1109/ROBOT.2009.5152434
Kaigom E. G., 2011, 11 S ADV SPAC TECHN, P1
Kanamiya Y, 2010, IEEE INT CONF ROBOT, P3446, DOI 10.1109/ROBOT.2010.5509785
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
KHATIB O, 1995, INT J ROBOT RES, V14, P19, DOI 10.1177/027836499501400103
Konno A, 2011, INT J ROBOT RES, V30, P1596, DOI 10.1177/0278364911405870
Konstantinov M. S., 1981, 11TH P INT S IND ROB, P561
Lampariello R, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT
ROBOTS AND SYSTEMS, VOLS 1-12, P91, DOI 10.1109/IROS.2006.281777
Lee S. H., 1990, CISM IFTOMM S THEOR, P252
LEW JY, 1995, IEEE INT CONF ROBOT, P3116, DOI 10.1109/ROBOT.1995.525728
LUH JYS, 1980, IEEE T AUTOMAT CONTR, V25, P468, DOI 10.1109/TAC.1980.1102367
Masutani Y., 1989, NASA C SPAC TEL
Morimoto H., 2001, INT C ADV ROB, P187
Murray RM, 1994, MATH INTRO ROBOTIC M
NAKAMURA Y, 1987, INT J ROBOT RES, V6, P3, DOI 10.1177/027836498700600201
NENCHEV D, 1992, IEEE T ROBOTIC AUTOM, V8, P1, DOI 10.1109/70.127234
Nenchev D, 1996, P 35 IEEE C DEC CONT, V4, P4118, DOI DOI
10.1109/CDC.1996.577417
NENCHEV DN, 1989, J ROBOTIC SYST, V6, P769, DOI 10.1002/rob.4620060607
Nenchev DN, 1999, IEEE T ROBOTIC AUTOM, V15, P1011, DOI 10.1109/70.817666
Nenchev D., 1988, IEEE INT WORKSH INT, P679, DOI DOI 10.1109/IROS.1988.593682
Nenchev DN, 2008, ROBOTICA, V26, P643, DOI 10.1017/S0263574708004268
Nguyen-Huynh T.-C., 2011, 2011 IEEE INT C ROB, P4202, DOI DOI
10.1109/ICRA.2011.5980398
Oda M., 2000, 2000 IEEE IN TC ROB, V1, P914, DOI [10.1109/ROBOT.2000.844165, DOI
10.1109/ROBOT.2000.844165]
Osumi H, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-12, P73, DOI 10.1109/IROS.2006.281669
Ott C, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-12, P4502, DOI 10.1109/IROS.2006.282539
PAPADOPOULOS E, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-3, P1696, DOI 10.1109/ROBOT.1991.131864
Parsa K, 2005, J DYN SYST-T ASME, V127, P688, DOI 10.1115/1.1870039
Piersigilli P, 2010, ACTA ASTRONAUT, V66, P183, DOI
10.1016/j.actaastro.2009.05.015
Sato F, 2011, IEEE INT C INT ROBOT, P3179, DOI 10.1109/IROS.2011.6048182
Sato N., 2001, 6 INT S ART INT ROB, P1
Sentis L, 2010, IEEE T ROBOT, V26, P483, DOI 10.1109/TRO.2010.2043757
Sharf I, 1996, J DYN SYST-T ASME, V118, P704, DOI 10.1115/1.2802346
Shui HT, 2009, 2009 INTERNATIONAL CONFERENCE ON MEASURING TECHNOLOGY AND
MECHATRONICS AUTOMATION, VOL II, P845, DOI 10.1109/ICMTMA.2009.556
SHUMWAY-COOK A, 1989, Seminars in Hearing, V10, P196
Stephens B, 2007, IEEE-RAS INT C HUMAN, P589, DOI 10.1109/ICHR.2007.4813931
Tamegaya K., 2008, HUM 2008 8 IEEE RAS, P151, DOI [10.1109/ICHR.2008.4755960,
DOI 10.1109/ICHR.2008.4755960]
TORRES MA, 1992, J GUID CONTROL DYNAM, V15, P1010, DOI 10.2514/3.20936
Torres MA, 1996, IEEE INT CONF ROBOT, P2498, DOI 10.1109/ROBOT.1996.506538
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Wimbock T, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P5481, DOI 10.1109/IROS.2009.5354528
Xu WF, 2008, J INTELL ROBOT SYST, V51, P303, DOI 10.1007/s10846-007-9192-3
Yoshida K, 1997, ADV ROBOTICS, V11, P397
Yoshida K., 2000, AIAA GUID NAV CONTR, P1297
Yoshida Y., 2011, 2011 IEEE International Conference on Robotics and Biomimetics
(ROBIO), P1825, DOI 10.1109/ROBIO.2011.6181555
Yoshikawa T., 1993, 1993 IEEE INT C ROB, P210, DOI [10.1109/ROBOT.1993.291848,
DOI 10.1109/ROBOT.1993.291848]
NR 66
TC 12
Z9 14
U1 0
U2 10
PU COPERNICUS GESELLSCHAFT MBH
PI GOTTINGEN
PA BAHNHOFSALLEE 1E, GOTTINGEN, 37081, GERMANY
SN 2191-9151
EI 2191-916X
J9 MECH SCI
JI Mech. Sci.
PY 2013
VL 4
IS 1
BP 97
EP 112
DI 10.5194/ms-4-97-2013
PG 16
WC Engineering, Mechanical
SC Engineering
GA AK3NY
UT WOS:000338332600007
OA gold
DA 2018-01-22
ER

PT J
AU Kim, UH
Okuno, HG
AF Kim, Ui-Hyun
Okuno, Hiroshi G.
TI Improved binaural sound localization and tracking for unknown
time-varying number of speakers
SO ADVANCED ROBOTICS
LA English
DT Article
DE Human-robot interaction; binaural sound localization; multisource sound
tracking
ID ROBOT AUDITION; DELAY
AB A method based on the generalized cross-correlation (GCC) method weighted by the
phase transform (PHAT) has been developed for binaural sound source localization
(SSL) and tracking of multiple sound sources. Accurate binaural audition is
important for applying inexpensive and widely applicable auditory capabilities to
robots and systems. Conventional SSL based on the GCC-PHAT method is degraded by
low resolution of the time difference of arrival estimation, by the interference
created when the sound waves arrive at a microphone from two directions around the
robot head, and by impaired performance when there are multiple speakers. The low-
resolution problem is solved by using a maximum-likelihood-based SSL method in the
frequency domain. The multipath interference problem is avoided by incorporating a
new time delay factor into the GCC-PHAT method by assuming a spherical robot head. The
performance when there are multiple speakers was improved by using a
multisource speech tracking method consisting of voice activity detection (VAD) and
K-means clustering. The standard K-means clustering algorithm was extended to
enable tracking of an unknown time-varying number of speakers by adding two
additional steps that increase the number of clusters automatically and eliminate
clusters containing incorrect direction estimations. Experiments conducted on the
SIG-2 humanoid robot show that this method outperforms the conventional SSL method;
it reduces localization errors by 18.1 degrees on average and by over 37 degrees in
the side directions. It also tracks multiple speakers in real time with tracking
errors below 4.35 degrees.
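   As a pointer to the underlying signal-processing step, here is a minimal Python
sketch of time-delay estimation with the GCC-PHAT weighting mentioned above; the
spherical-head time-delay factor, the maximum-likelihood frequency-domain refinement,
and the VAD/K-means tracking of the paper are omitted, and all parameters are
illustrative.

   import numpy as np

   def gcc_phat_delay(sig, ref, fs, max_tau=None):
       """Estimate the delay (seconds) of `sig` relative to `ref` using GCC-PHAT."""
       n = len(sig) + len(ref)
       SIG = np.fft.rfft(sig, n=n)
       REF = np.fft.rfft(ref, n=n)
       cross = SIG * np.conj(REF)
       cross /= np.abs(cross) + 1e-12              # PHAT weighting: keep phase only
       cc = np.fft.irfft(cross, n=n)
       max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
       cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))   # center zero lag
       return (np.argmax(np.abs(cc)) - max_shift) / fs

   # Synthetic check: a 5-sample delay at 16 kHz should yield about 0.3125 ms.
   fs = 16000
   ref = np.random.randn(fs)
   sig = np.concatenate((np.zeros(5), ref[:-5]))
   print(gcc_phat_delay(sig, ref, fs))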
C1 [Kim, Ui-Hyun; Okuno, Hiroshi G.] Kyoto Univ, Grad Sch Informat, Dept
Intelligence Sci & Technol, Sakyo Ku, Kyoto 6068501, Japan.
RP Kim, UH (reprint author), Kyoto Univ, Grad Sch Informat, Dept Intelligence Sci &
Technol, Sakyo Ku, Yoshida Honmachi, Kyoto 6068501, Japan.
EM euihyun@kuis.kyoto-u.ac.jp
OI Okuno, Hiroshi/0000-0002-8704-4318
FU Japan Society for the Promotion of Science (JSPS) [24220006]; JST
Japan-France Cooperative Research Project BINAAHR
FX This research was partially supported by a Grant-in-Aid for Scientific
Research (KAKENHI No. 24220006) from the Japan Society for the Promotion
of Science (JSPS) and the JST Japan-France Cooperative Research Project
BINAAHR ('Binaural Active Audition for Humanoid Robots'). The authors
would like to thank Professor Kazuhiro Nakadai of the HONDA Research
Institute and Takuma Otsuka of the Department of Intelligence Science
and Technology in the Graduate School of Informatics at Kyoto University
for their constructive comments.
CR Blauert J, 2011, P IEEE INT C DIG SIG, P1
Blauert J., 1997, SPATIAL HEARING PSYC
Cheng CI, 2001, AUDIO ENG SOC AES, V49, P231
Kim UH, 2011, IEEE INT C INT ROBOT, P2910, DOI 10.1109/IROS.2011.6048364
Kim UH, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P610, DOI 10.1109/ROMAN.2008.4600734
KNAPP CH, 1976, IEEE T ACOUST SPEECH, V24, P320, DOI 10.1109/TASSP.1976.1162830
LI XF, 2011, P IEEE RSJ INT C INT, P2879
Nakadai K, 2004, SPEECH COMMUN, V44, P97, DOI 10.1016/j.specom.2004.10.010
Nakadai K, 2010, ADV ROBOTICS, V24, P739, DOI 10.1163/016918610X493561
OMOLOGO M, 1994, INT CONF ACOUST SPEE, P273
RUI Y, 2004, ACOUST SPEECH SIG PR, P133
SASAKI Y, 2012, P IEEE RSJ INT C INT, P713
SCHMIDT RO, 1986, IEEE T ANTENN PROPAG, V34, P276, DOI 10.1109/TAP.1986.1143830
Sohn J, 1999, IEEE SIGNAL PROC LET, V6, P1, DOI 10.1109/97.736233
Valin JM, 2003, P IEEE RSJ INT C INT, P1128
Viste H., 2004, P 7 INT C DIG AUD EF, P145
Youssef K, 2012, INT CONF ACOUST SPEE, P217, DOI 10.1109/ICASSP.2012.6287856
NR 17
TC 6
Z9 6
U1 0
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2013
VL 27
IS 15
BP 1161
EP 1173
DI 10.1080/01691864.2013.812177
PG 13
WC Robotics
SC Robotics
GA AH6PH
UT WOS:000336252500003
DA 2018-01-22
ER

PT J
AU Kamide, H
Kawabe, K
Shigemi, S
Arai, T
AF Kamide, Hiroko
Kawabe, Koji
Shigemi, Satoshi
Arai, Tatsuo
TI Development of a psychological scale for general impressions of humanoid
SO ADVANCED ROBOTICS
LA English
DT Article
DE general impressions; psychological scale; humanoid
ID HUMAN-ROBOT INTERACTION
AB This study identifies the basic perspectives that ordinary people use to
evaluate humanoids (Study 1) and develops a new psychological scale to quantify
general impressions toward humanoids based on these basic perspectives (Study 2).
Then we investigate the effect of respondent attributes (sex and age) on evaluations of
humanoids using the scale (Study 3). Study 1 used 11 humanoids and collected data from
919 Japanese people, who described their natural impressions of the robots in free
text. The descriptions were categorized into several groups, and the items of the scale
were created based on them. In Study 2, 2624 Japanese respondents evaluated 11
humanoids on the developed scale. Factor analysis showed that nine factors should be
used for evaluating general impressions of humanoids: Familiarity, Repulsion,
Performance, Utility, Motion, Sound, Voice, Humanness, and Agency. The factor structure
is clear, and its reliability as a psychological scale is satisfactorily high. Study 3
reveals that sex and age relate to evaluations of humanoids; for example, middle-aged
and older females tend to rate the Familiarity and Humanness of all humanoids higher.
We discuss the usability of the scale.
C1 [Kamide, Hiroko; Arai, Tatsuo] Osaka Univ, Grad Sch Engn Sci, Toyonaka, Osaka
5608531, Japan.
[Kawabe, Koji; Shigemi, Satoshi] Honda Res & Dev Co Ltd, Wako, Saitama 3510188,
Japan.
RP Kamide, H (reprint author), Osaka Univ, Grad Sch Engn Sci, 1-3 Machikaneyamacho,
Toyonaka, Osaka 5608531, Japan.
EM kamide@arai-lab.sys.es.osaka-u.ac.jp
FU Global COE Program 'Center of Human-Friendly Robotics Based on Cognitive
Neuroscience' of the Ministry of Education, Culture, Sports, Science and
Technology, Japan
FX This work has been done in collaboration with Honda R&D Co., Ltd. This
research was supported by Global COE Program 'Center of Human-Friendly
Robotics Based on Cognitive Neuroscience' of the Ministry of Education,
Culture, Sports, Science and Technology, Japan. We would like to express
our appreciation to researchers who cooperated to give us movies of
robots and check them, Satoshi Koizumi (Robovie), Yutaka Nakamura, Ryota
Hiura (wakamaru), Kazuhito Yokoi (HRP-2, HRP-4C), Minoru Asada, Hiroshi
Ishiguro, Takashi Minato, and Yuichiro Yoshikawa (Neony, CB2, Repliee
Q2, Geminoid, Geminoid F).
CR Bartneck C, 2009, INT J SOC ROBOT, V1, P71, DOI 10.1007/s12369-008-0001-3
Chikaraishi T, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P326, DOI
10.1109/IROS.2008.4650899
Inoue K., 2007, P 13 INT C ADV ROB K, P1005
Kamide Hiroko, 2010, P 2010 IEEE RSJ INT
Kamide H., 2011, P 2011 IEEE INT C RO, P599
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T, 2001, IEEE INT CONF ROBOT, P4166, DOI 10.1109/ROBOT.2001.933269
Kanda T, 2008, IEEE T ROBOT, V24, P725, DOI 10.1109/TRO.2008.921566
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kaneko K, 2009, P IEEE RAS INT C HUM, P7, DOI DOI 10.1109/ICHR.2009.5379537
Kawauchi Naoto, 2003, MITSUBISHI JUKO GIHO, V40, P270
Koizumi S., 2006, P IEEE INT WORKSH RO
Levy LH, 1960, J ABNORM SOC PSYCHOL, P21
Likert R., 1932, ARCH PSYCHOLOGY, V140, P44
LORR M, 1965, J CONSULT PSYCHOL, V29, P146, DOI 10.1037/h0021924
Minato T., 2009, P WORKSH SYN INT APP
Minato T, 2007, IEEE-RAS INT C HUMAN, P557, DOI 10.1109/ICHR.2007.4813926
Nakaoka S., 2009, P IEEE RAS INT C HUM
Negi S, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P719, DOI 10.1109/ROMAN.2008.4600752
Nomura T., 2006, P 15 IEEE INT S ROB, P372
Ogawa K., 2009, P 18 INT S ROB HUM I, P553
ROSENBER.S, 1968, J PERS SOC PSYCHOL, V9, P283, DOI 10.1037/h0026086
RUBIN Z, 1970, J PERS SOC PSYCHOL, V16, P265, DOI 10.1037/h0029841
Shigemi S., HONDA TECHNICAL REV, V18, P38
Ujiie Y., 2006, P 3 INT S SYST HUM S
WEIL MM, 1995, COMPUT HUM BEHAV, V11, P95, DOI 10.1016/0747-5632(94)00026-E
NR 26
TC 5
Z9 5
U1 0
U2 9
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PD JAN 1
PY 2013
VL 27
IS 1
SI SI
BP 3
EP 17
DI 10.1080/01691864.2013.751159
PG 15
WC Robotics
SC Robotics
GA 100BZ
UT WOS:000315671600002
DA 2018-01-22
ER

PT J
AU Fukui, K
Yasui, T
Gomi, H
Sugiya, H
Fujimori, O
Meyer, W
Tsukise, A
AF Fukui, Kousuke
Yasui, Tadashi
Gomi, Hiroshi
Sugiya, Hiroshi
Fujimori, Osamu
Meyer, Wilfried
Tsukise, Azuma
TI Cytochemistry of Sialoglycoconjugates, Lysozyme, and beta-Defensin in
Eccrine Glands of Porcine Snout Skin as Studied by Electron Microscopy
SO MICROSCOPY RESEARCH AND TECHNIQUE
LA English
DT Article
DE pig snout skin; eccrine glands; sialic acids; antimicrobial substances;
cytochemistry
ID SWEAT GLANDS; FOOT PADS; ANTIMICROBIAL PEPTIDES; EXPRESSION ANALYSIS;
DIGITAL PADS; HISTOCHEMISTRY; IMMUNITY; CELLS; SECRETION; SECTIONS
AB In most mammals except for humanoid primates, eccrine glands are confined to the
skin of a series of specific body regions. Sialic acids exhibit various functional
properties, and antimicrobial substances serve as components of nonspecific defense
against micro-organisms. In this study, the
distribution of these moieties was studied by electron microscopic histochemical
methods. The eccrine glandular acini consisted of two types of dark cells as well
as clear cells. The secretory granules and Golgi apparatus of both types of dark
cells contained sialic acid residues linked to alpha 2-6Gal/GalNAc. On the other
hand, sialoglycoconjugates with Sia alpha 2-3Gal beta 1-4GlcNAc sequence were
confined to those of the Type II dark cells. In addition, lysozyme and beta-
defensin were mainly detected in the secretory granules of the Type II dark cells.
These secretory products may create a defensive barrier against microbial invasion
and play an essential role in preservation of the integrity of porcine snout skin
as a sensory organ. Microsc. Res. Tech. 76: 12-19, 2013. (C) 2012 Wiley
Periodicals, Inc.
C1 [Fukui, Kousuke; Yasui, Tadashi; Gomi, Hiroshi; Tsukise, Azuma] Nihon Univ, Dept
Vet Anat, Coll Bioresource Sci, Fujisawa, Kanagawa 2520880, Japan.
[Sugiya, Hiroshi] Nihon Univ, Vet Biochem Lab, Coll Bioresource Sci, Fujisawa,
Kanagawa 2520880, Japan.
[Fujimori, Osamu] Nagoya Gakuin Univ, Fac Rehabil Sci, Lab Anat & Histochem,
Seto, Aichi 4801298, Japan.
[Meyer, Wilfried] Univ Vet Med, Inst Anat, Hannover Fdn, D-30173 Hannover,
Germany.
RP Tsukise, A (reprint author), Nihon Univ, Dept Vet Anat, Coll Bioresource Sci,
1866 Kameino, Fujisawa, Kanagawa 2520880, Japan.
EM tsukise@brs.nihon-u.ac.jp
CR Bos JD, 2001, CLIN DERMATOL, V19, P563, DOI 10.1016/S0738-081X(00)00174-7
CALHOUN ML, 1981, TXB VET HISTOLOGY, P378
Casselman WGB, 1959, HISTOCHEMICAL TECHNI
DANGUY A, 1995, EUR J HISTOCHEM, V39, P5
Duszyk M, 2001, PFLUG ARCH EUR J PHY, V443, pS45
Ellis RA, 1968, HDB HAUT GESCHLECHTS, P224
Fukui K, 2012, Eur J Histochem, V56, pe6, DOI 10.4081/ejh.2012.e6
Ganz T, 2003, NAT REV IMMUNOL, V3, P710, DOI 10.1038/nri1180
Ganz T, 2004, CR BIOL, V327, P539, DOI 10.1016/j.crvi.2003.12.007
Halata Z., 1975, Advances Anat Embryol Cell Biol, V50, P1
Hashimoto K, 1986, BIOL INTEGUMENT, P339
Illana M, 1997, EUR J HISTOCHEM, V41, P41
JOLLES P, 1984, MOL CELL BIOCHEM, V63, P165
KLENHA J, 1967, J INVEST DERMATOL, V49, P396, DOI 10.1038/jid.1967.155
KUROSUMI K, 1984, INT REV CYTOL, V87, P253, DOI 10.1016/S0074-7696(08)62445-6
LUFT JH, 1961, J BIOPHYS BIOCHEM CY, V9, P409, DOI 10.1083/jcb.9.2.409
MEYER W, 1995, ANN ANAT, V177, P39, DOI 10.1016/S0940-9602(11)80129-9
MEYER W, 1989, BASIC APPL HISTOCHEM, V33, P219
MEYER W, 1982, ZBL VET MED C, V11, P283
Miyazaki T, 1998, ARCH HISTOL CYTOL, V61, P199, DOI 10.1679/aohc.61.199
Miyazaki T, 2001, ARCH HISTOL CYTOL, V64, P305, DOI 10.1679/aohc.64.305
NEWMAN GR, 1983, HISTOCHEM J, V15, P543, DOI 10.1007/BF01954145
OGAWA H, 1971, J INVEST DERMATOL, V57, P111, DOI 10.1111/1523-1747.ep12349624
Parillo F, 2009, TISSUE CELL, V41, P257, DOI 10.1016/j.tice.2008.12.002
REYNOLDS ES, 1963, J CELL BIOL, V17, P208, DOI 10.1083/jcb.17.1.208
ROTH J, 1983, J HISTOCHEM CYTOCHEM, V31, P987, DOI 10.1177/31.8.6190857
Roth J, 1996, HISTOCHEM CELL BIOL, V106, P79
Sames K, 1999, HISTOCHEM J, V31, P739, DOI 10.1023/A:1003952616023
Sang YM, 2006, MAMM GENOME, V17, P332, DOI 10.1007/s00335-005-0158-0
SATO K, 1989, J AM ACAD DERMATOL, V20, P537, DOI 10.1016/S0190-9622(89)70063-3
Schauer R, 2004, ZOOLOGY, V107, P49, DOI 10.1016/j.zool.2003.10.002
Schauer R, 2009, CURR OPIN STRUC BIOL, V19, P507, DOI 10.1016/j.sbi.2009.06.003
Schroder JM, 1999, BIOCHEM PHARMACOL, V57, P121, DOI 10.1016/S0006-
2952(98)00226-3
Stumpf P, 2004, CELL TISSUE RES, V315, P59, DOI 10.1007/s00441-003-0815-0
Stumpf P, 2002, CELLS TISSUES ORGANS, V171, P215, DOI 10.1159/000063714
Suzuki Y, 2005, BIOL PHARM BULL, V28, P399, DOI 10.1248/bpb.28.399
TSUKISE A, 1983, ACTA ANAT, V115, P141
Varki A, 2009, ESSENTIALS GLYCOBIOL, P199
WATSON ML, 1958, J BIOPHYS BIOCHEM CY, V4, P475, DOI 10.1083/jcb.4.4.475
YAMADA K, 1993, HISTOCHEM J, V25, P95, DOI 10.1007/BF00157980
Yang D, 2001, CELL MOL LIFE SCI, V58, P978, DOI 10.1007/PL00000914
Yasui T, 2005, ANAT HISTOL EMBRYOL, V34, P56, DOI 10.1111/j.1439-
0264.2004.00577.x
Yasui T, 2004, EUR J HISTOCHEM, V48, P393
Yasui T, 2010, ACTA HISTOCHEM, V112, P169, DOI 10.1016/j.acthis.2008.10.004
NR 44
TC 0
Z9 0
U1 0
U2 3
PU WILEY-BLACKWELL
PI HOBOKEN
PA 111 RIVER ST, HOBOKEN 07030-5774, NJ USA
SN 1059-910X
J9 MICROSC RES TECHNIQ
JI Microsc. Res. Tech.
PD JAN
PY 2013
VL 76
IS 1
BP 12
EP 19
DI 10.1002/jemt.22129
PG 8
WC Anatomy & Morphology; Biology; Microscopy
SC Anatomy & Morphology; Life Sciences & Biomedicine - Other Topics;
Microscopy
GA 079LR
UT WOS:000314172400003
PM 23032992
DA 2018-01-22
ER

PT J
AU Akino, Y
Das, IJ
Bartlett, GK
Zhang, HL
Thompson, E
Zook, JE
AF Akino, Yuichi
Das, Indra J.
Bartlett, Gregory K.
Zhang, Hualin
Thompson, Elizabeth
Zook, Jennifer E.
TI Evaluation of superficial dosimetry between treatment planning system
and measurement for several breast cancer treatment techniques
SO MEDICAL PHYSICS
LA English
DT Article
DE breast radiotherapy; superficial dose; buildup dose; algorithm
ID MONTE-CARLO EVALUATION; GAFCHROMIC EBT2 FILM; 20-YEAR FOLLOW-UP; ENERGY
X-RAYS; RADIOCHROMIC FILM; CONSERVING SURGERY; SURFACE DOSIMETRY;
OBLIQUE INCIDENCE; RADIOTHERAPY; VERIFICATION
AB Purpose: Dosimetric accuracy in radiation treatment of breast cancer is critical
for the evaluation of cosmetic outcomes and survival. It is often considered that
treatment planning systems (TPS) may not be able to provide accurate dosimetry in
the buildup region. This was investigated in various treatment techniques such as
tangential wedges, field-in-field (FF), electronic compensator (eComp), and
intensity-modulated radiotherapy (IMRT).
Methods: Under Institutional Review Board (IRB) exemption, radiotherapy
treatment plans of 111 cases were retrospectively analyzed. The distance between
skin surface and the 95% isodose line was measured. For measurements, Gafchromic EBT2
films were used on an unsliced humanoid phantom. Multiple layers of Superflab bolus of
variable thickness were placed on the breast phantom, which was CT scanned for
planning. Treatment plans were generated using four techniques with two different
grid sizes (1 x 1 and 2.5 x 2.5 mm(2)) to provide optimum dose distribution. Films
were placed at different depths and exposed with the selected techniques. A
calibration curve for dose versus pixel values was also generated on the same day
as the phantom measurement was conducted. The DICOM RT image, dose, and plan data
were imported to the in-house software. On axial plane of CT slices, curves were
drawn at the position where EBT2 films were placed, and the dose profiles on the
lines were acquired. The calculated and measured dose profiles were separated by
check points which were marked on the films before irradiation. The segments of
calculated profiles were stretched to match their resolutions to that of film
dosimetry.
Results: On review of treatment plans, the distance between skin and 95%
prescribed dose was up to 8 mm for plans of 27 patients. The film measurement
revealed that the medial region of the phantom surface received a mere 45%-50% of the
prescribed dose. For the wedge, FF, and eComp techniques, the region around the nipple
received approximately 80% of the prescribed dose; only IMRT showed an inhomogeneous
dose profile. At deeper depths (mainly 6-11 mm), film dosimetry showed good agreement
with the TPS calculation. In contrast, the measured dose at a 3-mm depth was higher
than the TPS calculation by 15%-30% for all techniques. For the tangential and IMRT
techniques, the 1 x 1 mm(2) grid size showed a smaller difference from the measurements
than the 2.5 x 2.5 mm(2) grid size.
Conclusions: In general, TPSs, even with advanced algorithms, do not provide accurate
dosimetry in the buildup region, as verified by EBT2 film for all treatment techniques.
For all cases, TPS and measured doses were in agreement from 6 mm in depth but differed
at shallower depths. Grid size plays an important role in dose calculation; for
accurate dosimetry, a small grid size should be used, for which the differences between
TPS and measurements are smaller. (C) 2013 American Association of Physicists in
Medicine. [http://dx.doi.org/10.1118/1.4770285]
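   As an illustration of the film-to-dose workflow sketched above, the following Python
snippet inverts a measured calibration curve (pixel value versus known dose) by
interpolation and compares the resulting film dose profile with a TPS profile point by
point; the calibration points and numbers are made up for illustration, not the
paper's data.

   import numpy as np

   def pixels_to_dose(pixel_values, cal_pixels, cal_doses):
       """Map film pixel values to dose (Gy) through a monotonic calibration curve."""
       order = np.argsort(cal_pixels)                  # np.interp needs increasing x
       return np.interp(pixel_values,
                        np.asarray(cal_pixels)[order],
                        np.asarray(cal_doses)[order])

   def percent_difference(measured, calculated):
       """Per-point difference of the film dose relative to the TPS calculation."""
       measured, calculated = np.asarray(measured), np.asarray(calculated)
       return 100.0 * (measured - calculated) / calculated

   # Example with made-up calibration points and a three-point profile.
   cal_px, cal_gy = [1000, 800, 600, 450], [0.0, 0.5, 1.0, 2.0]
   film_dose = pixels_to_dose([950, 700, 500], cal_px, cal_gy)
   print(percent_difference(film_dose, [0.30, 0.80, 1.55]))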
C1 [Akino, Yuichi; Das, Indra J.; Bartlett, Gregory K.; Zhang, Hualin; Thompson,
Elizabeth; Zook, Jennifer E.] Indiana Univ Sch Med, Dept Radiat Oncol,
Indianapolis, IN 46202 USA.
[Akino, Yuichi] Osaka Univ, Grad Sch Med, Dept Radiat Oncol, Suita, Osaka
5650871, Japan.
RP Akino, Y (reprint author), Indiana Univ Sch Med, Dept Radiat Oncol,
Indianapolis, IN 46202 USA.
EM akino@radonc.med.osaka-u.ac.jp
RI Grams, Michael/G-5197-2011
OI Bartlett, Gregory K/0000-0002-0274-7760
FU Japan Society for the Promotion of Science (JSPS) Core-to-Core Program
[23003]
FX This work was supported by a grant from the Japan Society for the
Promotion of Science (JSPS) Core-to-Core Program (Grant No. 23003). The
authors report no conflicts of interest in conducting the research.
CR Almberg SS, 2011, RADIOTHER ONCOL, V100, P259, DOI 10.1016/j.radonc.2011.05.021
Andres C, 2010, MED PHYS, V37, P6271, DOI 10.1118/1.3512792
Baird C T, 2001, J Appl Clin Med Phys, V2, P73, DOI 10.1120/1.1359296
Bragg CM, 2006, RADIOTHER ONCOL, V81, P315, DOI 10.1016/j.radonc.2006.10.020
Butson MJ, 1996, PHYS MED BIOL, V41, P1073, DOI 10.1088/0031-9155/41/6/011
Chakarova R, 2012, RADIOTHER ONCOL, V102, P102, DOI 10.1016/j.radonc.2011.06.021
Chow JCL, 2011, J APPL CLIN MED PHYS, V12, P108, DOI 10.1120/jacmp.v12i1.3424
Darby S, 2011, LANCET, V12, P1707, DOI DOI 10.1016/S0140-6736(11)61629-2
Das IJ, 1997, RADIOTHER ONCOL, V44, P83, DOI 10.1016/S0167-8140(97)00054-6
de la Torre Ninet, 2004, Med Dosim, V29, P109, DOI 10.1016/j.meddos.2004.03.002
Devic S, 2006, MED PHYS, V33, P1116, DOI 10.1118/1.2179169
Dogan N, 2003, MED PHYS, V30, P3091, DOI 10.1118/1.1625116
Fisher B, 2002, NEW ENGL J MED, V347, P1233, DOI 10.1056/NEJMoa022152
Hartmann B, 2010, MED PHYS, V37, P1753, DOI 10.1118/1.3368601
ICRP, 1991, INT COMM RAD PROT PU, V60
ICRU, 1985, 39 ICRU
JACKSON W, 1971, BRIT J RADIOL, V44, P109, DOI 10.1259/0007-1285-44-518-109
Kelly A, 2011, PHYS MED BIOL, V56, P1001, DOI 10.1088/0031-9155/56/4/008
Khan FM, 2003, PHYS RAD THERAPY
Kim S, 1998, MED PHYS, V25, P860, DOI 10.1118/1.598261
Li ZF, 1997, INT J RADIAT ONCOL, V37, P921, DOI 10.1016/S0360-3016(96)00610-4
Lindsay P, 2010, MED PHYS, V37, P571, DOI 10.1118/1.3291622
Mizuno H, 2012, J APPL CLIN MED PHYS, V13, P198, DOI 10.1120/jacmp.v13i4.3763
Nakano M, 2012, J APPL CLIN MED PHYS, V13, P83, DOI 10.1120/jacmp.v13i3.3727
Oinam AS, 2010, J APPL CLIN MED PHYS, V11, P105, DOI 10.1120/jacmp.v11i4.3351
ORTON CG, 1971, BRIT J RADIOL, V44, P895, DOI 10.1259/0007-1285-44-527-895-b
Polednik M, 2007, STRAHLENTHER ONKOL, V183, P667, DOI 10.1007/s00066-007-1775-1
Richley L, 2010, PHYS MED BIOL, V55, P2601, DOI 10.1088/0031-9155/55/9/012
Santiago RJ, 2004, INT J RADIAT ONCOL, V58, P233, DOI 10.1016/S0360-
3016(03)01460-3
Shiau AC, 2012, MED DOSIM, V37, P417, DOI 10.1016/j.meddos.2012.03.005
Van Esch A, 2006, MED PHYS, V33, P4130, DOI 10.1118/1.2358333
Veronesi U, 2002, NEW ENGL J MED, V347, P1227, DOI 10.1056/NEJMoa020989
NR 32
TC 12
Z9 12
U1 0
U2 19
PU AMER ASSOC PHYSICISTS MEDICINE AMER INST PHYSICS
PI MELVILLE
PA STE 1 NO 1, 2 HUNTINGTON QUADRANGLE, MELVILLE, NY 11747-4502 USA
SN 0094-2405
J9 MED PHYS
JI Med. Phys.
PD JAN
PY 2013
VL 40
IS 1
AR 011714
DI 10.1118/1.4770285
PG 6
WC Radiology, Nuclear Medicine & Medical Imaging
SC Radiology, Nuclear Medicine & Medical Imaging
GA 063WR
UT WOS:000313033200018
PM 23298084
DA 2018-01-22
ER

PT J
AU Alnajjar, F
Yamashita, Y
Tani, J
AF Alnajjar, Fady
Yamashita, Yuichi
Tani, Jun
TI The hierarchical and functional connectivity of higher-order cognitive
mechanisms: neurorobotic model to investigate the stability and
flexibility of working memory
SO FRONTIERS IN NEUROROBOTICS
LA English
DT Article
DE higher-order cognitive tasks; brain modeling; cognitive branching task;
multi-timescale recurrent neural network; working memory; frontal lobe
function
AB Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching,
and switching, are known to be the outcome of unique neural organization and dynamics
between various regions of the frontal lobe. Although some recent anatomical and
neuroimaging studies have shed light on the architecture underlying the formation of
such mechanisms, the neural dynamics and the pathways within and between frontal-lobe
regions that form and/or tune the stability level of its working memory remain
controversial. A model that clarifies this aspect is therefore required.
In this study, we propose a simple neurocomputational model that suggests the basic
concept of how HOCM, including the cognitive branching and switching in particular,
may mechanistically emerge from time-based neural interactions. The proposed model
is constructed such that its functional and structural hierarchy mimics, to a
certain degree, the biological hierarchy that is believed to exist between local
regions in the frontal lobe. Thus, the hierarchy is attained not only through the
layout of the neural connections but also through distinct types of neurons, each with
different time properties. To validate the model,
cognitive branching and switching tasks were simulated in a physical humanoid robot
driven by the model. The results reveal that separation between the lower- and
higher-level neurons in such a model is an essential factor in forming a working memory
appropriate for handling cognitive branching and switching. Analysis of the obtained
results also illustrates that the breadth of this separation determines the
characteristics of the resulting memory, either static or dynamic. This work can be
considered joint research between synthetic and empirical studies, which may open an
alternative research area for better understanding brain mechanisms.
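   To make the idea of distinct neuron types with different time properties concrete,
below is a minimal, untrained Python sketch of a leaky-integrator update with per-unit
time constants (a common multi-timescale recurrent formulation); the layer sizes, time
constants, and random weights are assumptions for illustration, not the authors'
trained model.

   import numpy as np

   def mtrnn_step(u, x_in, W_rec, W_in, tau, dt=1.0):
       """One leaky-integrator update; larger tau gives a slower, more stable unit."""
       a = np.tanh(u)                                     # firing rates
       drive = W_rec @ a + W_in @ x_in                    # recurrent plus external input
       return (1.0 - dt / tau) * u + (dt / tau) * drive   # per-unit timescale mixing

   # Example: 20 fast units (tau = 2) and 10 slow units (tau = 70) with random weights.
   rng = np.random.default_rng(0)
   n_fast, n_slow, n_in = 20, 10, 4
   tau = np.concatenate((np.full(n_fast, 2.0), np.full(n_slow, 70.0)))
   W_rec = rng.normal(scale=0.1, size=(n_fast + n_slow, n_fast + n_slow))
   W_in = rng.normal(scale=0.1, size=(n_fast + n_slow, n_in))
   u = np.zeros(n_fast + n_slow)
   for _ in range(100):
       u = mtrnn_step(u, rng.normal(size=n_in), W_rec, W_in, tau)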
C1 [Alnajjar, Fady] RIKEN, Brain & Sci Inst, Wako, Saitama, Japan.
[Yamashita, Yuichi] JST, ERATO, Okanoya Emot Informat Project, Wako, Saitama,
Japan.
[Tani, Jun] Korea Adv Inst Sci & Technol, Dept Elect Engn, Taejon 305701, South
Korea.
RP Tani, J (reprint author), Korea Adv Inst Sci & Technol, Dept Elect Engn, 291
Daehak Ro,373-1 Guseong Dong, Taejon 305701, South Korea.
EM tani@brain.riken.jp
FU Converging Research Center Program - Ministry of Education, Science, and
Technology [2012K001342]
FX We thank Jun Namikawa for discussions. Use of the robot was made
possible through cooperation with SONY Corporation. This work was
supported by the Converging Research Center Program funded by the
Ministry of Education, Science, and Technology (2012K001342).
CR Badre D, 2009, NAT REV NEUROSCI, V10, P659, DOI 10.1038/nrn2667
Botvinick MM, 2008, TRENDS COGN SCI, V12, P201, DOI 10.1016/j.tics.2008.02.009
Botvinick MM, 2006, PSYCHOL REV, V113, P201, DOI 10.1037/0033-295X.113.2.201
Braver TS, 2002, NEUROIMAGE, V15, P523, DOI 10.1006/nimg.2001.1019
Channon S, 1999, J NEUROL NEUROSUR PS, V66, P162, DOI 10.1136/jnnp.66.2.162
Doya K., 2001, NEUROSCI RES S 24, pS16
Elliott R, 1996, PSYCHOL MED, V26, P975, DOI 10.1017/S0033291700035303
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Grafman J., 1995, STRUCTURE FUNCTION H, V769, P337
Hagmann P, 2008, PLOS BIOL, V6, P1479, DOI 10.1371/journal.pbio.0060159
HERTZ J, 1991, INTRODUCTION TO THE
Johnson JS, 2009, PSYCHOL SCI, V20, P568, DOI 10.1111/j.1467-9280.2009.02329.x
Koechlin E, 1999, NATURE, V399, P148
Koechlin E, 2007, SCIENCE, V318, P594, DOI 10.1126/science.1142995
KOHONEN T, 1996, SELF ORGANIZING MAPS
Konishi S, 2000, NEUROIMAGE, V12, P276, DOI 10.1006/nimg.2000.0614
Lepage M, 2000, P NATL ACAD SCI USA, V97, P506, DOI 10.1073/pnas.97.1.506
Maniadakis M, 2012, NEURAL NETWORKS, V33, P76, DOI 10.1016/j.neunet.2012.04.005
McDermott KB, 2000, J COGNITIVE NEUROSCI, V12, P965, DOI
10.1162/08989290051137503
Miller E. K., 2009, ENCYCLOPEDIA NEUROSC, V4, P99
Nishimoto R, 2008, ADAPT BEHAV, V16, P166, DOI 10.1177/1059712308089185
Paine RW, 2005, ADAPT BEHAV, V13, P211, DOI 10.1177/105971230501300303
Rumelhart D. E., 1986, PARALLEL DISTRIBUTED
VARELA F., 1991, EMBODIED MIND
Walsh ND, 2009, BMC PSYCHIATRY, V9, DOI 10.1186/1471-244X-9-69
Yamashita Y, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0037843
Yamashita Y, 2008, PLOS COMPUT BIOL, V4, DOI 10.1371/journal.pcbi.1000220
NR 27
TC 3
Z9 3
U1 0
U2 4
PU FRONTIERS RESEARCH FOUNDATION
PI LAUSANNE
PA PO BOX 110, LAUSANNE, 1015, SWITZERLAND
SN 1662-5218
J9 FRONT NEUROROBOTICS
JI Front. Neurorobotics
PY 2013
VL 7
AR UNSP 2
DI 10.3389/fnbot.2013.00002
PG 13
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA V39VD
UT WOS:000209437600002
PM 23423881
OA gold
DA 2018-01-22
ER

PT J
AU Urgen, BA
Plank, M
Ishiguro, H
Poizner, H
Saygin, AP
AF Urgen, Burcu A.
Plank, Markus
Ishiguro, Hiroshi
Poizner, Howard
Saygin, Ayse P.
TI EEG theta and Mu oscillations during perception of human and robot
actions
SO FRONTIERS IN NEUROROBOTICS
LA English
DT Article
DE EEG; action perception; social robotics; mirror neuron system; mu
rhythm; theta rhythm
AB The perception of others' actions supports important skills such as
communication, intention understanding, and empathy. Are mechanisms of action
processing in the human brain specifically tuned to process biological agents?
Humanoid robots can perform recognizable actions, but can look and move differently
from humans, and as such, can be used in experiments to address such questions.
Here, we recorded EEG as participants viewed actions performed by three agents. In
the Human condition, the agent had biological appearance and motion. The other two
conditions featured a state-of-the-art robot in two different appearances: Android,
which had biological appearance but mechanical motion, and Robot, which had
mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and
frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in
particular for whether the visual appearance and/or the motion of the observed
agent was biological. Sensorimotor mu suppression has been linked to the motor
simulation aspect of action processing (and the human mirror neuron system, MNS),
and frontal theta to semantic and memory-related aspects. For all three agents,
action observation induced significant attenuation in the power of mu oscillations,
with no difference between agents. Thus, mu suppression, considered an index of MNS
activity, does not appear to be selective for biological agents. Observation of the
Robot resulted in greater frontal theta activity compared to the Android and the
Human, whereas the latter two did not differ from each other. Frontal theta thus
appears to be sensitive to visual appearance, suggesting agents that are not
sufficiently biological in appearance may result in greater memory processing
demands for the observer. Studies combining robotics and neuroscience such as this
one can allow us to explore neural basis of action processing on the one hand, and
inform the design of social robots on the other.
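   For readers unfamiliar with the measure, here is a minimal Python sketch of the kind
of mu-suppression index implied above: a log ratio of 8-13 Hz power during action
observation to a baseline epoch. Channel selection, epoching, artifact rejection, and
statistics are omitted, and all parameters are illustrative rather than taken from the
study.

   import numpy as np
   from scipy.signal import welch

   def band_power(x, fs, lo, hi):
       """Band power of signal x in the [lo, hi] Hz band (integrated Welch PSD)."""
       f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
       mask = (f >= lo) & (f <= hi)
       return np.trapz(pxx[mask], f[mask])

   def mu_suppression(observe_epoch, baseline_epoch, fs):
       """Log ratio of mu-band (8-13 Hz) power; negative values indicate suppression."""
       return np.log(band_power(observe_epoch, fs, 8, 13) /
                     band_power(baseline_epoch, fs, 8, 13))

   # Example with synthetic single-channel epochs sampled at 512 Hz.
   fs = 512
   rng = np.random.default_rng(1)
   print(mu_suppression(rng.normal(size=2 * fs), rng.normal(size=2 * fs), fs))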
C1 [Urgen, Burcu A.; Saygin, Ayse P.] Univ Calif San Diego, Dept Cognit Sci, La
Jolla, CA 92093 USA.
[Urgen, Burcu A.; Saygin, Ayse P.] Univ Calif San Diego, Calif Inst Telecommun &
Informat Technol, Qualcomm Inst, San Diego, CA 92103 USA.
[Plank, Markus; Poizner, Howard] Univ Calif San Diego, Inst Neural Computat, San
Diego, CA 92103 USA.
[Ishiguro, Hiroshi] Osaka Univ, Osaka, Japan.
[Ishiguro, Hiroshi] Adv Telecommun Res, Keihanna Sci City, Japan.
[Poizner, Howard; Saygin, Ayse P.] Univ Calif San Diego, Neurosci Program, San
Diego, CA 92103 USA.
RP Urgen, BA (reprint author), Univ Calif San Diego, Dept Cognit Sci, 9500 Gilman
Dr, La Jolla, CA 92093 USA.
EM burgen@cogsci.ucsd.edu
FU Qualcomm Institute; Strategic Research Opportunities Award; Kavli
Institute for Brain and Mind; NSF (CAREER Award ) [BCS-1151805]
FX This research was supported by the Qualcomm Institute (formerly
California Institute of Telecommunications and Information Technology;
Strategic Research Opportunities Award to Ayse P. Saygin, fellowship for
Burcu A. Urgen), the Kavli Institute for Brain and Mind (Innovative
Research Award to Ayse P. Saygin), NSF (CAREER Award BCS-1151805 to Ayse
P. Saygin, and SBE-0542013 to the Temporal Dynamics of Learning Center),
DARPA (Ayse P. Saygin), and ONR (MURI Award N00014-10-1-0072 to Howard
Poizner). We thank Arthur Vigil and Joe Snider for assistance with the
experimental setup, the Intelligent Robotics Laboratory at Osaka
University for help in the preparation of the stimuli, and Alvin Li,
Wayne Khoe, Marta Kutas, Seana Coulson, Jamie Pineda, Chris Berka, and
Scott Makeig for helpful discussion and feedback.
CR Arnstein D, 2011, J NEUROSCI, V31, P14243, DOI 10.1523/JNEUROSCI.0963-11.2011
Atienza M, 2011, J COGNITIVE NEUROSCI, V23, P75, DOI 10.1162/jocn.2009.21358
Babiloni C, 2002, NEUROIMAGE, V17, P559, DOI 10.1006/nimg.2002.1192
Barresi J, 1996, BEHAV BRAIN SCI, V19, P107, DOI 10.1017/S0140525X00041790
Bastiaansen MCM, 2008, BRAIN LANG, V106, P15, DOI 10.1016/j.bandl.2007.10.006
Billard A, 2007, ASSIST TECHNOL, V19, P37, DOI 10.1080/10400435.2007.10131864
Blakemore SJ, 2001, NAT REV NEUROSCI, V2, P561
Braadbaart L, 2013, INT J PSYCHOPHYSIOL, V89, P99, DOI
10.1016/j.ijpsycho.2013.05.019
Buccino G, 2004, J COGNITIVE NEUROSCI, V16, P114, DOI 10.1162/089892904322755601
Calvo-Merino B, 2006, CURR BIOL, V16, P1905, DOI 10.1016/j.cub.2006.07.065
Carmo JC, 2012, PLOS ONE, V7, DOI 10.1371/journal.pone.0046939
Casile A, 2006, CURR BIOL, V16, P69, DOI 10.1016/j.cub.2005.10.071
Casile A, 2010, CEREB CORTEX, V20, P1647, DOI 10.1093/cercor/bhp229
Cattaneo L, 2010, CEREB CORTEX, V20, P2252, DOI 10.1093/cercor/bhp291
Chaminade T, 2007, SOC COGN AFFECT NEUR, V2, P206, DOI 10.1093/scan/nsm017
Chaminade T, 2006, INTERACT STUD, V7, P347, DOI 10.1075/is.7.3.07cha
Chang CC, 2011, ACM T INTEL SYST TEC, V2, DOI 10.1145/1961189.1961199
Cheetham M, 2011, FRONT HUM NEUROSCI, V5, DOI 10.3389/fnhum.2011.00126
Cochin S, 1999, EUR J NEUROSCI, V11, P1839, DOI 10.1046/j.1460-9568.1999.00598.x
COOK R, BEHAV BRAIN IN PRESS
Coradeschi S, 2006, IEEE INTELL SYST, V21, P74, DOI 10.1109/MIS.2006.72
CORTES C, 1995, MACH LEARN, V20, P273, DOI 10.1023/A:1022627411411
Crespo-Garcia M, 2010, NEUROIMAGE, V50, P1258, DOI
10.1016/j.neuroimage.2010.01.018
Cross ES, 2012, HUM BRAIN MAPP, V33, P2238, DOI 10.1002/hbm.21361
Dautenhahn K, 2007, PHILOS T R SOC B, V362, P679, DOI 10.1098/rstb.2006.2004
Davidson DJ, 2007, BRAIN RES, V1158, P81, DOI 10.1016/j.brainres.2007.04.082
Delorme A, 2004, J NEUROSCI METH, V134, P9, DOI 10.1016/j.jneumeth.2003.10.009
DIPELLEGRINO G, 1992, EXP BRAIN RES, V91, P176, DOI 10.1007/BF00230027
Dumas G, 2012, FRONT HUM NEUROSCI, V6, DOI 10.3389/fnhum.2012.00128
FADIGA L, 1995, J NEUROPHYSIOL, V73, P2608
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
Gazzola V, 2007, NEUROIMAGE, V35, P1674, DOI 10.1016/j.neuroimage.2007.02.003
Grafton ST, 2007, HUM MOVEMENT SCI, V26, P590, DOI 10.1016/j.humov.2007.05.009
Hald LA, 2006, BRAIN LANG, V96, P90, DOI 10.1016/j.bandl.2005.06.007
Hari R, 1998, P NATL ACAD SCI USA, V95, P15061, DOI 10.1073/pnas.95.25.15061
Hari R, 2006, PROG BRAIN RES, V159, P253, DOI 10.1016/S0079-6123(06)59017-X
Haynes JD, 2006, NAT REV NEUROSCI, V7, P523, DOI 10.1038/nrn1931
Ho C.-C., 2008, P 3 ACM IEEE INT C H
Iacoboni M, 2006, NAT REV NEUROSCI, V7, P942, DOI 10.1038/nrn2024
Ishiguro H, 2006, CONNECT SCI, V18, P319, DOI 10.1080/09540090600873953
Kahana MJ, 2001, CURR OPIN NEUROBIOL, V11, P739, DOI 10.1016/S0959-
4388(01)00278-1
Kamitani Y, 2005, NAT NEUROSCI, V8, P679, DOI 10.1038/nn1444
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kilner JM, 2009, PLOS ONE, V4, DOI 10.1371/journal.pone.0004925
Kilner JM, 2003, CURR BIOL, V13, P522, DOI 10.1016/S0960-9822(03)00165-9
Klimesch W, 2010, NEUROSCI BIOBEHAV R, V34, P1002, DOI
10.1016/j.neubiorev.2009.10.004
Knoblich G., 2006, HUMAN BODY PERCEPTIO
Kutas M, 2011, ANNU REV PSYCHOL, V62, P621, DOI
10.1146/annurev.psych.093008.131123
Lewkowicz DJ, 2012, DEV PSYCHOBIOL, V54, P124, DOI 10.1002/dev.20583
MacDorman KF, 2006, INTERACT STUD, V7, P297, DOI 10.1075/is.7.3.03mac
MacDorman KF, 2009, AI SOC, V23, P485, DOI 10.1007/s00146-008-0181-2
Mataric M, 2009, STUD HEALTH TECHNOL, V145, P249, DOI 10.3233/978-1-60750-018-6-
249
Mizuhara H., 2011, ADV COGNITIVE NEUROD, P123
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Mueller-Putz G., 2008, INT J BIOELECTROMAGN, V10, P52
Naeem M, 2012, NEUROIMAGE, V59, P1795, DOI 10.1016/j.neuroimage.2011.08.010
Norman KA, 2006, TRENDS COGN SCI, V10, P424, DOI 10.1016/j.tics.2006.07.005
Oberman LM, 2007, NEUROCOMPUTING, V70, P2194, DOI 10.1016/j.neucom.2006.02.024
Orgs G, 2008, EUR J NEUROSCI, V27, P3380, DOI 10.1111/j.1460-9568.2008.06271.x
Osipova D, 2006, J NEUROSCI, V26, P7523, DOI 10.1523/JNEUROSCI.1948-06.2006
Pelphrey KA, 2003, J NEUROSCI, V23, P6819
Pereira F, 2009, NEUROIMAGE, V45, P199, DOI DOI 10.1016/J.NEUROIMAGE.2008.11.007
Perry A, 2009, BRAIN RES, V1282, P126, DOI 10.1016/j.brainres.2009.05.059
Pineda JA, 2005, BRAIN RES REV, V50, P57, DOI 10.1016/j.brainresrev.2005.04.005
Press C, 2007, P ROY SOC B-BIOL SCI, V274, P2509, DOI 10.1098/rspb.2007.0774
Press C, 2011, J NEUROSCI, V31, P2792, DOI 10.1523/JNEUROSCI.1595-10.2011
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Saygin AP, 2002, J PRAGMATICS, V34, P227, DOI 10.1016/S0378-2166(02)80001-7
Saygin AP, 2004, J NEUROSCI, V24, P6181, DOI 10.1523/JNEUROSCI.0504-04.2004
Saygin A. P., 2011, ROBOTICS SYSTEMS SCI
Saygin AP, 2007, BRAIN, V130, P2452, DOI 10.1093/brain/awm162
Saygin AP, 2012, SOC COGN AFFECT NEUR, V7, P413, DOI 10.1093/scan/nsr025
Saygin AP, 2012, PSYCHOL RES-PSYCH FO, V76, P388, DOI 10.1007/s00426-012-0426-z
Shahin AJ, 2009, BRAIN COGNITION, V70, P259, DOI 10.1016/j.bandc.2009.02.008
Silas J, 2012, INT J PSYCHOPHYSIOL, V85, P168, DOI
10.1016/j.ijpsycho.2012.05.008
Sitnikova T, 2008, J COGNITIVE NEUROSCI, V20, P2037, DOI 10.1162/jocn.2008.20143
Steckenfinger SA, 2009, P NATL ACAD SCI USA, V106, P18362, DOI
10.1073/pnas.0910063106
Steele VR, 2013, BRAIN RES, V1492, P92, DOI 10.1016/j.brainres.2012.11.016
Stefan K, 2005, J NEUROSCI, V25, P9339, DOI [10.1523/JNEUROSCI.2282-05.2005,
10.1523/JNEUROSCI.2282.05.2005]
Tai YF, 2004, CURR BIOL, V14, P117, DOI 10.1016/j.cub.2004.01.005
Thompson JC, 2011, PERCEPTION, V40, P695, DOI 10.1068/p6900
Tinwell A, 2011, COMPUT HUM BEHAV, V27, P741, DOI 10.1016/j.chb.2010.10.018
Tognoli E, 2007, P NATL ACAD SCI USA, V104, P8190, DOI 10.1073/pnas.0611453104
Ugur E., 2011, ROB AUT ICRA IEEE IN
Umilta MA, 2001, NEURON, V31, P155, DOI 10.1016/S0896-6273(01)00337-3
Urgen B. A., 2012, 34 ANN C COGN SCI SO
van Kemenade BM, 2012, J COGNITIVE NEUROSCI, V24, P896, DOI 10.1162/jocn_a_00194
Wermter S, 2003, NEURAL NETWORKS, V16, P691, DOI 10.1016/S0893-6080(03)00100-X
Wu YC, 2011, BRAIN LANG, V119, P184, DOI 10.1016/j.bandl.2011.07.002
Zion-Golumbic E, 2010, J COGNITIVE NEUROSCI, V22, P263, DOI
10.1162/jocn.2009.21251
NR 91
TC 16
Z9 17
U1 6
U2 15
PU FRONTIERS RESEARCH FOUNDATION
PI LAUSANNE
PA PO BOX 110, LAUSANNE, 1015, SWITZERLAND
SN 1662-5218
J9 FRONT NEUROROBOTICS
JI Front. Neurorobotics
PY 2013
VL 7
AR UNSP 19
DI 10.3389/fnbot.2013.00019
PG 13
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA V36UW
UT WOS:000209236900009
PM 24348375
OA gold
DA 2018-01-22
ER

PT J
AU Komizunai, S
Konno, A
Abiko, S
Jiang, X
Uchiyama, M
AF Komizunai, Shunsuke
Konno, Atsushi
Abiko, Satoko
Jiang, Xin
Uchiyama, Masaru
TI DYNAMIC SIMULATION OF BIPED WALKING ON LOOSE SOIL
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Biped walking; loose soil; terramechanics
ID UNEVEN TERRAIN; MECHANISM
AB Most studies on biped walking over even or uneven terrain have assumed stiff
ground. This paper proposes a dynamic contact model between a foot and loose soil.
The proposed contact model provides the sinkage of the foot, the slip of the sole,
and the reactive force acting on the foot on loose soil. The sinkage of the foot and
the slip of the sole, which are important characteristics for a biped robot walking
on loose soil, are calculated using a terramechanics model. The reactive force
acting on the foot is calculated using a spring-damper model between the foot and
the deformed ground. By applying the proposed contact model to a conventional
dynamics simulator, the dynamic sinkage and slip phenomena during biped walking on
loose soil are simulated. Additionally, in order to verify the simulation results,
experiments were carried out using a humanoid robot.
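As a rough illustration of the kind of contact law the abstract describes, the sketch
below combines a Bekker-type pressure-sinkage relation (for sinkage) with a
spring-damper normal force between the sole and the deformed ground. All function
names, parameters, and numeric values are illustrative assumptions, not values taken
from the paper.

```python
def sinkage_from_pressure(pressure, k_c=1.0e3, k_phi=1.5e5, b=0.13, n=1.1):
    """Invert a Bekker-type pressure-sinkage law p = (k_c / b + k_phi) * z**n.
    All constants are illustrative placeholders, not values from the paper."""
    k = k_c / b + k_phi
    return (pressure / k) ** (1.0 / n)

def normal_reaction(penetration, penetration_rate, k_ground=2.0e5, c_ground=1.0e3):
    """Spring-damper reactive force between the sole and the deformed ground."""
    if penetration <= 0.0:
        return 0.0                      # no contact, no force
    return max(0.0, k_ground * penetration + c_ground * penetration_rate)

# Static sinkage of a foot loading ~300 N on a 0.20 m x 0.13 m sole ...
pressure = 300.0 / (0.20 * 0.13)
print(f"sinkage ~ {1000 * sinkage_from_pressure(pressure):.1f} mm")
# ... and the spring-damper reaction for a 5 mm penetration closing at 10 mm/s.
print(f"reaction ~ {normal_reaction(0.005, 0.01):.0f} N")
```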
C1 [Komizunai, Shunsuke; Konno, Atsushi; Abiko, Satoko; Jiang, Xin; Uchiyama,
Masaru] Tohoku Univ, Grad Sch Engn, Dept Aerosp Engn, Sendai, Miyagi 9808579,
Japan.
RP Komizunai, S (reprint author), Tohoku Univ, Grad Sch Engn, Dept Aerosp Engn, 6-
6-01 Aoba Yama, Sendai, Miyagi 9808579, Japan.
EM shunsuke@space.mech.tohoku.ac.jp; konno@space.mech.tohoku.ac.jp;
abiko@space.mech.tohoku.ac.jp; jiangxin@space.mech.tohoku.ac.jp;
uchiyama@space.mech.tohoku.ac.jp
FU JSPS [21300073]
FX This work was supported by JSPS Grant-in-Aid for Scientific Research (B)
(21300073).
CR Bui H. H., 2003, TERRAMECHANICS, V46, P115
Ishigami G., 2006, IEEE RSJ INT C INT R, P5552
Ito N., 1973, JPN SOC AGR MACH, V35, P238
Janosi Z., 1961, 1 INT C SOIL VEH SYS, P707
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kang H., 2010, 28 ANN C ROB SOC JAP
Kang HJ, 2010, IEEE INT CONF ROBOT, P5167, DOI 10.1109/ROBOT.2010.5509348
Komizunai S., 2010, 7 INT C FLOW DYN ICF, P608
Komizunai S., 2010, IEEE SICE INT S SYST, pA3
Kuroki Y., 2004, IEEE INT C ROB AUT I, P3189
Muro T., 1989, TERRAMECHANICS, V26, P249
Muro T., 1993, TERRAMECHANICS, P6
Nakashima H, 2011, J TERRAMECHANICS, V48, P17, DOI 10.1016/j.jterra.2010.09.002
Nishiwaki K, 2007, IEEE-RAS INT C HUMAN, P447, DOI 10.1109/ICHR.2007.4813908
Shahnazari H, 2008, INT J CIV ENG, V6, P1
Xenaki VC, 2001, GEOSYNTH INT, V8, P471, DOI 10.1680/gein.8.0204
Yoo Y., 2009, 11 EUR REG C INT SOC
NR 18
TC 0
Z9 0
U1 0
U2 8
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD DEC
PY 2012
VL 9
IS 4
AR 1250032
DI 10.1142/S0219843612500326
PG 20
WC Robotics
SC Robotics
GA 076QG
UT WOS:000313971600008
DA 2018-01-22
ER

PT J
AU Ugurlu, B
Saglia, JA
Tsagarakis, NG
Caldwell, DG
AF Ugurlu, Barkan
Saglia, Jody A.
Tsagarakis, Nikos G.
Caldwell, Darwin G.
TI YAW MOMENT COMPENSATION FOR BIPEDAL ROBOTS VIA INTRINSIC ANGULAR
MOMENTUM CONSTRAINT
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Humanoid; bipedal locomotion; yaw moment; angular momentum
ID LOCOMOTION
AB This paper describes a technique to compensate for the undesired yaw moment that
is inevitably induced about the support foot during single-support phases while a
bipedal robot is in motion. The main strategy of this method is to rotate the upper
body so as to exert a secondary moment that counteracts the factors creating the
undesired moment. In order to compute the yaw moment while considering all of these
factors, we utilized the Eulerian ZMP Resolution, as it is capable of characterizing
the robot's rotational inertia, a crucial component of its dynamics. In doing so,
intrinsic angular momentum rate changes are smoothly included in the yaw moment
equations. Applying the proposed technique, we conducted several bipedal walking
experiments on the actual bipedal robot CoMan. As a result, we obtained a 61%
decrease in the undesired yaw moment and an 82% reduction in yaw-axis deviation,
which satisfactorily verify the efficiency of the proposed approach in comparison
to off-the-shelf techniques.
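A naive version of the counter-rotation idea can be sketched as follows: given an
undesired yaw-moment profile about the support foot, command a torso yaw acceleration
whose reaction moment cancels it. This is only a toy integration with an assumed
constant torso inertia and a made-up disturbance; the paper's Eulerian ZMP
formulation, and the bounding and return of the compensating rotation, are not
reproduced here.

```python
import numpy as np

def torso_counter_rotation(yaw_moment, inertia_torso, dt, theta0=0.0, omega0=0.0):
    """Integrate a torso yaw trajectory whose reaction moment opposes the given
    undesired yaw-moment profile. Purely illustrative: constant torso inertia,
    no bounding of the compensating rotation."""
    theta, omega = theta0, omega0
    thetas = []
    for M in yaw_moment:
        alpha = -M / inertia_torso      # torso yaw acceleration that cancels M
        omega += alpha * dt
        theta += omega * dt
        thetas.append(theta)
    return np.array(thetas)

# Example: a sinusoidal disturbance moment over one 0.8 s single-support phase.
t = np.arange(0.0, 0.8, 0.005)
M = 2.0 * np.sin(2.0 * np.pi * t / 0.8)            # N*m, made-up disturbance
theta = torso_counter_rotation(M, inertia_torso=1.2, dt=0.005)
print(f"peak compensating torso rotation ~ {np.degrees(np.abs(theta)).max():.1f} deg")
```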
C1 [Ugurlu, Barkan] Toyota Technol Inst, Dept Adv Sci & Technol, Tempa Ku, Nagoya,
Aichi 4688511, Japan.
[Saglia, Jody A.; Tsagarakis, Nikos G.; Caldwell, Darwin G.] Ist Italiano
Tecnol, Dept Adv Robot, I-16163 Genoa, Italy.
RP Ugurlu, B (reprint author), Toyota Technol Inst, Dept Adv Sci & Technol, Tempa
Ku, 2-12-1 Hisakata, Nagoya, Aichi 4688511, Japan.
EM barkanu@ieee.org; jody.saglia@iit.it; nikos.tsagarakis@iit.it;
darwin.caldwell@iit.it
RI Ugurlu, Barkan/A-4793-2015
OI Ugurlu, Barkan/0000-0002-9124-7441
FU European Commission [ICT-2009-4]
FX This work is supported by the European Commission FP7, "AMARSi" Project
ICT-2009-4. The authors would like to thank Gianluca Pane, Phil Hudson, and
Stephen Morfey for their helpful assistance. The bipedal robot presented in
this paper was first named "cCub" [28]; its name was later changed to
CoMan. Figures 1 and 2 were originally created by Takahiro
Hirabayashi and edited by Barkan Ugurlu, upon Takahiro Hirabayashi's
permission. This work was completed while Barkan Ugurlu was with Italian
Institute of Technology, as a post-doctoral research fellow.
CR Aoustin Y, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2922, DOI
10.1109/IROS.2008.4650725
Balafoutis C. A., 1991, DYNAMIC ANAL ROBOT M, P47
Balafoutis C. A., 1991, IEEE T SYST MAN CYB, V19, P1313, DOI 10.1109/21.44054
Buschmann T., 2010, THESIS TU MUNICH GER
Featherstone R., 2008, RIGID BODY DYNAMICS
Fujimoto Y, 1997, AS CONTR C SEOUL, P327
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
Goswami A., 2008, IEEE RAS INT C HUM R, P182
Hanamiya D., 2004, IEEE INT WORKSH ADV, P403
Hirabayashi T, 2008, AMC '08: 10TH INTERNATIONAL WORKSHOP ON ADVANCED MOTION
CONTROL, VOLS 1 AND 2, PROCEEDINGS, P296
Hofmann A., 2009, IEEE INT C ROB AUT I, P4423
Kajita S, 2007, IEEE INT CONF ROBOT, P3963, DOI 10.1109/ROBOT.2007.364087
Kormushev P, 2011, IEEE INT C INT ROBOT, P318, DOI 10.1109/IROS.2011.6048037
Miura K, 2010, IEEE INT CONF ROBOT, P4249, DOI 10.1109/ROBOT.2010.5509520
Nagasaka K., 1999, IEEE SMC'99 Conference Proceedings. 1999 IEEE International
Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), P908, DOI
10.1109/ICSMC.1999.816673
Orin DE, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P653, DOI 10.1109/IROS.2008.4650772
Park J, 2008, J BIOMECH, V41, P1417, DOI 10.1016/j.jbiomech.2008.02.031
Park JH, 2001, IEEE T ROBOTIC AUTOM, V17, P870, DOI 10.1109/70.976014
Popovic MB, 2005, INT J ROBOT RES, V24, P1013, DOI 10.1177/0278364905058363
Press W. H, 1992, NUMERICAL RECIPES C
Nakamura Y., 2007, IEEE INT C INT ROB S, P444
Suleiman W, 2010, IEEE T ROBOT, V26, P458, DOI 10.1109/TRO.2010.2047531
Tajima R., 2009, IEEE INT C ROB AUT, P1571
Tsagarakis N.G., 2011, IEEE INT C ROB AUT, P2035
Ueda J, 2004, IEEE-RAS INT C HUMAN, P592
Ugurlu B., 2010, IEEE INT C HUM ROB H, P468
Ugurlu B., 2012, IEEE T ROBOT, V28
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI 10.1142/S0219843604000083
YAMAGUCHI J, 1993, IROS 93 : PROCEEDINGS OF THE 1993 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOL 1-3, P561, DOI
10.1109/IROS.1993.583168
YAMAGUCHI J, 1999, IEEE INT C ROB AUT, V1, P368
Yang J., 2006, IEEE INT C INT ROB S, P4441
Yoshida E, 2008, LECT NOTES COMPUT SC, V5277, P210
Zhu C., 2006, IEEE C INT ROB SYST, P5762
NR 33
TC 9
Z9 9
U1 0
U2 4
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD DEC
PY 2012
VL 9
IS 4
AR 1250033
DI 10.1142/S0219843612500338
PG 27
WC Robotics
SC Robotics
GA 076QG
UT WOS:000313971600009
DA 2018-01-22
ER
PT J
AU Ugurlu, B
Kawamura, A
AF Ugurlu, Barkan
Kawamura, Atsuo
TI Bipedal Trajectory Generation Based on Combining Inertial Forces and
Intrinsic Angular Momentum Rate Changes: Eulerian ZMP Resolution
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Angular momentum; biped robot; bipedal walking; humanoid; ZMP
ID HUMANOID-ROBOT; WALKING; LOCOMOTION; POINT
AB This paper aims to present a technique to generate feasible and dynamically
equilibrated ZMP-based center of mass trajectories that can be applied to bipedal
robots. In this regard, we utilize the ZMP concept in the spherical coordinate
frame so that we can fully exploit its properties as this strategy enables us to
efficiently combine intrinsic angular momentum rate change terms with inertial
force terms. That being said, this strategy has certain advantages: 1) In the case
of bipedal walking, undesired torso angle fluctuations are more restrainable
compared with other methods, in which angular momentum information is omitted or
zero-referenced. 2) Composite rigid body inertia, which is a multibody property of
robot dynamics, can be characterized during the trajectory generation task. Thus,
relatively more dynamically consistent trajectories may be obtained. 3) The motion
interference between the sagittal and lateral planes is naturally included. In this
paper, we mainly investigate the first two advantages. Applying the method
described above, we conducted bipedal walking experiments on our actual bipedal
robot MARI-3. As a result, we obtained repetitive, continuous, and dynamically
equilibrated walking cycles, in which undesired torso angles were well suppressed.
Furthermore, ZMP error tends to decrease since inertial parameters of the robot are
characterized. In conclusion, the method is validated to be efficient in inducing
less ZMP error and in suppressing undesired torso angle variations, compared with
both flywheel-superimposed and conventional ZMP-based trajectory generation
methods.
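For background, the familiar flat-ground Cartesian form of the ZMP relation that
already contains the intrinsic angular-momentum-rate term is

\[
p_x \;=\; \frac{m\,x_c\,(\ddot z_c + g) \;-\; m\,\ddot x_c\, z_c \;-\; \dot L_y}{m\,(\ddot z_c + g)}
\;\;\xrightarrow{\;\ddot z_c = 0\;}\;\;
p_x \;=\; x_c \;-\; \frac{z_c}{g}\,\ddot x_c \;-\; \frac{\dot L_y}{m g},
\]

where m is the total mass, (x_c, z_c) the center-of-mass position, g gravity, and
L_y the centroidal angular momentum about the lateral axis. This is stated only as
standard background; the paper's contribution is to resolve the same balance in
spherical coordinates, which is not reproduced here.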
C1 [Ugurlu, Barkan] Toyota Technol Inst, Dept Adv Sci & Technol, Nagoya, Aichi
4688511, Japan.
[Kawamura, Atsuo] Yokohama Natl Univ, Grad Sch Engn, Yokohama, Kanagawa 2408501,
Japan.
RP Ugurlu, B (reprint author), Toyota Technol Inst, Dept Adv Sci & Technol, Nagoya,
Aichi 4688511, Japan.
EM barkanu@toyota-ti.ac.jp; kawamura@ynu.ac.jp
RI kawamura, atsuo/L-8158-2014; Ugurlu, Barkan/A-4793-2015
OI kawamura, atsuo/0000-0002-3085-2314; Ugurlu, Barkan/0000-0002-9124-7441
CR Asada H., 1986, ROBOT ANAL CONTROL, P136
Balafoutis C A, 1991, DYNAMIC ANAL ROBOT M
Balafoutis C. A., 1991, IEEE T SYST MAN CYB, V19, P1313, DOI 10.1109/21.44054
Buschmann T., 2010, THESIS TU MUNICH MUN
Chevallereau C, 2008, IEEE T ROBOT, V24, P390, DOI 10.1109/TRO.2007.913563
Cho BK, 2011, ADV ROBOTICS, V25, P1209, DOI 10.1163/016918611X574687
Featherstone R., 2008, RIGID BODY DYNAMICS
Fujimoto Y, 1998, IEEE ROBOT AUTOM MAG, V5, P33, DOI 10.1109/100.692339
Goswami A., 2009, HUMANOID ROBOTS, P167
Greenwood D. T., 2003, ADV DYNAMICS
Kagami S, 2002, AUTON ROBOT, V12, P71, DOI 10.1023/A:1013210909840
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kajita S, 2007, IEEE INT CONF ROBOT, P3963, DOI 10.1109/ROBOT.2007.364087
Kim JY, 2006, ADV ROBOTICS, V20, P707, DOI 10.1163/156855306777361622
Kormushev P, 2011, IEEE INT C INT ROBOT, P318, DOI 10.1109/IROS.2011.6048037
Kwon O, 2009, AUTON ROBOT, V26, P47, DOI 10.1007/s10514-008-9106-7
Li Q., 1991, P IEEE RSJ INT WORKS, P1568
Ohnishi K, 1996, IEEE-ASME T MECH, V1, P56, DOI 10.1109/3516.491410
ORIN DE, 2008, P IEEE C INT ROB SYS, P653
Popovic MB, 2005, INT J ROBOT RES, V24, P1013, DOI 10.1177/0278364905058363
Pratt J., 2006, P IEEE RAS INT C HUM, P200, DOI DOI 10.1109/ICHR.2006.321385
Press W. H, 1992, NUMERICAL RECIPES C
SUGIHARA T, 2007, P IEEE C INT ROB SYS, P444
Sugihara T, 2009, IEEE T ROBOT, V25, P658, DOI 10.1109/TRO.2008.2012336
Suleiman W, 2010, IEEE T ROBOT, V26, P458, DOI 10.1109/TRO.2010.2047531
Tajima R., 2009, P IEEE INT C ROB AUT, P1571
Takenaka T., 2009, P IEEE C INT ROB SYS, P1594
Ugurlu B., 2011, Proceedings of the 2011 IEEE International Conference on
Mechatronics (ICM), P833, DOI 10.1109/ICMECH.2011.5971230
Ugurlu B., 2009, P IEEE C IND EL CONT, P4167
Ugurlu B., 2010, P IEEE C HUM ROB NAS, P468
Ugurlu B, 2010, IEEE INT CONF ROBOT, P4218, DOI 10.1109/ROBOT.2010.5509427
Ugurlu B, 2010, IEEE T IND ELECTRON, V57, P1701, DOI 10.1109/TIE.2009.2032439
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI 10.1142/S0219843604000083
Yoshida E, 2008, LECT NOTES COMPUT SC, V5277, P210
Zhu C., 2006, P IEEE C HUM ROB GEN, P599
NR 36
TC 8
Z9 8
U1 0
U2 13
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD DEC
PY 2012
VL 28
IS 6
BP 1406
EP 1415
DI 10.1109/TRO.2012.2210478
PG 10
WC Robotics
SC Robotics
GA 051DE
UT WOS:000312104400016
DA 2018-01-22
ER

PT J
AU Hettiarachchi, DS
Iba, H
AF Hettiarachchi, Dhammika Suresh
Iba, Hitoshi
TI An Evolutionary Computational Approach to Humanoid Motion Planning
Regular Paper
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Genetic Algorithm; Asymmetric Motion; Humanoid
AB The theme of our work is centred on humanoid motion planning and balancing using
evolutionary computational techniques. Evolutionary techniques, inspired by the
Darwinian evolution of biological systems, make use of the concept of the iterative
progress of a population of solutions with the aim of finding an optimally fit
solution to a given problem. The problem we address here is that of asymmetric
motion generation for humanoids, with the aim of automatically developing a series
of motions to resemble certain predefined postures. An acceptable trajectory and
stability are of the utmost concern in our work. In developing these motions, we are
utilizing genetic algorithms coupled with heuristic knowledge of the problem
domain. Unlike other types of robots, humanoids are complex in both construction
and operation due to their myriad degrees of freedom and the difficulty of
balancing on one or more limbs. The work presented in this paper includes the
adopted methodology, experimental setup, results and an analysis of the outcome of
a series of evolutionary experiments conducted for generating the said asymmetric
motions.
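A minimal genetic-algorithm loop of the general kind referred to here (a population
of joint-angle vectors, a fitness mixing posture matching with a stability penalty,
truncation selection, and Gaussian mutation) might look like the sketch below. All
sizes, operators, and fitness terms are assumptions for illustration, not the
authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS, POP, GENS = 6, 40, 60                      # illustrative sizes
TARGET = rng.uniform(-0.5, 0.5, N_JOINTS)            # stand-in for a predefined posture

def fitness(pose):
    """Reward closeness to the target posture, penalize a crude stability proxy
    (here just the mean absolute joint angle). Both terms are placeholders."""
    posture_err = np.linalg.norm(pose - TARGET)
    stability_pen = np.mean(np.abs(pose))
    return -(posture_err + 0.1 * stability_pen)

pop = rng.uniform(-1.0, 1.0, (POP, N_JOINTS))
for gen in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:POP // 2]]                   # truncation selection
    children = parents.copy()
    children += rng.normal(0.0, 0.05, children.shape) # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("residual posture error:", np.linalg.norm(best - TARGET))
```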
C1 [Hettiarachchi, Dhammika Suresh] Univ Tokyo, Grad Sch Engn, Dept Elect Engn &
Informat Syst, Tokyo 1138654, Japan.
[Iba, Hitoshi] Univ Tokyo, Grad Sch Informat Sci & Technol, Dept Informat &
Commun Engn, Tokyo 1138654, Japan.
RP Hettiarachchi, DS (reprint author), Univ Tokyo, Grad Sch Engn, Dept Elect Engn &
Informat Syst, Tokyo 1138654, Japan.
EM suresh@iba.t.u-tokyo.ac.jp
CR AYDEMIR D, 2006, P PAR PROBL SOLV NAT, V4193, P651
Cominoli P., 2005, DEV PHYS SIMULATION
Fujitsu Automation Co. Ltd, HOAP 2 INSTR MAN
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Langdon WB, 2001, LECT NOTES COMPUT SC, V2038, P313
Miller O.H., 2004, ESSENTIAL YOGA ILLUS
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
Raffi K., 2005, P 4 INT C MACH LEARN, P5666
Raffi K., 2006, P IEEE INT C FUZZ SY, P3295
Sadati N, 2002, PROCEEDINGS OF THE 2002 IEEE INTERNATIONAL CONFERENCE ON FUZZY
SYSTEMS, VOL 1 & 2, P1144, DOI 10.1109/FUZZ.2002.1006665
Ude A., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE International
Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065),
P2223, DOI 10.1109/ROBOT.2000.846358
Watanabe A., 2009, P 5 INT WORKSH COMP, P278
Yanase T., 2008, FRONTIERS EVOLUTIONA, P567
ZAIER R, 2002, P 20 ANN C ROB SOC J
Zaier R., 2004, P AS S MECH
NR 15
TC 1
Z9 1
U1 0
U2 9
PU SAGE PUBLICATIONS INC
PI THOUSAND OAKS
PA 2455 TELLER RD, THOUSAND OAKS, CA 91320 USA
SN 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD NOV 5
PY 2012
VL 9
AR 167
DI 10.5772/51905
PG 11
WC Robotics
SC Robotics
GA 062EJ
UT WOS:000312901600001
OA gold
DA 2018-01-22
ER

PT J
AU Forte, D
Gams, A
Morimoto, J
Ude, A
AF Forte, Denis
Gams, Andrej
Morimoto, Jun
Ude, Ales
TI On-line motion synthesis and adaptation using a trajectory database
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Robot learning; Programming by demonstration; Real-time generalization
of movements; Statistical methods; Gaussian process regression; Locally
weighted regression; Dynamic movement primitives; Kinesthetic guiding;
Example movements; Radial basis functions
ID HUMANOID ROBOTS; MODEL; REGRESSION; MOVEMENTS; DISCRETE; DYNAMICS;
VISION; TASKS
AB Autonomous robots cannot be programmed in advance for all possible situations.
Instead, they should be able to generalize the previously acquired knowledge to
operate in new situations as they arise. A possible solution to the problem of
generalization is to apply statistical methods that can generate useful robot
responses in situations for which the robot has not been specifically instructed
how to respond. In this paper we propose a methodology for the statistical
generalization of the available sensorimotor knowledge in real-time. Example
trajectories are generalized by applying Gaussian process regression, using the
parameters describing a task as query points into the trajectory database. We show
on real-world tasks that the proposed methodology can be integrated into a sensory
feedback loop, where the generalization algorithm is applied in real-time to adapt
robot motion to the perceived changes of the external world. (C) 2012 Elsevier B.V.
All rights reserved.
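The core generalization step, regressing from task parameters (query points) to
trajectory parameters stored in a database, can be sketched with a small Gaussian
process regression. The 1-D query, the stand-in "weight" vectors, and the kernel
length scale below are hypothetical placeholders; a real system would regress
movement-primitive parameters learned from kinesthetic demonstrations.

```python
import numpy as np

def rbf(a, b, ell=0.15):
    """Squared-exponential kernel between 1-D arrays of query points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical database: 1-D task parameter (e.g., object distance in m) and,
# per example, a vector of trajectory parameters (e.g., primitive weights).
queries = np.array([0.20, 0.30, 0.40, 0.55, 0.70])
weights = np.stack([np.sin(5 * queries + k) for k in range(4)], axis=1)  # stand-in data

def generalize(q_new, sigma_n=1e-3):
    """GP posterior mean of the trajectory parameters at a new query point."""
    K = rbf(queries, queries) + sigma_n * np.eye(len(queries))
    k_star = rbf(np.atleast_1d(q_new), queries)        # shape (1, N)
    return k_star @ np.linalg.solve(K, weights)        # predicted weight vector

print(generalize(0.47))
```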
C1 [Forte, Denis; Gams, Andrej; Ude, Ales] Jozef Stefan Inst, Dept Automat
Biocybernet & Robot, Ljubljana 1000, Slovenia.
[Morimoto, Jun; Ude, Ales] ATR Computat Neurosci Labs, Dept Brain Robot
Interface, Seika, Kyoto 6190288, Japan.
RP Forte, D (reprint author), Jozef Stefan Inst, Dept Automat Biocybernet & Robot,
Jamova Cesta 39, Ljubljana 1000, Slovenia.
EM denis.forte@ijs.si
OI Ude, Ales/0000-0003-3677-3972; Gams, Andrej/0000-0002-9803-3593
FU European Union [FP7-ICT-2009-6-270273]; Slovenian Research Agency
[J2-2348]; SBRPS; MEXT; [23120004]
FX This work was supported in part by the European Union Cognitive Systems
Project Xperience under Grant FP7-ICT-2009-6-270273, Slovenian Research
Agency grant J2-2348, "Brain Machine Interface Development", SBRPS,
MEXT, and Grant-in-Aid for Scientific Research on Innovative Areas:
Prediction and Decision Making 23120004.
CR Atkeson CG, 1997, ARTIF INTELL REV, V11, P11, DOI 10.1023/A:1006559212014
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Bentivegna DC, 2004, ROBOT AUTON SYST, V47, P163, DOI
10.1016/j.robot.2004.03.010
Calinon S, 2010, IEEE ROBOT AUTOM MAG, V17, P44, DOI 10.1109/MRA.2010.936947
Dillmann R, 2004, ROBOT AUTON SYST, V47, P109, DOI 10.1016/j.robot.2004.03.005
FLASH T, 1985, J NEUROSCI, V5, P1688
Gribovskaya E, 2011, INT J ROBOT RES, V30, P80, DOI 10.1177/0278364910376251
Hersch M, 2008, IEEE T ROBOT, V24, P1463, DOI 10.1109/TRO.2008.2006703
Ijspeert AJ, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P958, DOI 10.1109/IRDS.2002.1041514
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Kruger V, 2010, IEEE ROBOT AUTOM MAG, V17, P30, DOI 10.1109/MRA.2010.936961
Kulic D, 2009, IEEE T ROBOT, V25, P1158, DOI 10.1109/TRO.2009.2026508
Liu CG, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P3031, DOI 10.1109/IROS.2009.5354018
Nguyen-Tuong D, 2009, ADV ROBOTICS, V23, P2015, DOI
10.1163/016918609X12529286896877
Pastor P, 2009, P IEEE INT C ROB AUT, P763
Poggio T, 2004, NATURE, V431, P768, DOI 10.1038/nature03014
Rasmussen CE, 2005, ADAPT COMPUT MACH LE, P1
Schaal S, 1998, NEURAL COMPUT, V10, P2047, DOI 10.1162/089976698300016963
Schaal S, 2007, PROG BRAIN RES, V165, P425, DOI 10.1016/S0079-6123(06)65027-9
SCHMIDT RA, 1975, PSYCHOL REV, V82, P225, DOI 10.1037/h0076770
Stasse O, 2008, INT J HUM ROBOT, V5, P287, DOI 10.1142/S021984360800142X
Ude A, 2004, ROBOT AUTON SYST, V47, P93, DOI 10.1016/j.robot.2004.03.004
Ude A., 2009, P 14 INT C ADV ROB M
Ude A, 2010, IEEE T ROBOT, V26, P800, DOI 10.1109/TRO.2010.2065430
NR 24
TC 32
Z9 33
U1 0
U2 7
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD OCT
PY 2012
VL 60
IS 10
BP 1327
EP 1339
DI 10.1016/j.robot.2012.05.004
PG 13
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 015FW
UT WOS:000309432800010
DA 2018-01-22
ER

PT J
AU Cho, C
Lee, W
Lee, J
Kang, S
AF Cho, Changhyun
Lee, Woosub
Lee, Jinyi
Kang, Sungchul
TI A 2-dof gravity compensator with bevel gears
SO JOURNAL OF MECHANICAL SCIENCE AND TECHNOLOGY
LA English
DT Article
DE Bevel gears; Gravity compensation; Manipulator; Static balancing
ID MECHANISM; DESIGN
AB This paper presents a 2-dof gravity compensator used for roll-pitch rotations,
which are often applied to the shoulder joint of a service or humanoid robot. The
2-dof gravity compensator is composed of two 1-dof gravity compensators and a bevel
differential. The roll-pitch rotations are decoupled into two rotations on the
moving link by the bevel differential, and the two 1-dof gravity compensators are
applied to these two rotations. The spring coefficients are determined through
energy and torque analyses in order to achieve complete static balancing. The
experimental results indicate that the proposed gravity compensator effectively
counterbalances the gravitational torques and can be operated throughout the
hemispherical workspace.
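The 1-dof building block of such designs commonly relies on the classic
zero-free-length-spring balancing condition k·a·b = m·g·r_c. The short check below
verifies numerically that the total potential energy is then constant over the joint
angle; all masses and dimensions are illustrative, and the paper's bevel-differential
coupling of the two 1-dof units is not modeled.

```python
import numpy as np

# 1-dof static balancing with an ideal zero-free-length spring: link of mass m,
# CoM at distance r_c from the pivot; spring attached b above the pivot on the
# fixed frame and a along the link. Perfect balance requires k * a * b = m * g * r_c.
m, g, r_c, a, b = 2.0, 9.81, 0.20, 0.10, 0.15        # illustrative values
k = m * g * r_c / (a * b)

theta = np.linspace(0.0, np.pi, 181)                  # link angle from the upward vertical
l2 = a**2 + b**2 - 2 * a * b * np.cos(theta)          # squared spring length
U = m * g * r_c * np.cos(theta) + 0.5 * k * l2        # gravity + spring potential energy
print("potential-energy variation:", U.max() - U.min())   # ~0 -> statically balanced
```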
C1 [Lee, Woosub; Kang, Sungchul] Korea Inst Sci & Technol, Ctr Bion, Seoul 136791,
South Korea.
[Cho, Changhyun; Lee, Jinyi] Chosun Univ, Dept Control Instrument & Robot,
Kwangju 501759, South Korea.
[Lee, Woosub] Tokyo Inst Technol, Dept Mech & Aerosp Engn, Tokyo 1528552, Japan.
RP Kang, S (reprint author), Korea Inst Sci & Technol, Ctr Bion, Seoul 136791,
South Korea.
EM kasch804@gmail.com
OI Lee, Jinyi/0000-0001-9470-2000
FU MKE (The Ministry of Knowledge Economy), Korea, under the ITRC
(Information Technology Research Center) [NIPA-2012-H0301-12-2008]
FX This research was supported by the MKE (The Ministry of Knowledge
Economy), Korea, under the ITRC (Information Technology Research Center)
support program supervised by the NIPA (National IT Industry Promotion
Agency)(NIPA-2012-H0301-12-2008).
CR Agrawal SK, 2004, MECH MACH THEORY, V39, P1331, DOI
10.1016/j.mechmachtheory.2004.05.019
Cho C. H., 2011, P 2011 IEEE RSJ INT, P1857
Cho C., 2010, P 2010 IEEE ASME INT, P1269
DIKEN H, 1995, MECH MACH THEORY, V30, P495, DOI 10.1016/0094-114X(94)00060-X
Gosselin CM, 1998, IEEE INT CONF ROBOT, P2287, DOI 10.1109/ROBOT.1998.680664
Koser K, 2009, MECH RES COMMUN, V36, P523, DOI 10.1016/j.mechrescom.2008.12.005
Lu Q, 2011, RECENT PATENTS ENG, V5, P32
Morita T, 2003, IEEE ASME INT C ADV, P163, DOI 10.1109/AIM.2003.1225089
NATHAN RH, 1985, J MECH TRANSM-T ASME, V107, P508, DOI 10.1115/1.3260755
Ono Y., 2004, J ROB MECHATRON, V16, P563
Russo A, 2005, MECH MACH THEORY, V40, P191, DOI
10.1016/j.mechmachtheory.2004.06.011
ULRICH N, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-3, P1536, DOI 10.1109/ROBOT.1991.131834
WALSH GJ, 1991, MECH MACH THEORY, V26, P155, DOI 10.1016/0094-114X(91)90080-N
Wyrobek KA, 2008, IEEE INT CONF ROBOT, P2165, DOI 10.1109/ROBOT.2008.4543527
NR 14
TC 13
Z9 13
U1 3
U2 9
PU KOREAN SOC MECHANICAL ENGINEERS
PI SEOUL
PA KSTC NEW BLD. 7TH FLOOR, 635-4 YEOKSAM-DONG KANGNAM-KU, SEOUL 135-703,
SOUTH KOREA
SN 1738-494X
EI 1976-3824
J9 J MECH SCI TECHNOL
JI J. Mech. Sci. Technol.
PD SEP
PY 2012
VL 26
IS 9
BP 2913
EP 2919
DI 10.1007/s12206-012-0709-8
PG 7
WC Engineering, Mechanical
SC Engineering
GA 006LA
UT WOS:000308819500033
DA 2018-01-22
ER

PT J
AU Nishiwaki, K
Chestnutt, J
Kagami, S
AF Nishiwaki, Koichi
Chestnutt, Joel
Kagami, Satoshi
TI Autonomous navigation of a humanoid robot over unknown rough terrain
using a laser range sensor
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article; Proceedings Paper
CT 15th International Symposium on Robotics Research (ISSR)
CY AUG 28-31, 2011
CL Flagstaff, AZ
DE Humanoid robots; human-centered and life-like robots; legged robots;
design and control; range sensing; sensing and perception; computer
vision; mapping; mobile and distributed robotics; SLAM
ID WALKING PATTERN GENERATION; MODEL
AB The present paper describes the integration of laser-based perception, footstep
planning, and walking control of a humanoid robot for navigation over previously
unknown rough terrain. A perception system that obtains the shape of the
surrounding environment to an accuracy of a few centimeters is realized based on
input obtained using a scanning laser range sensor. A footstep planner decides the
sequence of stepping positions using the obtained terrain shape. A walking
controller that can cope with a few centimeters of error in terrain shape
measurement is achieved by combining the generation of a 40-ms cycle online walking
pattern and a ground reaction force controller with sensor feedback. An operational
interface was developed to send commands to the robot. A mixed-reality display was
adopted to realize an intuitive interface. The navigation system was implemented on
the HRP-2, a full-size humanoid robot. The performance of the proposed system for
navigation over unknown rough terrain and the accuracy of the terrain shape
measurement were investigated through several experiments.
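One elementary ingredient of a footstep planner operating on a gridded terrain map
is a per-candidate feasibility test. A crude sketch is given below (flatness of the
patch under the sole and a bounded step height, with made-up thresholds); it is an
illustration of the idea, not the planner used in the paper.

```python
import numpy as np

def step_is_feasible(height_map, cell, sole_cells=3, max_dev=0.02, max_step=0.04,
                     current_height=0.0):
    """Crude footstep feasibility test on a gridded height map (illustrative only).

    height_map : 2-D array of terrain heights [m], e.g., built from laser scans
    cell       : (row, col) index of the candidate foot center
    """
    r, c = cell
    patch = height_map[r - sole_cells:r + sole_cells + 1,
                       c - sole_cells:c + sole_cells + 1]
    flat_enough = (patch.max() - patch.min()) < max_dev        # sole fits the patch
    reachable = abs(patch.mean() - current_height) < max_step  # step height is OK
    return flat_enough and reachable

terrain = 0.01 * np.random.default_rng(1).random((40, 40))     # gently rough ground
print(step_is_feasible(terrain, (20, 20)))
```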
C1 [Nishiwaki, Koichi] Natl Inst Adv Ind Sci & Technol, Digital Human Res Ctr, Koto
Ku, Tokyo 1350064, Japan.
[Nishiwaki, Koichi; Kagami, Satoshi] Japan Sci & Technol Agcy, CREST, Tokyo,
Japan.
[Chestnutt, Joel] Boston Dynam Inc, Boston, MA USA.
RP Nishiwaki, K (reprint author), Natl Inst Adv Ind Sci & Technol, Digital Human
Res Ctr, Koto Ku, 2-3-26 Aomi, Tokyo 1350064, Japan.
EM k.nishiwaki@aist.go.jp
RI Kagami, Satoshi/A-6841-2013
CR Buschmann T., 2010, P IEEE RAS INT C HUM, P237
Buschmann T, 2007, IEEE-RAS INT C HUMAN, P1, DOI 10.1109/ICHR.2007.4813841
Chestnutt J, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P3519, DOI 10.1109/IROS.2009.5354571
Chestnutt J, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P3543, DOI 10.1109/IROS.2009.5354575
Chestnutt J, 2007, IEEE-RAS INT C HUMAN, P196, DOI 10.1109/ICHR.2007.4813868
Cupec R., 2005, Automatika, V46, P49
Dimitrov D, 2008, IEEE INT CONF ROBOT, P2685, DOI 10.1109/ROBOT.2008.4543617
Gutmann JS, 2005, 19TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
(IJCAI-05), P1232
Harada K., 2004, P INT C HUM ROB
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S., 2003, P IEEE RSJ INT C INT, V2, P1644
Kaneshima Y., 2001, P 19 ANN C ROB SOC J, P987
Kuffner J, 2003, IEEE INT CONF ROBOT, P932
Loffler K, 2003, IEEE INT CONF ROBOT, P484
Michel P., 2007, P IEEE RSJ INT C INT, P463
Nishiwaki K, 2003, SPRINGER TRAC ADV RO, V5, P85
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
Nishiwaki K, 2001, IEEE INT CONF ROBOT, P4110, DOI 10.1109/ROBOT.2001.933260
Nishiwaki K., 2009, P IEEE RAS INT C HUM, P535
Park I.-W., 2006, P IEEE INT C ROB AUT, P2667
Setiawan S, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P361, DOI 10.1109/ROBOT.1999.770005
Sugihara T., 2005, P 2005 IEEE INT C RO, P306
Yokoi K., 2001, P IEEE RAS INT C HUM, P259
NR 23
TC 33
Z9 33
U1 0
U2 11
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD SEP
PY 2012
VL 31
IS 11
SI SI
BP 1251
EP 1262
DI 10.1177/0278364912455720
PG 12
WC Robotics
SC Robotics
GA 003ZN
UT WOS:000308650000004
DA 2018-01-22
ER

PT J
AU Sakai, M
Kanoh, M
Nakamura, T
AF Sakai, Masashi
Kanoh, Masayoshi
Nakamura, Tsuyoshi
TI Evolutionary Multivalued Decision Diagrams for Obtaining Motion
Representation of Humanoid Robots
SO IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND
REVIEWS
LA English
DT Article
DE Evolutionary computation; genetic programming; motion representation;
multivalued decision diagram (MDD); robot control
ID LOCOMOTION; ALGORITHMS; DESIGN
AB In this paper, we propose a method using multivalued decision diagrams (MDDs) to
obtain motion representations for humanoid robots. Kanoh et al. have proposed a
method that uses multiterminal binary decision diagrams (MTBDDs) to acquire a robot
controller. However, the nonterminal vertices of an MTBDD can only handle the values
0 or 1, so multiple variables are needed to represent a single joint angle. This
increases the number of nonterminal vertices, and the MTBDD that represents the
controller becomes complex. Therefore, we consider using MDDs, whose nonterminal
vertices can take on multiple values. To obtain humanoid robot motion
representations, we propose evolutionary MDDs and show experimental results
comparing evolutionary MDDs and evolutionary MTBDDs through simulations of robot
motion acquisition. Moreover, we verify whether evolving MDDs with a memetic
algorithm is effective.
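The structural point above (nonterminal vertices that branch on multivalued variables
rather than on 0/1) can be illustrated with a toy MDD evaluator. The node table,
variable names, and output values below are invented for illustration and are not the
paper's encoding.

```python
# Minimal sketch of evaluating a multivalued decision diagram (MDD): each
# nonterminal node tests one multivalued variable and branches on its value;
# terminal nodes hold output values (e.g., joint-angle commands).

mdd = {
    "n0": ("sensor_a", {0: "n1", 1: "n1", 2: "t_hi"}),   # 3-valued test variable
    "n1": ("sensor_b", {0: "t_lo", 1: "t_mid"}),
    "t_lo": ("terminal", -0.3),
    "t_mid": ("terminal", 0.0),
    "t_hi": ("terminal", 0.4),
}

def evaluate(node, inputs):
    kind, payload = mdd[node]
    if kind == "terminal":
        return payload                       # output value stored at the terminal
    return evaluate(payload[inputs[kind]], inputs)

print(evaluate("n0", {"sensor_a": 2, "sensor_b": 1}))   # -> 0.4
```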
C1 [Sakai, Masashi; Nakamura, Tsuyoshi] Nagoya Inst Technol, Grad Sch Engn, Nagoya,
Aichi 4668555, Japan.
[Kanoh, Masayoshi] Chukyo Univ, Sch Informat Sci & Technol, Dept Mech & Informat
Technol, Toyota 4700393, Japan.
RP Sakai, M (reprint author), Nagoya Inst Technol, Grad Sch Engn, Nagoya, Aichi
4668555, Japan.
EM sakai@ai.nitech.ac.jp; mkanoh@sist.chukyo-u.ac.jp; tnaka@nitech.ac.jp
FU Ministry of Education, Culture, Sports, Science, and Technology
[20680014]
FX This study was supported in part by the Grant-in-Aid for Young
Scientists (A) of the Ministry of Education, Culture, Sports, Science,
and Technology under Grant 20680014. This paper was recommended by
Associate Editor M.-H. Lim.
CR AKERS SB, 1978, IEEE T COMPUT, V27, P509
Asada M., 2003, P ROBOCUP INT S, P344
Bollig B, 1996, IEEE T COMPUT, V45, P993, DOI 10.1109/12.537122
BRYANT RE, 1986, IEEE T COMPUT, V35, P677
Caponio A, 2007, IEEE T SYST MAN CY B, V37, P28, DOI 10.1109/TSMCB.2006.883271
Chen X. S., 2011, IEEE T EVOL IN PRESS
Forlizzi J, 2006, P 1 ACM SIGCHI SIGAR, P258, DOI DOI 10.1145/1121241.1121286
FRIEDMAN SJ, 1990, IEEE T COMPUT, V39, P710, DOI 10.1109/12.53586
Fujita M, 2004, P IEEE, V92, P1804, DOI 10.1109/JPROC.2004.835364
Fujita M., 1988, P INT C COMPUTER AID, P2
FUJITA Y, 2002, J ROBOTICS MECHATRON, V14, P60
Ishibuchi H, 2004, FUZZY SET SYST, V141, P59, DOI 10.1016/S0165-0114(03)00114-3
KAMIO S, 2003, P GEN EV COMP C GECC, P470
Kanoh M., 2005, KANSEI ENG INT, V5, P35
Kanoh M., 2008, J JAPAN SOC FUZZY TH, V20, P909
Knuth D. E., 2009, ART COMPUTER PROGRAM, V4
Koza JR, 1992, GENETIC PROGRAMMING
Kuwayama K, 2004, PROCEEDINGS OF THE 2004 INTERNATIONAL SYMPOSIUM ON MICRO-
NANOMECHATRONICS AND HUMAN SCIENCE, P157
Kuwayama K., 2004, P INT C INF CONTR AU, V2, P335
MALIK S, 1988, P INT C COMP AID DES, P6
Minato S., 1993, Joho Shori, V34, P593
Morimoto J, 2001, ROBOT AUTON SYST, V36, P37, DOI 10.1016/S0921-8890(01)00113-0
Morimoto J, 2007, IEEE ROBOT AUTOM MAG, V14, P41, DOI 10.1109/MRA.2007.380654
Moriwaki K, 1996, RO-MAN '96 - 5TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND
HUMAN COMMUNICATION, PROCEEDINGS, P96, DOI 10.1109/ROMAN.1996.568769
Moriwaki K., 1997, SOFT COMPUTING ENG D, P153
Moscato P., 1989, 826 CALTECH CONC COM
Nakamura Y, 2007, NEURAL NETWORKS, V20, P723, DOI 10.1016/j.neunet.2007.01.002
Ogihara N, 2001, BIOL CYBERN, V84, P1, DOI 10.1007/PL00007977
Ong YS, 2006, IEEE T EVOLUT COMPUT, V10, P392, DOI 10.1109/TEVC.2005.859464
Ong YS, 2010, IEEE COMPUT INTELL M, V5, P24, DOI 10.1109/MCI.2010.936309
PANDA S, 1995, P INT C COMP AID DES, P74
Prasad P. W. C., 2006, INT J COMPUT SCI, V1, P1
Rudell R., 1993, P INT C COMP AID DES, P42
Sakai M., 2010, P IEEE WORLD C COMP, P550
Saulnier P., 2009, ACM IEEE INT C HUM R, P263
Somenzi F., 2001, International Journal on Software Tools for Technology
Transfer, V3, P171, DOI 10.1007/s100090100042
Srinivasan A., 1990, P INT C COMP AID DES, P92
Sung J. Y., 2008, P 3 ACM IEEE INT C H, P129, DOI [10.1145/1349822.1349840, DOI
10.1145/1349822.1349840]
Wada K, 2007, IEEE T ROBOT, V23, P972, DOI 10.1109/TRO.2007.906261
Yanagiya M., 1993, Joho Shori, V34, P617
Yanase T., 2008, FRONT EVOL ROBOT
NR 41
TC 1
Z9 1
U1 0
U2 2
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1094-6977
EI 1558-2442
J9 IEEE T SYST MAN CY C
JI IEEE Trans. Syst. Man Cybern. Part C-Appl. Rev.
PD SEP
PY 2012
VL 42
IS 5
SI SI
BP 653
EP 663
DI 10.1109/TSMCC.2011.2176487
PG 11
WC Computer Science, Artificial Intelligence; Computer Science,
Cybernetics; Computer Science, Interdisciplinary Applications
SC Computer Science
GA 993YJ
UT WOS:000307895800005
DA 2018-01-22
ER

PT J
AU Shimizu, T
Saegusa, R
Ikemoto, S
Ishiguro, H
Metta, G
AF Shimizu, Toshihiko
Saegusa, Ryo
Ikemoto, Shuhei
Ishiguro, Hiroshi
Metta, Giorgio
TI Self-protective whole body motion for humanoid robots based on synergy
of global reaction and local reflex
SO NEURAL NETWORKS
LA English
DT Article
DE Incremental learning; Physical interaction; Force detection; Reflex;
Humanoid robot
AB This paper describes a self-protective whole-body motor controller to enable
life-long learning of humanoid robots. In order to reduce the damage to robots
caused by physical interaction, such as collisions with obstacles, we introduce
self-protective behaviors based on the adaptive coordination of full-body global
reactions and local limb reflexes. Global reactions aim at adaptive whole-body
movements that prepare for harmful situations, and the system incrementally learns
increasingly effective associations between states and global reactions. Local
reflexes, based on a force-torque sensing function, reduce the impact load on the
limbs independently of high-level motor intention. We examined the proposed method
with a robot simulator under various conditions and then applied the system to a
real humanoid robot. (C) 2012 Elsevier Ltd. All rights reserved.
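A minimal form of the local-reflex idea, thresholding the sensed contact wrench and
commanding a small retraction independently of the planned motion, is sketched below.
The thresholds, gain, and retraction rule are purely illustrative assumptions.

```python
import numpy as np

def local_reflex(wrench, force_limit=30.0, torque_limit=5.0, retract_gain=0.002):
    """Local limb reflex: if the sensed contact wrench exceeds a threshold, command
    a small retraction along the force direction, regardless of the planned motion.

    wrench : 6-vector [fx, fy, fz, tx, ty, tz] from a force-torque sensor
    Returns a Cartesian position offset for the limb end-effector, or None.
    """
    f, tau = np.asarray(wrench[:3]), np.asarray(wrench[3:])
    if np.linalg.norm(f) < force_limit and np.linalg.norm(tau) < torque_limit:
        return None                         # no reflex; keep the planned motion
    direction = f / (np.linalg.norm(f) + 1e-9)
    return -retract_gain * np.linalg.norm(f) * direction   # retract away from contact

print(local_reflex([45.0, 0.0, -10.0, 0.2, 0.0, 0.0]))
```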
C1 [Shimizu, Toshihiko; Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn Sci, Dept Syst
Innovat, Toyonaka, Osaka 5608531, Japan.
[Shimizu, Toshihiko; Saegusa, Ryo; Metta, Giorgio] Italian Inst Technol, Dept
Robot Brain & Cognit Sci, I-16163 Genoa, Italy.
[Ikemoto, Shuhei] Osaka Univ, Grad Sch Informat Sci & Technol, Dept Multimedia
Engn, Suita, Osaka, Japan.
RP Shimizu, T (reprint author), Osaka Univ, Grad Sch Engn Sci, Dept Syst Innovat,
1-3 Machikaneyama, Toyonaka, Osaka 5608531, Japan.
EM shimizu.toshihiko@is.sys.es.oska-u.ac.jp; ryos@ieee.org
FU EU [215805]
FX We would like to appreciate the significant contribution made by Prof.
Susan Campbell towards proofreading. This work is partially supported by
EU FP7 project CHRIS (Cooperative Human Robot Interaction System FP7
215805).
CR Bermudez i Badia S., 2004, NEUR NETW 2004 P 200, V3, P1757
Bauer C, 2010, IEEE INT C INT ROBOT, P2572, DOI 10.1109/IROS.2010.5648900
Boone GN, 1997, AUTON ROBOT, V4, P259, DOI 10.1023/A:1008891909459
BROOKS RA, 1991, ARTIF INTELL, V47, P139, DOI 10.1016/0004-3702(91)90053-M
Folgheraiter M., 2003, P SYROCO
Fujiwara K., 2002, IEEE RSJ INT C INT R, V3, P2521
Hamker FH, 2001, NEURAL NETWORKS, V14, P551, DOI 10.1016/S0893-6080(01)00018-1
Huang Q, 2005, IEEE T ROBOT, V21, P977, DOI 10.1109/TRO.2005.851381
Ivaldi S., 2011, P 11 IEEE RAS INT C
Metta G., 2006, International Journal of Advanced Robotic Systems, V3, P43
Metta G., 2008, P 8 WORKSH PERF METR, P50, DOI DOI 10.1145/1774674.1774683
Mohammad Y., 2009, LECT NOTES ARTIF INT, P123
Nakamura Y., 1999, ROB AUT 1999 P 1999, V3, P2398
Natale L, 2002, ROBOT AUTON SYST, V39, P87, DOI 10.1016/S0921-8890(02)00174-4
Parmiggiani A., 2009, IEEE RAS INT C HUM R
Renner R., 2006, IEEE RSJ INT C INT R, P2967
Shen FR, 2006, NEURAL NETWORKS, V19, P90, DOI 10.1016/j.neunet.2005.04.006
Shen Furao, 2008, Neural Netw, V21, P1537, DOI 10.1016/j.neunet.2008.07.001
Shimizu T., 2011, INT JOINT C NEUR NET, P2860
Sudo A, 2009, IEEE T NEURAL NETWOR, V20, P964, DOI 10.1109/TNN.2009.2014374
Tikhanoff V., 2008, INT C INT ROB SYST I
Tsagarakis NG, 2007, ADV ROBOTICS, V21, P1151, DOI 10.1163/156855307781389419
Wikman T., 2002, ROBOTICS AUTOMATION
Yoshikai T., 2003, MULT FUS INT INT SYS, P9
Zafeiriou DI, 2004, PEDIATR NEUROL, V31, P1, DOI
10.1016/j.pediatrneurol.2004.01.012
NR 25
TC 4
Z9 4
U1 0
U2 6
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0893-6080
EI 1879-2782
J9 NEURAL NETWORKS
JI Neural Netw.
PD AUG
PY 2012
VL 32
SI SI
BP 109
EP 118
DI 10.1016/j.neunet.2012.02.011
PG 10
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA 970WI
UT WOS:000306162600012
PM 22377658
DA 2018-01-22
ER

PT J
AU Sreenivasa, M
Soueres, P
Laumond, JP
AF Sreenivasa, Manish
Soueres, Philippe
Laumond, Jean-Paul
TI Walking to Grasp: Modeling of Human Movements as Invariants and an
Application to Humanoid Robotics
SO IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND
HUMANS
LA English
DT Article
DE Anthropomorphism; humanoid robots; motion-analysis; motion-planning
ID MOTION; HEAD; LOCOMOTION; BEHAVIOR; BODY; COORDINATION; GENERATION;
MECHANISMS; REACH; PATH
AB Concurrent advancements in mechanical design and motion planning algorithms
allow state-of-the-art humanoid robots to exhibit complex and realistic behavior.
In face of this added complexity and the need for humanlike behavior, research has
begun to look toward studies in human neuroscience to better organize and guide
humanoid robot motion. In this paper, we present one such method of generating
anthropomorphic motion by building the "invariants" of human movements and applying
them as kinematic tasks. Whole-body motion of 14 healthy participants was recorded
during a walking and grasping task. The recorded data were statistically analyzed
to extract invariants which best described the observed motion. These invariants
were expressed as a set of rules that were used to synthesize the stereotypy in
human motion. We propose an algorithm that reproduces the key parameters of motion,
taking into account the knowledge from human movement and the limitations of the
target anthropomorph. The results are then generalized such that we can generate
motion for targets which were not originally recorded. The algorithmic output is
applied in a task-based prioritized inverse kinematics solver to generate
dynamically stable and realistic anthropomorphic motion. We illustrate our results
on the humanoid HRP-2 by making it walk to and grasp objects at various positions.
Our approach complements classical optimization or motion-planning-based methods
and provides interesting perspectives toward the use of human movements for
deducing effective cost functions in optimization techniques or heuristics for
planning algorithms.
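The task-based prioritized inverse kinematics solver mentioned above is commonly
realized with null-space projection at the velocity level. The sketch below shows the
standard two-priority formula on toy Jacobians, as an illustration of the kind of
solver involved rather than the authors' implementation.

```python
import numpy as np

def prioritized_ik_step(J1, dx1, J2, dx2):
    """One velocity-level step of two-priority task-based IK: task 1 (e.g., balance)
    is satisfied first, task 2 (e.g., hand or gaze) only in the null space of task 1.
    Standard null-space projection formula."""
    J1_pinv = np.linalg.pinv(J1)
    dq1 = J1_pinv @ dx1
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector of task 1
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + dq2

rng = np.random.default_rng(2)
J1, J2 = rng.normal(size=(3, 10)), rng.normal(size=(3, 10))   # toy Jacobians, 10 dof
dq = prioritized_ik_step(J1, np.array([0.0, 0.0, 0.01]), J2, np.array([0.02, 0.0, 0.0]))
print(np.allclose(J1 @ dq, [0.0, 0.0, 0.01]))                 # task 1 met exactly
```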
C1 [Sreenivasa, Manish] Univ Tokyo, Nakamura Lab, Dept Mechanoinformat, Tokyo
1138656, Japan.
[Soueres, Philippe; Laumond, Jean-Paul] Univ Toulouse 3, Lab Anal & Architecture
Syst, Inst Natl Sci Appl Toulouse,Ctr Natl Rech Sci, Inst Natl Polytech
Toulouse,Inst Super Aeronat &, F-31077 Toulouse, France.
RP Sreenivasa, M (reprint author), Univ Tokyo, Nakamura Lab, Dept Mechanoinformat,
Tokyo 1138656, Japan.
EM manu@laas.fr; soueres@laas.fr; jpl@laas.fr
FU French Agence Nationale de la Recherche project LOCANTHROPE; Fonds
Unique Interministeriel project ROMEO
FX Manuscript received January 11, 2011; revised June 16, 2011; accepted
September 18, 2011. Date of publication January 17, 2012; date of
current version June 13, 2012. This paper has supplementary downloadable
material available at http://ieeexplore.ieee.org provided by the
authors. This work was supported in part by the French Agence Nationale
de la Recherche project LOCANTHROPE and in part by the Fonds Unique
Interministeriel project ROMEO. A short version of this study appeared
at the 2010 IEEE/Robotics and Automation Society-Engineering, Medicine,
and Biology Society International Conference on Biomedical Robotics and
Biomechatronics, Tokyo, Japan. Research for this study was conducted at
the Laboratoire d'Analyse et d'Architecture des Systemes, Centre
National de la Recherche Scientifique. This paper was recommended by
Associate Editor A. Howard.
CR ALEXANDER RM, 1996, OPTIMA ANIMALS
Arechavaleta G, 2008, AUTON ROBOT, V25, P25, DOI 10.1007/s10514-007-9075-2
Bernstein N, 1967, COORDINATION REGULAT
Berthoz A., 2000, BRAINS SENSE MOVEMEN
Boulic R., 1990, Visual Computer, V6, P344, DOI 10.1007/BF01901021
Choset H. M., 2005, PRINCIPLES ROBOT MOT
Coffey N, 2011, HUM MOVEMENT SCI, V30, P1144, DOI 10.1016/j.humov.2010.11.005
Dariush B., 2008, P IEEE RSJ INT C INT, P191
Dariush B, 2008, IEEE INT CONF ROBOT, P2677, DOI 10.1109/ROBOT.2008.4543616
Flanders M, 1999, EXP BRAIN RES, V126, P19, DOI 10.1007/s002210050713
Flash T., 1985, NEUROSCIENCE, V5, P1688
Harada K., 2009, P IEEE RSJ INT C INT, P1071
Hicheur H, 2005, NEUROSCI LETT, V383, P87, DOI 10.1016/j.neulet.2005.03.046
Hicheur H, 2007, EUR J NEUROSCI, V26, P2376, DOI 10.1111/j.1460-
9568.2007.05836.x
HOFF B, 1993, J MOTOR BEHAV, V25, P175, DOI 10.1080/00222895.1993.9942048
Imai T, 2001, EXP BRAIN RES, V136, P1, DOI 10.1007/s002210000533
Johansson RS, 2001, J NEUROSCI, V21, P6917
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kanoun O., 2009, THESIS U TOULOUSE TO
Kanoun O, 2011, INT J ROBOT RES, V30, P476, DOI 10.1177/0278364910371238
Khatib O., 2004, INT J HUM ROBOT, V1, P29, DOI [10.1142/S0219843604000058, DOI
10.1142/S0219843604000058]
Kuffner JJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P500, DOI
10.1109/IROS.2001.973406
Kulic D., 2010, P IEEE RSJ INT C INT, P4649
Kulic D, 2009, IEEE T ROBOT, V25, P1158, DOI 10.1109/TRO.2009.2026508
Lamiraux F, 2004, IEEE T ROBOTIC AUTOM, V20, P967, DOI 10.1109/TRO.2004.829459
Latombe J.-C., 1991, ROBOT MOTION PLANNIN
Laumond J. P., 1998, ROBOT MOTION PLANNIN
LaValle SM, 2001, INT J ROBOT RES, V20, P378, DOI 10.1177/02783640122067453
LaValle S. M., 2006, PLANNING ALGORITHMS
Lee JH, 2002, ACM T GRAPHIC, V21, P491
LIEGEOIS A, 1977, IEEE T SYST MAN CYB, V7, P868
Mansard N, 2007, IEEE INT CONF ROBOT, P3041, DOI 10.1109/ROBOT.2007.363934
[Anonymous], 1979, FIN IMP RESP FIR FIL
McGuigan F. J., 1996, EXPT PSYCHOL METHODS
Mombaur K, 2010, AUTON ROBOT, V28, P369, DOI 10.1007/s10514-009-9170-7
Montecillo-Puente FJ, 2010, ICINCO 2010: PROCEEDINGS OF THE 7TH INTERNATIONAL
CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, VOL 2, P22
Morisawa M., 2006, P IEEE RAS INT C HUM, P581
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
Patla AE, 1999, EXP BRAIN RES, V129, P629, DOI 10.1007/s002210050932
Pettre J, 2006, COMPUT ANIMAT VIRT W, V17, P109, DOI 10.1002/cav.76
Prevost P, 2003, NEUROSCI LETT, V339, P243, DOI 10.1016/S0304(02)01390-3
Saab L., 2011, P IEEE INT C ROB AUT, P1091
Samson C., 1991, ROBOT CONTROL TASK F
Sekiya N, 1996, J HUM MOVEMENT STUD, V30, P241
Siciliano B., 1991, P IEEE INT C ADV ROB, P1211
Soueres P., 2003, P 7 S ROB CONTR WROC, P423
Sreenivasa MN, 2008, EXP BRAIN RES, V191, P313, DOI 10.1007/s00221-008-1525-3
Sreenivasa M. N., 2009, P IEEE RSJ INT C INT, P4451
Stasse O., 2009, P IEEE RAS INT C HUM, P284
Takano W, 2007, IEEE INT CONF ROBOT, P3092, DOI 10.1109/ROBOT.2007.363942
VUKOBRATOVIC M, 1990, SCI FUNDAMENTALS ROB
Yoshida E., 2007, P IEEE RAS INT C HUM, P89
NR 52
TC 13
Z9 14
U1 0
U2 7
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1083-4427
EI 1558-2426
J9 IEEE T SYST MAN CY A
JI IEEE Trans. Syst. Man Cybern. Part A-Syst. Hum.
PD JUL
PY 2012
VL 42
IS 4
BP 880
EP 893
DI 10.1109/TSMCA.2011.2178830
PG 14
WC Computer Science, Cybernetics; Computer Science, Theory & Methods
SC Computer Science
GA 962XU
UT WOS:000305584400008
DA 2018-01-22
ER
PT J
AU Hayet, JB
Esteves, C
Arechavaleta, G
Stasse, O
Yoshida, E
AF Hayet, Jean-Bernard
Esteves, Claudia
Arechavaleta, Gustavo
Stasse, Olivier
Yoshida, Eiichi
TI HUMANOID LOCOMOTION PLANNING FOR VISUALLY GUIDED TASKS
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Nonholonomic motion planning; landmark-based navigation; humanoid motion
generation
ID WALKING; MOTION
AB In this work, we propose a landmark-based navigation approach that integrates
(1) high-level motion planning capabilities that take into account the landmarks
position and visibility and (2) a stack of feasible visual servoing tasks based on
footprints to follow. The path planner computes a collision-free path that
considers sensory, geometric, and kinematic constraints that are specific to
humanoid robots. Based on recent results in movement neuroscience that suggest that
most humans exhibit nonholonomic constraints when walking in open spaces, the
humanoid steering behavior is modeled as a differential-drive wheeled robot (DDR).
The obtained paths are made of geometric primitives that are the shortest in
distance in free spaces. The footprints around the path and the positions of the
landmarks to which the gaze must be directed are used within a stack-of-tasks (SoT)
framework to compute the whole-body motion of the humanoid. We provide some
experiments that verify the effectiveness of the proposed strategy on the HRP-2
platform.
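The steering model referred to above, a differential-drive (unicycle) vehicle,
reduces to the kinematics sketched below; the speeds and duration are illustrative
and the footprint generation and stack-of-tasks control are not reproduced.

```python
import numpy as np

def ddr_rollout(v, omega, dt=0.05, steps=60, pose=(0.0, 0.0, 0.0)):
    """Differential-drive (unicycle) kinematics: forward speed v and turn rate
    omega produce heading-constrained, nonholonomic motion."""
    x, y, th = pose
    path = []
    for _ in range(steps):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += omega * dt
        path.append((x, y, th))
    return np.array(path)

path = ddr_rollout(v=0.3, omega=0.4)      # gentle left arc, 3 s at 0.3 m/s
print(path[-1])                           # final (x, y, heading)
```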
C1 [Hayet, Jean-Bernard] CIMAT, Ctr Invest Matemat, Guanajuato 36240, Mexico.
[Esteves, Claudia; Arechavaleta, Gustavo] Univ Guanajuato, Dept Matemat,
Guanajuato 36240, Mexico.
[Arechavaleta, Gustavo] IPN, Ctr Invest & Estudios Avanzados, Robot & Adv Mfg
Grp, Saltillo, Coah, Mexico.
[Stasse, Olivier] LAAS CNRS, Grp Gepetto, Lab Anal & Architecture Syst, F-31077
Toulouse 4, France.
[Yoshida, Eiichi] CNRS AIST, JRL, Intelligent Syst Res Inst, UMI CRT 3218,
Tsukuba, Ibaraki 3058568, Japan.
RP Hayet, JB (reprint author), CIMAT, Ctr Invest Matemat, Jalisco S-N Col
Valenciana, Guanajuato 36240, Mexico.
EM jbhayet@cimat.mx; cesteves@cimat.mx; garechav@cinvestav.edu.mx;
ostasse@laas.fr; e.yoshida@aist.go.jp
RI Yoshida, Eiichi/M-3756-2016
OI Yoshida, Eiichi/0000-0002-3077-6964; Hayet,
Jean-Bernard/0000-0002-6662-2553
CR Arechavaleta G, 2008, AUTON ROBOT, V25, P25, DOI 10.1007/s10514-007-9075-2
Bhattacharya S, 2007, IEEE T ROBOT, V23, P47, DOI 10.1109/TRO.2006.886841
Chestnutt J, 2004, IEEE-RAS INT C HUMAN, P422
Chestnutt J., 2003, P IEEE RAS INT C HUM
Escande A, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, P435, DOI 10.1109/IROS.2009.5354371
Hauser K., 2005, P IEEE RAS INT C HUM, P7
Hayet J. B., 2009, P 9 IEEE RAS INT C H, P196
Hayet J.-B., 2010, ROBOTICS AUTON UNPUB
Hayet J. B., 2008, P INT WORKSH ALG FDN, P333
Hicheur H, 2007, EUR J NEUROSCI, V26, P2376, DOI 10.1111/j.1460-
9568.2007.05836.x
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
Kanoun O., 2009, P IEEE INT C ROB AUT, P2939
LAUMOND JP, 1994, IEEE T ROBOTIC AUTOM, V10, P577, DOI 10.1109/70.326564
Mansard N, 2007, IEEE INT CONF ROBOT, P3041, DOI 10.1109/ROBOT.2007.363934
Mansard N, 2007, IEEE T ROBOT, V23, P60, DOI 10.1109/TRO.2006.889487
Mansard N, 2009, IEEE T ROBOT, V25, P670, DOI 10.1109/TRO.2009.2020345
Marchand E, 2005, IEEE ROBOT AUTOM MAG, V12, P40, DOI 10.1109/MRA.2005.1577023
MIRTICH B, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P2533, DOI 10.1109/ROBOT.1992.220060
Nakamura Y., 1990, ADV ROBOTICS REDUNDA
Nishiwaki K, 2007, PHILOS T R SOC A, V365, P79, DOI 10.1098/rsta.2006.1921
Salaris P, 2010, IEEE T ROBOT, V26, P269, DOI 10.1109/TRO.2009.2039379
Siciliano B., 1991, P IEEE INT C ADV ROB, P1211
Stasse O, 2008, INT J HUM ROBOT, V5, P287, DOI 10.1142/S021984360800142X
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
Truong TVA, 2010, P IEEE RAS-EMBS INT, P632, DOI 10.1109/BIOROB.2010.5626817
YANG Y, 2006, P ROB SCI SYST
Yoshida E, 2008, IEEE T ROBOT, V24, P1186, DOI 10.1109/TRO.2008.2002312
Yoshida E, 2010, AUTON ROBOT, V28, P77, DOI 10.1007/s10514-009-9143-x
Yoshida E, 2009, COMPUT ANIMAT VIRT W, V20, P511, DOI 10.1002/cav.280
NR 30
TC 4
Z9 4
U1 0
U2 6
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2012
VL 9
IS 2
AR 1250009
DI 10.1142/S0219843612500090
PG 26
WC Robotics
SC Robotics
GA 987XW
UT WOS:000307452200002
DA 2018-01-22
ER

PT J
AU Tsujita, T
Konno, A
Komizunai, S
Nomura, Y
Myojin, T
Ayaz, Y
Uchiyama, M
AF Tsujita, Teppei
Konno, Atsushi
Komizunai, Shunsuke
Nomura, Yuki
Myojin, Tomoya
Ayaz, Yasar
Uchiyama, Masaru
TI HUMANOID ROBOT MOTION GENERATION SCHEME FOR TASKS UTILIZING IMPULSIVE
FORCE
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Impact motion; humanoid robot; impulsive force
AB In order to exert a large force on the environment, it is effective to apply an
impulsive force. We refer to motions in which tasks are performed by applying
impulsive force as "impact motions." This paper proposes a way to generate impact
motions that allow humanoid robots to exert a large force, together with a feedback
control method for driving a nail robustly. The impact motion is optimized based on
a three-dimensional model using sequential quadratic programming (SQP). In this
research, a nailing task is taken as an example of an impact motion. A dominant
parameter for driving a nail strongly is identified, and motions that maximize this
parameter are generated while considering the robot's postural stability. In order
to evaluate the proposed scheme, a life-sized humanoid robot drives nails into a
plate made of chemical wood. The optimized motion is compared with a motion designed
heuristically by a human; the average driving depth is clearly increased by the
proposed method.
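To make the optimization step concrete, the toy problem below uses an SQP solver
(SciPy's SLSQP) to pick impact joint velocities that maximize a crude hammer-head
speed surrogate under a made-up stability inequality. The link lengths, bounds, and
constraint are illustrative assumptions; the paper's optimization over a full
three-dimensional robot model is far richer and is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.30, 0.25                       # illustrative upper-arm / forearm lengths [m]

def neg_head_speed(qd):                   # SQP minimizes, so negate the objective
    # Hammer-head speed for a stretched-out 2-link arm: (L1+L2)*qd1 + L2*qd2
    return -((L1 + L2) * qd[0] + L2 * qd[1])

stability = {"type": "ineq",              # crude momentum-like bound as a stand-in
             "fun": lambda qd: 8.0 - (2.0 * qd[0] ** 2 + 1.0 * qd[1] ** 2)}

res = minimize(neg_head_speed, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(0.0, 3.0), (0.0, 3.0)], constraints=[stability])
print("impact joint velocities:", res.x, " head speed:", -res.fun)
```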
C1 [Tsujita, Teppei; Konno, Atsushi; Komizunai, Shunsuke; Nomura, Yuki; Myojin,
Tomoya; Ayaz, Yasar; Uchiyama, Masaru] Tohoku Univ, Grad Sch Engn, Dept Aerosp
Engn, Sendai, Miyagi 9808579, Japan.
RP Tsujita, T (reprint author), Tohoku Univ, Grad Sch Engn, Dept Aerosp Engn, 6-6-
01 Aoba Yama, Sendai, Miyagi 9808579, Japan.
EM tsujita@space.mech.tohoku.ac.jp; konno@space.mech.tohoku.ac.jp;
uchiyama@space.mech.tohoku.ac.jp
FU NEDO Industrial Technology Research Grant Program [05A30703a]; JSPS
[20.6273]
FX This research was supported by NEDO Industrial Technology Research Grant
Program (Project ID: 05A30703a) and JSPS Grant-in-Aid for JSPS Fellows
(20.6273).
CR Arisumi H., 2008, SICE SYST INT DIV AN, P1067
ASADA H, 1987, IEEE T ROBOTIC AUTOM, P751
Ferretti G, 1998, IEEE CONTR SYST MAG, V18, P65, DOI 10.1109/37.710879
Hirukawa H, 2004, ROBOT AUTON SYST, V48, P165, DOI 10.1016/j.robot.2004.07.007
ISOZUMI T, 2004, J ROBOTICS SOC JAPAN, V22, P1004
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
KHATIB O, 1986, IEEE T ROBOTIC AUTOM, P1381
Matsumoto T., 2006, INT C INT ROB SYST 2, P5919
Potkonjak V., 2005, INT J HUM ROBOT, V2, P21
Potkonjak V, 2006, INT J HUM ROBOT, V3, P21, DOI 10.1142/S0219843606000679
Takase K., 1990, Journal of the Society of Instrument and Control Engineers,
V29, P213
Tsujio S., 1999, SME C ROB MECH ROBOM
Tsujita Teppei, 2010, Cutting Edge Robotics 2010, P175
Uchiyama M., 2005, 36 INT S ROB ISR TOK
UCHIYAMA M, 1975, BIOMECHANISM, V3, P172
Vukobratovic M, 1990, BIPED LOCOMOTION DYN
WALKER ID, 1994, IEEE T ROBOTIC AUTOM, V10, P670, DOI 10.1109/70.326571
YOSHIDA K, 1993, IROS 93 : PROCEEDINGS OF THE 1993 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOL 1-3, P2064, DOI
10.1109/IROS.1993.583915
Yoshida K., 1991, IEEE INT C ROB AUT, P2516
YOSHIDA K, 1992, IEEE INT C ROB AUT I, P899
ZHENG Y, 1985, J ROBOTIC SYST, V2, P289, DOI 10.1002/rob.4620020307
NR 21
TC 0
Z9 0
U1 0
U2 1
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2012
VL 9
IS 2
AR 1250008
DI 10.1142/S0219843612500089
PG 23
WC Robotics
SC Robotics
GA 987XW
UT WOS:000307452200001
DA 2018-01-22
ER

PT J
AU Nishide, S
Tani, J
Takahashi, T
Okuno, HG
Ogata, T
AF Nishide, Shun
Tani, Jun
Takahashi, Toru
Okuno, Hiroshi G.
Ogata, Tetsuya
TI Tool-Body Assimilation of Humanoid Robot Using a Neurodynamical System
SO IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT
LA English
DT Article
DE Active sensing; humanoid robots; recurrent neural network; tool-body
assimilation
ID REPRESENTATION; AFFORDANCES; DYNAMICS
AB Research in brain science has uncovered the human capability to use tools as if
they were part of the body (known as tool-body assimilation), acquired through trial
and experience. This paper presents a method that applies a robot's active sensing
experience to create a tool-body assimilation model. The model is composed of a
feature extraction module, a dynamics learning module, and a tool-body assimilation
module. A self-organizing map (SOM) is used in the feature extraction module to
extract object features from raw images. A multiple time-scales recurrent neural
network (MTRNN) is used as the dynamics learning module. Parametric bias (PB) nodes
are attached to the weights of the MTRNN as a second-order network to modulate the
behavior of the MTRNN based on the properties of the tool. The generalization
capability of neural networks provides the model with the ability to deal with
unknown tools. Experiments were conducted with the humanoid robot HRP-2 using no
tool and I-shaped, T-shaped, and L-shaped tools. The distribution of PB values shows
that the model has learned that the robot's dynamic properties change when it holds
a tool. Motion generation experiments show that the tool-body assimilation model can
be applied to unknown tools to generate goal-oriented motions.
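A minimal two-timescale leaky recurrent forward pass with parametric-bias inputs, of
the general family the abstract describes, is sketched below. The sizes, time
constants, and random weights are stand-ins; no training, SOM feature extraction, or
PB inference is included.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fast units (small time constant) handle detailed dynamics, slow units (large time
# constant) hold longer context, and a constant PB vector modulates the trajectory.
N_FAST, N_SLOW, N_PB, N_IN = 20, 6, 2, 4
TAU_FAST, TAU_SLOW = 2.0, 50.0

W = rng.normal(0.0, 0.2, (N_FAST + N_SLOW, N_FAST + N_SLOW + N_PB + N_IN))

def rollout(pb, inputs):
    """Integrate leaky (continuous-time) RNN units for a sequence of inputs."""
    u = np.zeros(N_FAST + N_SLOW)                   # internal states
    tau = np.r_[np.full(N_FAST, TAU_FAST), np.full(N_SLOW, TAU_SLOW)]
    outputs = []
    for x in inputs:
        z = np.concatenate([np.tanh(u), pb, x])     # recurrent + PB + sensory input
        u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ z)
        outputs.append(np.tanh(u[:N_FAST]))         # read outputs from the fast units
    return np.array(outputs)

traj = rollout(pb=np.array([0.3, -0.1]), inputs=rng.normal(size=(30, N_IN)))
print(traj.shape)   # (30, 20)
```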
C1 [Nishide, Shun; Takahashi, Toru; Okuno, Hiroshi G.; Ogata, Tetsuya] Kyoto Univ,
Grad Sch Informat, Dept Intelligence Sci & Technol, Kyoto, Japan.
[Tani, Jun] RIKEN, Brain Sci Inst, Lab Behav & Dynam Cognit, Saitama, Japan.
RP Nishide, S (reprint author), Kyoto Univ, Grad Sch Informat, Dept Intelligence
Sci & Technol, Kyoto, Japan.
EM nishide@kuis.kyoto-u.ac.jp; tani@brain.riken.jp;
tall@kuis.kyoto-u.ac.jp; okuno@kuis.kyoto-u.ac.jp;
ogata@kuis.kyoto-u.ac.jp
RI Tani, Jun/H-3681-2012
OI Ogata, Tetsuya/0000-0001-7015-0379; Okuno, Hiroshi/0000-0002-8704-4318
FU JST PRESTO (Information Environment and Humans); [19GS0208];
[21300076]
FX This work was supported by JST PRESTO (Information Environment and
Humans), Grant-in-Aid for Creative Scientific Research (19GS0208), and
Grant-in-Aid for Scientific Research (B) (21300076).
CR Arie H, 2009, NEW MATH NAT COMPUT, V5, P307, DOI 10.1142/S1793005709001283
Bajcsy R., 1988, IEEE P, V76, P996
Beck BB., 1980, ANIMAL TOOL BEHAV US
Berti A, 2000, J COGNITIVE NEUROSCI, V12, P415, DOI 10.1162/089892900562237
Gibson JJ, 1966, SENSES CONSIDERED PE
Hikita M., 2008, P IEEE RSJ INT C IRO, P2041
Johnson-Frey SH, 2004, TRENDS COGN SCI, V8, P71, DOI 10.1016/j.tics.2003.12.002
Jordan M., 1986, P 8 ANN C COGN SCI S, P513
Kohonen T., 1988, SELF ORG ASS MEMORY
Luciw MD, 2008, INT C DEVEL LEARN, P115, DOI 10.1109/DEVLRN.2008.4640815
Mah CD, 2003, BIOL CYBERN, V88, P60, DOI [10.1007/s00422-002-0347-9,
10.1007/s00442-002-0347-9]
Maravita A, 2004, TRENDS COGN SCI, V8, P79, DOI 10.1016/j.tics.2003.12.008
Michaels CF, 2007, PERCEPTION, V36, P750, DOI 10.1068/p5593
Nabeshima C., 2007, P 6 IEEE INT C DEV L, P288
Nishide S, 2008, ADV ROBOTICS, V22, P527, DOI 10.1163/156855308X294879
Ogata T, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P162
Ogata T., 2007, P IEEE RSJ INT C INT
POLLACK JB, 1991, MACH LEARN, V7, P227, DOI 10.1023/A:1022651113306
Rumelhart D. E., 1986, PARALLEL DISTRIBUTED
Stoytchev A, 2005, IEEE INT CONF ROBOT, P3060
Streri A, 2005, INFANT BEHAV DEV, V28, P290, DOI 10.1016/j.infbeh.2005.05.004
Takano W., 2008, P IEEE RAS INT C HUM, P708
Tani J, 1996, IEEE T SYST MAN CY B, V26, P421, DOI 10.1109/3477.499793
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Weng J., 2009, P IEEE 8 INT C DEV L, P1
Yamashita Y, 2008, PLOS COMPUT BIOL, V4, DOI 10.1371/journal.pcbi.1000220
NR 26
TC 11
Z9 11
U1 0
U2 2
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1943-0604
EI 1943-0612
J9 IEEE T AUTON MENT DE
JI IEEE Trans. Auton. Ment. Dev.
PD JUN
PY 2012
VL 4
IS 2
BP 139
EP 149
DI 10.1109/TAMD.2011.2177660
PG 11
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA 960NT
UT WOS:000305395900003
DA 2018-01-22
ER

PT J
AU Yamashita, Y
Tani, J
AF Yamashita, Yuichi
Tani, Jun
TI Spontaneous Prediction Error Generation in Schizophrenia
SO PLOS ONE
LA English
DT Article
ID ANTERIOR CINGULATE CORTEX; AUDITORY HALLUCINATIONS; SYNAPTIC PLASTICITY;
MOTOR CONTROL; MODELS; DISCONNECTION; DEFICITS; DYSCONNECTION;
CONNECTIVITY; PERFORMANCE
AB Goal-directed human behavior is enabled by hierarchically-organized neural
systems that process executive commands associated with higher brain areas in
response to sensory and motor signals from lower brain areas. Psychiatric diseases
and psychotic conditions are postulated to involve disturbances in these
hierarchical network interactions, but the mechanism by which aberrant disease
signals are generated in networks, and a systems-level framework linking disease
signals to specific psychiatric symptoms, remain undetermined. In this study, we
show that neural networks containing schizophrenia-like deficits can spontaneously
generate uncompensated error signals with properties that explain psychiatric
disease symptoms, including fictive perception, altered sense of self, and
unpredictable behavior. To distinguish dysfunction at the behavioral versus network
level, we monitored the interactive behavior of a humanoid robot driven by the
network. Mild perturbations in network connectivity resulted in the spontaneous
appearance of uncompensated prediction errors and altered interactions within the
network without external changes in behavior, correlating to the fictive sensations
and agency experienced by episodic disease patients. In contrast, more severe
deficits produced unstable network dynamics, resulting in overt changes in
behavior similar to those observed in chronic disease patients. These findings
demonstrate that prediction error disequilibrium may represent an intrinsic
property of schizophrenic brain networks reporting the severity and variability of
disease symptoms. Moreover, these results support a systems-level model for
psychiatric disease that features the spontaneous generation of maladaptive signals
in hierarchical neural networks.
C1 [Yamashita, Yuichi; Tani, Jun] RIKEN Brain Sci Inst, Lab Behav & Dynam Cognit,
Wako, Saitama, Japan.
RP Yamashita, Y (reprint author), JST, ERATO, Okanoya Emot Informat Project, Wako,
Saitama, Japan.
EM tani1216jp@gmail.com
RI Tani, Jun/H-3681-2012
FU KAKENHI [23700279]
FX This work was supported by KAKENHI (#23700279). The funders had no role
in study design, data collection and analysis, decision to publish, or
preparation of the manuscript.
CR Banyai M, 2011, NEUROIMAGE, V58, P870, DOI 10.1016/j.neuroimage.2011.06.046
Bassett DS, 2011, TRENDS COGN SCI, V15, P200, DOI 10.1016/j.tics.2011.03.006
Becerril KE, 2011, NEUROIMAGE, V54, P1495, DOI 10.1016/j.neuroimage.2010.09.046
Blakemore SJ, 2000, PSYCHOL MED, V30, P1131, DOI 10.1017/S0033291799002676
Botvinick MM, 2008, TRENDS COGN SCI, V12, P201, DOI 10.1016/j.tics.2008.02.009
Carter CS, 1998, SCIENCE, V280, P747, DOI 10.1126/science.280.5364.747
Churchland MM, 2010, NEURON, V68, P387, DOI 10.1016/j.neuron.2010.09.015
Cocchi L, 2012, HUM BRAIN MAPP, V33, P1089, DOI 10.1002/hbm.21270
Corlett PR, 2007, BRAIN, V130, P2387, DOI 10.1093/brain/awm173
Corlett PR, 2011, NEUROPSYCHOPHARMACOL, V36, P294, DOI 10.1038/npp.2010.163
Courchesne E, 2005, CURR OPIN NEUROBIOL, V15, P225, DOI
10.1016/j.conb.2005.03.001
Eagleman DM, 2000, SCIENCE, V287, P2036, DOI 10.1126/science.287.5460.2036
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Fetz EE, 2002, HDB BRAIN THEORY NEU
Fourneret P, 2001, NEUROREPORT, V12, P1203, DOI 10.1097/00001756-200105080-00030
Friston K, 2011, NEURON, V72, P488, DOI 10.1016/j.neuron.2011.10.018
Friston KJ, 2009, TRENDS COGN SCI, V13, P293, DOI 10.1016/j.tics.2009.04.005
FRISTON KJ, 1995, CLIN NEUROSCI, V3, P89
Friston KJ, 2005, BEHAV BRAIN SCI, V28, P764
Frith CD, 2000, PHILOS T R SOC B, V355, P1771, DOI 10.1098/rstb.2000.0734
Fuster JM, 2001, NEURON, V30, P319, DOI 10.1016/S0896-6273(01)00285-9
Gallagher S, 2004, PSYCHOPATHOLOGY, V37, P8, DOI 10.1159/000077014
Goto Y., 2009, BIOL PSYCHIAT, V67, P199
Jeannerod M, 2009, EXP BRAIN RES, V192, P527, DOI 10.1007/s00221-008-1533-3
JORDAN MI, 1992, COGNITIVE SCI, V16, P307, DOI 10.1016/0364-0213(92)90036-T
Kanai R, 2004, VISION RES, V44, P2605, DOI 10.1016/j.visres.2003.10.028
Kawato M, 1999, CURR OPIN NEUROBIOL, V9, P718, DOI 10.1016/S0959-4388(99)00028-8
Kerns JG, 2005, AM J PSYCHIAT, V162, P1833, DOI 10.1176/appi.ajp.162.10.1833
Kilner James M, 2007, Cogn Process, V8, P159, DOI 10.1007/s10339-007-0170-2
Lawrie SM, 2002, BIOL PSYCHIAT, V51, P1008, DOI 10.1016/S0006-3223(02)01316-1
Mazaheri A, 2010, BIOL PSYCHIAT, V67, P617, DOI 10.1016/j.biopsych.2009.11.022
Quintana J, 2003, BIOL PSYCHIAT, V53, P12, DOI 10.1016/S0006-3223(02)01435-X
Rao RPN, 1999, NAT NEUROSCI, V2, P79, DOI 10.1038/4580
Rumelhart D. E., 1986, PARALLEL DISTRIBUTED
Schneider K, 1991, KLINISCHE PSYCHOPATH
Shergill SS, 2005, AM J PSYCHIAT, V162, P2384, DOI 10.1176/appi.ajp.162.12.2384
Spence SA, 1997, BRAIN, V120, P1997, DOI 10.1093/brain/120.11.1997
Stephan KE, 2009, SCHIZOPHRENIA BULL, V35, P509, DOI 10.1093/schbul/sbn176
Stephan KE, 2006, BIOL PSYCHIAT, V59, P929, DOI 10.1016/j.biopsych.2005.10.005
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
Tani J, 2008, NEURAL NETWORKS, V21, P584, DOI 10.1016/j.neunet.2008.03.008
Umbricht D, 2000, ARCH GEN PSYCHIAT, V57, P1139, DOI 10.1001/archpsyc.57.12.1139
Vercammen A, 2010, BIOL PSYCHIAT, V67, P912, DOI 10.1016/j.biopsych.2009.11.017
Williams LE, 2010, SCHIZOPHR RES, V121, P101, DOI 10.1016/j.schres.2009.10.021
Yamashita Y, 2011, FRONT COMPUT NEUROSC, V5, DOI 10.3389/fncom.2011.00018
Yamashita Y, 2008, PLOS COMPUT BIOL, V4, DOI 10.1371/journal.pcbi.1000220
Yung AR, 1996, SCHIZOPHRENIA BULL, V22, P353, DOI 10.1093/schbul/22.2.353
NR 47
TC 6
Z9 6
U1 0
U2 12
PU PUBLIC LIBRARY SCIENCE
PI SAN FRANCISCO
PA 1160 BATTERY STREET, STE 100, SAN FRANCISCO, CA 94111 USA
SN 1932-6203
J9 PLOS ONE
JI PLoS One
PD MAY 30
PY 2012
VL 7
IS 5
AR e37843
DI 10.1371/journal.pone.0037843
PG 8
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 959YF
UT WOS:000305353400052
PM 22666398
OA gold
DA 2018-01-22
ER

PT J
AU Sugimoto, N
Morimoto, J
Hyon, SH
Kawato, M
AF Sugimoto, Norikazu
Morimoto, Jun
Hyon, Sang-Ho
Kawato, Mitsuo
TI The eMOSAIC model for humanoid robot control
SO NEURAL NETWORKS
LA English
DT Article
DE Modular architecture; Nonlinear and non-stationary control problem;
Humanoid robot; Computational neuroscience
ID INTERNAL-MODELS; REINFORCEMENT; SPACE
AB In this study, we propose an extension of the MOSAIC architecture to control
real humanoid robots. MOSAIC was originally proposed by neuroscientists to
understand the human ability of adaptive control. The modular architecture of the
MOSAIC model can be useful for solving nonlinear and non-stationary control
problems. Both humans and humanoid robots have nonlinear body dynamics and many
degrees of freedom. Since they can interact with environments (e.g., carrying
objects), control strategies need to deal with non-stationary dynamics. Therefore,
MOSAIC has strong potential as a human motor-control model and a control framework
for humanoid robots. Yet application of the MOSAIC model has been limited to simple
simulated dynamics, since it is susceptible to observation noise and cannot be
applied to partially observable systems. Our approach introduces state estimators
into the MOSAIC architecture to cope with real environments. By using an extended
MOSAIC model, we are able to successfully generate squatting and object-carrying
behaviors on a real humanoid robot. (C) 2012 Elsevier Ltd. All rights reserved.
C1 [Sugimoto, Norikazu] Natl Inst Commun Telecommun, Kyoto 6190288, Japan.
[Sugimoto, Norikazu; Morimoto, Jun; Hyon, Sang-Ho; Kawato, Mitsuo] ATR Computat
Neurosci Labs, Kyoto 6190288, Japan.
[Hyon, Sang-Ho] Ritsumeikan Univ, Dept Robot, Kusatsu, Shiga 5258577, Japan.
RP Sugimoto, N (reprint author), Natl Inst Commun Telecommun, 2-2-2 Hikaridai Seiko
Cho, Kyoto 6190288, Japan.
EM xsugi@nict.go.jp; xmorimo@atr.jp; gen@fc.ritsumei.ac.jp; kawato@atr.jp
FU Strategic Research Program for Brain Sciences (SRPBS); MEXT KAKENHI
[23120004]
FX This research was supported by the Strategic Research Program for Brain
Sciences (SRPBS). This work was partially supported by MEXT KAKENHI
23120004. The authors gratefully acknowledge N. Nakano for assistance
with the experimental setup.
CR Astrom K., 1989, ADAPTIVE CONTROL
Atkeson C. G., 2007, IEEE RAS 2007 INT C, P57
Chesi G, 2010, IEEE T AUTOMAT CONTR, V55, P2500, DOI 10.1109/TAC.2010.2046926
CRUZ JB, 1964, IEEE T AUTOMAT CONTR, VAC 9, P216, DOI 10.1109/TAC.1964.1105720
Dietterich TG, 2000, J ARTIF INTELL RES, V13, P227
Doya K, 2000, NEURAL COMPUT, V12, P219, DOI 10.1162/089976600300015961
Doya K, 2002, NEURAL COMPUT, V14, P1347, DOI 10.1162/089976602753712972
FUJII T, 1984, SIAM J CONTROL OPTIM, V22, P327, DOI 10.1137/0322022
Ghahramani Z, 2000, NEURAL COMPUT, V12, P831, DOI 10.1162/089976600300015619
Grimes D. B., 2006, P ROB SCI SYST RSS 0
Haruno M., 2001, NEURAL COMPUTATION, V13
Hauskrecht M., 1998, Uncertainty in Artificial Intelligence. Proceedings of the
Fourteenth Conference (1998), P220
Hyon S.H., 2009, IEEE ICRA, P1549, DOI 10.1109/ROBOT.2009.5152434
Imamizu H, 2004, J NEUROSCI, V24, P1173, DOI 10.1523/JNEUROSCI.4011-03.2004
Imamizu H, 2007, EXP BRAIN RES, V181, P395, DOI 10.1007/s00221-007-0940-1
Kalman R., 1961, ASME, V83, P95
Lewis F. L., 1986, OPTIMAL ESTIMATION
Lim S., 1998, AM CONTR C
Morimoto J, 2001, ROBOT AUTON SYST, V36, P37, DOI 10.1016/S0921-8890(01)00113-0
MORSE AS, 1996, LECT NOTES CONTROL I
Nakaoka S., 2004, P INT C VIRT SYST MU
PERKINS WR, 1971, IEEE T AUTOMAT CONTR, VAC16, P659, DOI
10.1109/TAC.1971.1099823
Rugh W. J., 1991, IEEE Control Systems Magazine, V11, P79, DOI 10.1109/37.103361
Rugh WJ, 2000, AUTOMATICA, V36, P1401, DOI 10.1016/S0005-1098(00)00058-3
Samejima K, 2003, NEURAL NETWORKS, V16, P985, DOI 10.1016/S0893-6080(02)00235-6
Shamma J. S., 1992, IEEE CONTROL SYSTEMS, V6, P101
Slotine J.J.E., 1991, APPL NONLINEAR CONTR
Stephens B., 2007, IEEE RAS 2007 INT C, P589
Sutton RS, 1999, ARTIF INTELL, V112, P181, DOI 10.1016/S0004-3702(99)00052-1
Thrun S., 2005, PROBABILISTIC ROBOTI
Wiering M, 1997, ADAPT BEHAV, V6, P219, DOI 10.1177/105971239700600202
WOLPERT DM, 1995, SCIENCE, V269, P1880, DOI 10.1126/science.7569931
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
NR 33
TC 12
Z9 12
U1 0
U2 3
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0893-6080
EI 1879-2782
J9 NEURAL NETWORKS
JI Neural Netw.
PD MAY
PY 2012
VL 29-30
BP 8
EP 19
DI 10.1016/j.neunet.2012.01.002
PG 12
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA 944YL
UT WOS:000304239600004
PM 22366503
DA 2018-01-22
ER

PT J
AU Arie, H
Arakaki, T
Sugano, S
Tani, J
AF Arie, Hiroaki
Arakaki, Takafumi
Sugano, Shigeki
Tani, Jun
TI Imitating others by composition of primitive actions: A neuro-dynamic
model
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Cognitive robotics; Neural network; Imitation; Dynamical system
ID POSTERIOR PARIETAL CORTEX; SELF-ORGANIZATION; MOTOR; BEHAVIOR;
MOVEMENTS; REVIEWS; SYSTEMS
AB This paper introduces a novel neuro-dynamical model that accounts for possible
mechanisms of action imitation and learning. Imitation learning is considered to
require at least two classes of generalization. One is generalization over
sensory-motor trajectory variances; the other operates at the cognitive level and
concerns a more qualitative understanding of compositional actions, by oneself and
by others, that does not necessarily depend on exact trajectories. This paper
describes a possible model dealing with these classes of generalization by focusing
on the problem of action compositionality. The model was evaluated in experiments
using a small humanoid robot. The robot was trained on a set of different
object-manipulation actions that can be decomposed into sequences of action
primitives. The robot was then asked to imitate a novel compositional action,
demonstrated by a human subject, that is composed of previously learned action
primitives. The results showed that the novel action can be successfully imitated by
decomposing it into, and recomposing it from, the primitives by means of a unified
intentional representation hosted by mirror neurons, even though the trajectory-level
appearance differs between observed and self-generated actions. (C) 2011 Elsevier
B.V. All rights reserved.
C1 [Arie, Hiroaki; Tani, Jun] RIKEN, Brain Sci Inst, Lab Behav & Dynam Cognit,
Wako, Saitama, Japan.
[Arakaki, Takafumi; Sugano, Shigeki] Waseda Univ, Dept Adv Mech Engn, Tokyo,
Japan.
[Sugano, Shigeki] Waseda Univ, Dept Mech Engn, Tokyo, Japan.
[Sugano, Shigeki] Waseda Univ, Humanoid Robot Inst, Tokyo, Japan.
RP Arie, H (reprint author), RIKEN, Brain Sci Inst, Lab Behav & Dynam Cognit, Wako,
Saitama, Japan.
EM arie@bdc.brain.riken.jp
RI Tani, Jun/H-3681-2012
FU Ministry of Education, Culture, Sports, Science and Technology (MEXT),
Japan
FX This work was supported in part by funds from the Grant-in-Aid for
Scientific Research in Priority Areas "Emergence of Adaptive Motor
Function through Interaction among the Body, Brain and Environment: A
Constructive Approach to the Undergoing of Mobiligence" from Ministry of
Education, Culture, Sports, Science and Technology (MEXT), Japan.
CR Arbib M., 1981, HDB PHYSL NERVOUS SY, P1448
Arie H., 2009, CREATING NOVEL GOAL
Billard A, 2001, ROBOT AUTON SYST, V37, P145, DOI 10.1016/S0921-8890(01)00155-5
CALINON S, 2007, IEEE T SYSTEMS MAN B, V37
Cangelosi A, 2000, CONNECT SCI, V12, P143, DOI 10.1080/09540090050129763
COLBY CL, 1993, J NEUROPHYSIOL, V69, P902
CRUTCHFIELD JP, 1989, PHYS REV LETT, V63, P105, DOI 10.1103/PhysRevLett.63.105
Nehaniv C., 2002, IMITATION ANIMALS AR
DOHERTY P, 1999, LINKOPING ELECT ARTI, V4
DOYA K, 1989, P 1989 INT JOINT C N, V1, P27
Ehrsson HH, 2003, J NEUROPHYSIOL, V90, P2978, DOI 10.1152/jn.00958.2002
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Eskandar EN, 1999, NAT NEUROSCI, V2, P88
Evans G., 1981, WITTGENSTEIN FOLLOW, P118
Fagg AH, 1998, NEURAL NETWORKS, V11, P1277, DOI 10.1016/S0893-6080(98)00047-1
Fuster J. M., 1989, PREFRONTAL CORTEX
Harnad S, 1990, PHYSICA D, V42, P335, DOI 10.1016/0167-2789(90)90087-6
Hao B, 1989, ELEMENTARY SYMBOLIC
Harnad S., 1993, ARTIFICIAL LIFE ROUT
Imazu S, 2007, CORTEX, V43, P301, DOI 10.1016/S0010-9452(08)70456-8
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
MATARIC MJ, 1992, IEEE T ROBOTIC AUTOM, V8, P304, DOI 10.1109/70.143349
Murata A., 2007, REPRESENTATION BRAIN, P151
Mushiake M.I.H., 1991, J NEUROPHYSIOL, V66, P705
Namikawa J, 2008, NEURAL NETWORKS, V21, P1466, DOI 10.1016/j.neunet.2008.09.005
Nishimoto R, 2008, ADAPT BEHAV, V16, P166, DOI 10.1177/1059712308089185
Ogawa K, 2007, J COGNITIVE NEUROSCI, V19, P1827, DOI
10.1162/jocn.2007.19.11.1827
Oztop E, 2005, COGNITIVE BRAIN RES, V22, P129, DOI
10.1016/j.cogbrainres.2004.08.004
POLLACK JB, 1991, MACH LEARN, V7, P227, DOI 10.1023/A:1022651113306
RAO R, 2004, IMITATION SOCIAL LEA
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Rumelhart DE, 1986, PARALLEL DISTRIBUTED, V1
SAARINEN J, 1985, PERCEPTION, V14, P711, DOI 10.1068/p140711
Shepard RN, 1982, MENTAL IMAGES THEIR
Sun R., 2002, CONNECTIONIST IMPLEM
Tani J, 1996, IEEE T SYST MAN CY B, V26, P421, DOI 10.1109/3477.499793
Tani J, 2004, NEURAL NETWORKS, V17, P1273, DOI 10.1016/j.neunet.2004.05.007
Tani J, 1999, NEURAL NETWORKS, V12, P1131, DOI 10.1016/S0893-6080(99)00060-X
TANI J, 1995, BIOL CYBERN, V72, P365, DOI 10.1007/s004220050138
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
Tani J., 1998, J CONSCIOUSNESS STUD, V5, P516
Tani J, 2008, NEURAL NETWORKS, V21, P584, DOI 10.1016/j.neunet.2008.03.008
Williams RJ, 1989, NEURAL COMPUT, V1, P270, DOI 10.1162/neco.1989.1.2.270
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yamashita Y., 2008, PLOS COMPUTATIONAL B, V4
NR 46
TC 10
Z9 10
U1 2
U2 7
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD MAY
PY 2012
VL 60
IS 5
SI SI
BP 729
EP 741
DI 10.1016/j.robot.2011.11.005
PG 13
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 929RD
UT WOS:000303082300010
DA 2018-01-22
ER

PT J
AU Perrin, N
Stasse, O
Baudouin, L
Lamiraux, F
Yoshida, E
AF Perrin, Nicolas
Stasse, Olivier
Baudouin, Leo
Lamiraux, Florent
Yoshida, Eiichi
TI Fast Humanoid Robot Collision-Free Footstep Planning Using Swept Volume
Approximations
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Footstep generation; humanoid robots; motion planning; obstacle
avoidance
AB In this paper, we propose a novel and coherent framework for fast footstep
planning for legged robots on flat ground with 3-D obstacle avoidance. We use
swept volume approximations that are computed offline in order to considerably
reduce the time spent in collision checking during the online planning phase, in
which a rapidly exploring random tree variant is used to find collision-free
sequences of half-steps (which are produced by a specific walking pattern
generator). Then, an original homotopy is used to smooth the sequences into natural
motions, gently avoiding the obstacles. The results are experimentally validated on
the robot HRP-2.
C1 [Perrin, Nicolas; Stasse, Olivier; Baudouin, Leo; Lamiraux, Florent] Univ
Toulouse UPS, INSA, INP, ISAE,CNRS,LAAS, F-31077 Toulouse, France.
[Perrin, Nicolas; Yoshida, Eiichi] CRT, UMI3218, AIST Joint Robot Lab, CNRS,
Tsukuba, Ibaraki 3058568, Japan.
RP Perrin, N (reprint author), Univ Toulouse UPS, INSA, INP, ISAE,CNRS,LAAS, F-
31077 Toulouse, France.
EM nperrin@laas.fr; ostasse@laas.fr; leo.baudouin@ifma.fr; florent@laas.fr;
e.yoshida@aist.go.jp
RI Yoshida, Eiichi/M-3756-2016
OI Yoshida, Eiichi/0000-0002-3077-6964
FU RBLINK [ANR-08-JCJC-0075-01]
FX This work was supported by a grant from the RBLINK Project under
Contract ANR-08-JCJC-0075-01.
CR AYAZ Y, 2006, P IEEE RSJ INT C INT, P5490
Baudouin L., 2011, P 11 IEEE RAS INT C
Benallegue M., 2009, P IEEE INT C ROB AUT, P483
Bourgeot JM, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2509, DOI 10.1109/IRDS.2002.1041646
CHESTNUTT J, 2007, THESIS CARNEGIE MELL
Chestnutt J., 2007, P IEEE RAS INT C HUM, P196
Chestnutt J., 2003, P IEEE INT C HUM ROB
Chestnutt J., 2005, P IEEE INT C ROB AUT, P631
Chestnutt J, 2006, IEEE INT CONF ROBOT, P860, DOI 10.1109/ROBOT.2006.1641817
Elmogy M., 2009, P 2009 IEEE INT C IN, P3531
Frisken SF, 2000, COMP GRAPH, P249
Gonzalez J. P., 2011, P 4 ANN S COMB SEARC
Harada K., 2010, MOTION PLANNING HUMA, P192
Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
Hasegawa T, 2003, PROCEEDINGS OF THE 2003 IEEE INTERNATIONAL SYMPOSIUM ON
ASSEMBLY AND TASK PLANNING (ISATP2003), P259, DOI 10.1109/ISATP.2003.1217221
Herdt A, 2010, P IEEE RSJ INT C INT, P190
Himmelstein JC, 2010, IEEE T AUTOM SCI ENG, V7, P177, DOI
10.1109/TASE.2008.2010553
Jaillet L., 2006, P 7 WORKSH ALG FDN R
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Karaman S., 2010, P ROBOT SCI SYST 6 C
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
KIM Y, 2003, P ACM S SOL MOD APPL, P11
Kuffner JJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P500, DOI
10.1109/IROS.2001.973406
Larsen E., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P3719, DOI 10.1109/ROBOT.2000.845311
Lauterbach C, 2010, COMPUT GRAPH FORUM, V29, P419, DOI 10.1111/j.1467-
8659.2009.01611.x
LaValle SM, 2000, P WORKSH ALG FDN ROB, P293
NAKAMURA Y, 1987, INT J ROBOT RES, V6, P32, DOI 10.1177/027836498700600103
Nishiwaki K, 2001, IEEE INT CONF ROBOT, P4110, DOI 10.1109/ROBOT.2001.933260
NISHIWAKI K, 1999, P IEEE INT C SYST MA, P902
Perrin N., 2010, ADAPTIVE SAMPLING BA, VTech. Rep.
Perrin N., 2011, P IEEE INT C ROB AUT, P1270
Perrin N, 2010, IEEE INT CONF ROBOT, P4243, DOI 10.1109/ROBOT.2010.5509395
Schmidl H., 2004, Journal of Graphics Tools, V9, P1
Schwarzer F., 2002, P 5 WORKSH ALG FDN R
Sucan I. A., 2008, P 8 WORKSH ALG FDN R
Tang M., 2010, P 9 WORKSH ALG FDN R, P229
Vukobratovic M, 2004, INT J HUM ROBOT, V1, P157
Xia ZY, 2009, IEEE ASME INT C ADV, P168, DOI 10.1109/AIM.2009.5230019
Yoshida E., 2005, P 2005 IEEE RAS INT, P1
Zhang XY, 2007, ACM T GRAPHIC, V26, DOI 10.1145/1239451.1239466
NR 40
TC 32
Z9 33
U1 0
U2 9
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD APR
PY 2012
VL 28
IS 2
BP 427
EP 439
DI 10.1109/TRO.2011.2172152
PG 13
WC Robotics
SC Robotics
GA 921VC
UT WOS:000302505100012
DA 2018-01-22
ER

PT J
AU Saygin, AP
Chaminade, T
Ishiguro, H
Driver, J
Frith, C
AF Saygin, Ayse Pinar
Chaminade, Thierry
Ishiguro, Hiroshi
Driver, Jon
Frith, Chris
TI The thing that should not be: predictive coding and the uncanny valley
in perceiving human and humanoid robot actions
SO SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE
LA English
DT Article
DE functional magnetic resonance imaging (fMRI); repetition suppression;
action perception; predictive coding; temporal cortex; anterior
intraparietal sulcus; mirror neuron system; extrastriate body area
ID BIOLOGICAL MOTION PERCEPTION; MIRROR-NEURON SYSTEM; HUMAN PREMOTOR
CORTEX; PARIETAL CORTEX; MOTOR CONTROL; FRONTOPARIETAL CORTEX;
INTRAPARIETAL SULCUS; SOCIAL-INTERACTION; MACAQUE MONKEY; FRONTAL-CORTEX
AB Using functional magnetic resonance imaging (fMRI) repetition suppression, we
explored the selectivity of the human action perception system (APS), which
consists of temporal, parietal and frontal areas, for the appearance and/or motion
of the perceived agent. Participants watched body movements of a human (biological
appearance and movement), a robot (mechanical appearance and movement) or an
android (biological appearance, mechanical movement). With the exception of the
extrastriate body area, which showed more suppression for human-like appearance,
the APS was not selective for appearance or motion per se. Instead, distinctive
responses were found to the mismatch between appearance and motion: whereas
suppression effects for the human and robot were similar to each other, they were
stronger for the android, notably in bilateral anterior intraparietal sulcus, a key
node in the APS. These results could reflect increased prediction error as the
brain negotiates an agent that appears human, but does not move biologically, and
help explain the 'uncanny valley' phenomenon.
C1 [Saygin, Ayse Pinar] Univ Calif San Diego, Dept Cognit Sci, La Jolla, CA 92093
USA.
[Saygin, Ayse Pinar] Univ Calif San Diego, Neurosci Program, La Jolla, CA 92093
USA.
[Saygin, Ayse Pinar; Chaminade, Thierry; Driver, Jon; Frith, Chris] UCL,
Wellcome Trust Ctr Neuroimaging, London, England.
[Chaminade, Thierry] Aix Marseille Univ, CNRS, Mediterranean Inst Cognit
Neurosci INCM, Marseille, France.
[Ishiguro, Hiroshi] Osaka Univ, Dept Syst Innovat, Osaka, Japan.
[Ishiguro, Hiroshi] ATR, Intelligent Robot & Commun Labs, Keihanna Sci City,
Kyoto, Japan.
[Frith, Chris] Univ Aarhus, DK-8000 Aarhus C, Denmark.
RP Saygin, AP (reprint author), Univ Calif San Diego, Dept Cognit Sci, 9500 Gilman
Dr,MC 0515, La Jolla, CA 92093 USA.
EM saygin@cogsci.ucsd.edu
RI Chaminade, Thierry/F-8367-2010; Saygin, Ayse Pinar/A-7126-2009
OI Chaminade, Thierry/0000-0003-4952-1467; Saygin, Ayse
Pinar/0000-0002-0859-9500
FU Kavli Institute for Brain and Mind, University of California, San Diego;
California Institute for Telecommunication and Information Technology
(Calit2); Danish National Research Foundation; Japan Society for the
Promotion of Science; Royal Society
FX The authors thank Patrick Haggard, Antonia Hamilton, James Kilner,
Takashi Minato, Javier Movellan, Marty Sereno, Osaka University
Intelligent Robotics Laboratory and the Wellcome Trust Centre for
Neuroimaging. This research was funded by an innovative research grant
to APS from the Kavli Institute for Brain and Mind, University of
California, San Diego. APS was additionally supported by California
Institute for Telecommunication and Information Technology (Calit2); CF
by Danish National Research Foundation; HI by Japan Society for the
Promotion of Science; JD by The Royal Society. Repliee Q2 was developed
in collaboration with Kokoro Inc.
CR Aitkenhead MJ, 2006, ARTIF INTELL REV, V25, P247, DOI 10.1007/s10462-007-9063-0
Bar M, 2009, PHILOS T R SOC B, V364, P1181, DOI 10.1098/rstb.2008.0321
Barlow H, 2001, BEHAV BRAIN SCI, V24, P602
Barsalou LW, 2009, PHILOS T R SOC B, V364, P1281, DOI 10.1098/rstb.2008.0319
Bartels A, 2008, TRENDS NEUROSCI, V31, P444, DOI 10.1016/j.tins.2008.06.004
Blake R, 2007, ANNU REV PSYCHOL, V58, P47, DOI
10.1146/annurev.psych.57.102904.190152
Buccino G, 2004, J COGNITIVE NEUROSCI, V16, P114, DOI 10.1162/089892904322755601
Calvo-Merino B, 2006, CURR BIOL, V16, P1905, DOI 10.1016/j.cub.2006.07.065
Candidi M, 2008, SOC NEUROSCI, V3, P388, DOI 10.1080/17470910701676269
Catmur C, 2007, CURR BIOL, V17, P1527, DOI 10.1016/j.cub.2007.08.006
Chaminade T, 2007, SOC COGN AFFECT NEUR, V2, P206, DOI 10.1093/scan/nsm017
Chaminade T, 2006, INTERACT STUD, V7, P347, DOI 10.1075/is.7.3.07cha
Chaminade T, 2010, PLOS ONE, V5, DOI 10.1371/journal.pone.0011577
Chong TTJ, 2008, CURR BIOL, V18, P1576, DOI 10.1016/j.cub.2008.08.068
Coradeschi S, 2006, IEEE INTELL SYST, V21, P74, DOI 10.1109/MIS.2006.72
Craighero L, 2007, PROG BRAIN RES, V164, P39, DOI 10.1016/S0079-6123(07)64003-5
Cross ES, 2006, NEUROIMAGE, V31, P1257, DOI 10.1016/j.neuroimage.2006.01.033
Culham JC, 2006, CURR OPIN NEUROBIOL, V16, P205, DOI 10.1016/j.conb.2006.03.005
Dautenhahn K, 2007, PHILOS T R SOC B, V362, P679, DOI 10.1098/rstb.2006.2004
DESIMONE R, 1995, ANNU REV NEUROSCI, V18, P193, DOI
10.1146/annurev.ne.18.030195.001205
Dinstein I, 2007, J NEUROPHYSIOL, V98, P1415, DOI 10.1152/jn.00238.2007
Freud S., 1919, Das Unheimliche. SE, V17, P217, Patent No. 17217256
Friston KJ, 2010, NAT REV NEUROSCI, V11, P127, DOI 10.1038/nrn2787
Friston KJ, 2005, PHILOS T R SOC B, V360, P815, DOI 10.1098/rstb.2005.1622
Fujii N, 2008, SOC NEUROSCI, V3, P250, DOI 10.1080/17470910701434610
Gazzola V, 2007, NEUROIMAGE, V35, P1674, DOI 10.1016/j.neuroimage.2007.02.003
Gibson J. J., 1979, ECOLOGICAL APPROACH
Grafton ST, 2007, HUM MOVEMENT SCI, V26, P590, DOI 10.1016/j.humov.2007.05.009
Grefkes C, 2005, J ANAT, V207, P3, DOI 10.1111/j.1469-7580.2005.00426.x
GREGORY RL, 1980, PHILOS T ROY SOC B, V290, P181, DOI 10.1098/rstb.1980.0090
Grill-Spector K, 2006, TRENDS COGN SCI, V10, P14, DOI 10.1016/j.tics.2005.11.006
Grosse-Wilde E, 2010, FRONT CELL NEUROSCI, V4, DOI 10.3389/fncel.2010.00022
Hamilton AFD, 2006, J NEUROSCI, V26, P1133, DOI 10.1523/JNEUROSCI.4551-05.2006
Hamilton AFD, 2008, CEREB CORTEX, V18, P1160, DOI 10.1093/cercor/bhm150
Henson RNA, 2003, NEUROPSYCHOLOGIA, V41, P263, DOI 10.1016/S0028-3932(02)00159-8
Hetfield J., 1986, MASTER PUPPETS
Ho CC, 2010, COMPUT HUM BEHAV, V26, P1508, DOI 10.1016/j.chb.2010.05.015
Ishiguro H, 2006, CONNECT SCI, V18, P319, DOI 10.1080/09540090600873953
Jakobs O, 2009, NEUROIMAGE, V47, P667, DOI 10.1016/j.neuroimage.2009.04.065
Jastorff J, 2009, J NEUROSCI, V29, P7315, DOI 10.1523/JNEUROSCI.4870-08.2009
Jentsch E., 1995, ANGELAKI, V2, P7
Kahn PH, 2007, INTERACT STUD, V8, P363
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T, 2008, IEEE T ROBOT, V24, P725, DOI 10.1109/TRO.2008.921566
Kawato M, 1999, CURR OPIN NEUROBIOL, V9, P718, DOI 10.1016/S0959-4388(99)00028-8
Kiebel Stefan J, 2009, Front Neuroinform, V3, P20, DOI 10.3389/neuro.11.020.2009
Kilner JM, 2007, NEUROREPORT, V18, P619, DOI 10.1097/WNR.0b013e3281139ed0
Kilner JM, 2009, J NEUROSCI, V29, P10153, DOI 10.1523/JNEUROSCI.2668-09.2009
Kilner JM, 2003, CURR BIOL, V13, P522, DOI 10.1016/S0960-9822(03)00165-9
Krekelberg B, 2006, TRENDS NEUROSCI, V29, P250, DOI 10.1016/j.tins.2006.02.008
Kriegeskorte N, 2009, NAT NEUROSCI, V12, P535, DOI 10.1038/nn.2303
Lange J, 2006, J NEUROSCI, V26, P2894, DOI 10.1523/JNEUROSCI.4915-05.2006
Lestou V, 2008, J COGNITIVE NEUROSCI, V20, P324, DOI 10.1162/jocn.2008.20021
Levi S., 2004, NEWSWEEK, V650, P305
Lovecraft H. P., 1984, DUNWICH HORROR OTHER
MacDorman KF, 2006, INTERACT STUD, V7, P297, DOI 10.1075/is.7.3.03mac
MacDorman KF, 2009, AI SOC, V23, P485, DOI 10.1007/s00146-008-0181-2
MacDorman KF, 2009, COMPUT HUM BEHAV, V25, P695, DOI 10.1016/j.chb.2008.12.026
Matelli M, 2001, NEUROIMAGE, V14, pS27, DOI 10.1006/nimg.2001.0835
Miall RC, 1996, NEURAL NETWORKS, V9, P1265, DOI 10.1016/S0893-6080(96)00035-4
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Mukamel R, 2010, CURR BIOL, V20, P750, DOI 10.1016/j.cub.2010.02.045
Oberman LM, 2007, NEUROCOMPUTING, V70, P2194, DOI 10.1016/j.neucom.2006.02.024
Peelen MV, 2007, NAT REV NEUROSCI, V8, P636, DOI 10.1038/nrn2195
Pelphrey KA, 2003, J NEUROSCI, V23, P6819
Perani D, 2001, NEUROIMAGE, V14, P749, DOI 10.1006/nimg.2001.0872
PETRIDES M, 1988, J COMP NEUROL, V273, P52, DOI 10.1002/cne.902730106
Pobric G, 2006, CURR BIOL, V16, P524, DOI 10.1016/j.cub.2006.01.033
Pollick F.E., 2009, UC MEDIA 2009, P69
Pollick F.E., 2005, INT J HUM ROBOT, V3, P277
Press C, 2007, P ROY SOC B-BIOL SCI, V274, P2509, DOI 10.1098/rspb.2007.0774
Rao RPN, 1999, NAT NEUROSCI, V2, P79, DOI 10.1038/4580
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Rozzi S, 2006, CEREB CORTEX, V16, P1389, DOI 10.1093/cercor/bhj076
Sanchez-Vives MV, 2005, NAT REV NEUROSCI, V6, P332, DOI 10.1038/nrn1651
Saygin AP, 2002, J PRAGMATICS, V34, P227, DOI 10.1016/S0378-2166(02)80001-7
Saygin AP, 2004, NEUROPSYCHOLOGIA, V42, P1788, DOI
10.1016/j.neuropschologia.2004.04.016
Saygin AP, 2004, J NEUROSCI, V24, P6181, DOI 10.1523/JNEUROSCI.0504-04.2004
Saygin A.P., 2010, ANN M COGN SCI SOC A
Saygin AP, 2007, BRAIN, V130, P2452, DOI 10.1093/brain/awm162
SELTZER B, 1994, J COMP NEUROL, V343, P445, DOI 10.1002/cne.903430308
Seyama J, 2007, PRESENCE-TELEOP VIRT, V16, P337, DOI 10.1162/pres.16.4.337
Shepard RN, 2001, BEHAV BRAIN SCI, V24, P581
Shimada S, 2010, BRAIN COGNITION, V72, P394, DOI 10.1016/j.bandc.2009.11.005
Steckenfinger SA, 2009, P NATL ACAD SCI USA, V106, P18362, DOI
10.1073/pnas.0910063106
Tai YF, 2004, CURR BIOL, V14, P117, DOI 10.1016/j.cub.2004.01.005
Tapus A, 2007, IEEE ROBOT AUTOM MAG, V14, P35, DOI 10.1109/MRA.2007.339605
Thompson R, 2009, NEUROIMAGE, V48, P436, DOI 10.1016/j.neuroimage.2009.06.066
WOLPERT DM, 1995, SCIENCE, V269, P1880, DOI 10.1126/science.7569931
Wolpert DM, 2003, PHILOS T R SOC B, V358, P593, DOI 10.1098/rstb.2002.1238
Xu YD, 2007, J NEUROSCI, V27, P5981, DOI 10.1523/JNEUROSCI.5527-06.2007
Yuille A, 2006, TRENDS COGN SCI, V10, P301, DOI 10.1016/j.tics.2006.05.002
NR 93
TC 112
Z9 112
U1 7
U2 25
PU OXFORD UNIV PRESS
PI OXFORD
PA GREAT CLARENDON ST, OXFORD OX2 6DP, ENGLAND
SN 1749-5016
EI 1749-5024
J9 SOC COGN AFFECT NEUR
JI Soc. Cogn. Affect. Neurosci.
PD APR
PY 2012
VL 7
IS 4
BP 413
EP 422
DI 10.1093/scan/nsr025
PG 10
WC Neurosciences; Psychology; Psychology, Experimental
SC Neurosciences & Neurology; Psychology
GA 926CF
UT WOS:000302808200005
PM 21515639
OA gold
DA 2018-01-22
ER

PT J
AU Kulic, D
Ott, C
Lee, DH
Ishikawa, J
Nakamura, Y
AF Kulic, Dana
Ott, Christian
Lee, Dongheui
Ishikawa, Junichi
Nakamura, Yoshihiko
TI Incremental learning of full body motion primitives and their sequencing
through human motion observation
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE humanoid robots; learning by demonstration; motion primitive learning;
stochastic models
ID IMITATION; ROBOT; RECOGNITION; ADAPTATION; MODEL; TASK
AB In this paper we describe an approach for on-line, incremental learning of full
body motion primitives from observation of human motion. The continuous observation
sequence is first partitioned into motion segments, using stochastic segmentation.
Next, motion segments are incrementally clustered and organized into a hierarchical
tree structure representing the known motion primitives. Motion primitives are
encoded using hidden Markov models, so that the same model can be used for both
motion recognition and motion generation. At the same time, the temporal
relationship between motion primitives is learned via the construction of a motion
primitive graph. The motion primitive graph can then be used to construct motions,
consisting of sequences of motion primitives. The approach is implemented and
tested during on-line observation and on the IRT humanoid robot.
C1 [Kulic, Dana] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON N2L 3G1,
Canada.
[Ott, Christian] DLR German Aerosp Ctr, Inst Robot & Mechatron, Wessling,
Germany.
[Lee, Dongheui] Tech Univ Munich, Dept Elect Engn & Informat Technol, Munich,
Germany.
[Ishikawa, Junichi; Nakamura, Yoshihiko] Univ Tokyo, Dept Mechano Informat,
Bunkyo Ku, Tokyo, Japan.
RP Kulic, D (reprint author), Univ Waterloo, Dept Elect & Comp Engn, 200 Univ Ave
W, Waterloo, ON N2L 3G1, Canada.
EM dkulic@ece.uwaterloo.ca
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102; Kulic, Dana/0000-0002-4169-2141
FU Japanese Society for the Promotion of Science [20220001]; Special
Coordination Funds for Promoting Science and Technology, 'IRT Foundation
to Support Man and Aging Society'
FX This work is supported by the Japanese Society for the Promotion of
Science ( Category S Grant-in-Aid for Scientific Research number
20220001). This research is also partially supported by Special
Coordination Funds for Promoting Science and Technology, 'IRT Foundation
to Support Man and Aging Society'.
CR Andry P, 2004, ADAPT BEHAV, V12, P117, DOI 10.1177/105971230401200203
Asfour T., 2006, P IEEE RAS INT C HUM, P40
Billard A, 2004, ROBOT AUTON SYST, V47, P69, DOI 10.1016/j.robot.2004.03.002
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Breazeal C, 2005, ARTIF LIFE, V11, P31, DOI 10.1162/1064546053278955
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Calinon Sylvain, 2007, P ACM IEEE INT C HUM, P255, DOI DOI
10.1145/1228716.1228751
Calinon S., 2007, P IEEE INT C ROB HUM, P702
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Dillmann R., 1999, P INT S ROB RES, P229
Dominey P. F., 2008, P IEEE INT C HUM ROB, P693
Erlhagen W, 2006, ROBOT AUTON SYST, V54, P353, DOI 10.1016/j.robot.2006.01.004
EZAKI H, 2000, THESIS U TOKYO
Fod A, 2002, AUTON ROBOT, V12, P39, DOI 10.1023/A:1013254724861
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Ilg W., 2004, INT J HUM ROBOT, V1, P613
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Ito M, 2004, ADAPT BEHAV, V12, P93, DOI 10.1177/105971230401200202
Jain AK, 1999, ACM COMPUT SURV, V31, P264, DOI 10.1145/331499.331504
JANUS B, 2006, THESIS U TOKYO
Janus B., 2005, P IEEE INT C ADV ROB, P411
JENKINS OC, 2004, P 21 INT C MACH LEAR, P441
Jenkins O. C., 2004, INT J HUM ROBOT, V1, P237
Kadone H., 2006, P IEEE RAS INT C HUM, P1
Kadone H., 2005, P 2005 IEEE RSJ INT, P2900
Kajita S., 2005, HUMANOID ROBOT
Kohlmorgen, 2001, ADV NEURAL INFORM PR, P793
Kovar L, 2002, P 29 ANN C COMP GRAP, P473
Kruger V, 2007, ADV ROBOTICS, V21, P1473
Kulic D., 2009, P 18 IEEE INT S ROB, P1210
KULIC D, 2008, P IEEE RJS INT C INT, P2860
Kulic D, 2007, P IEEE INT C INT ROB, P2388
Kulic D, P IEEE RSJ INT C INT, P4300
KULIC D, 2007, P INT S ROB RES, P113
KULIC D, 2007, P IEEE INT S ROB HUM, P1016
Kulic D, 2008, INT J ROBOT RES, V27, P761, DOI 10.1177/0278364908091153
Kulic D, 2008, IEEE INT CONF ROBOT, P2591, DOI 10.1109/ROBOT.2008.4543603
Kulic D, 2009, IEEE T ROBOT, V25, P1158, DOI 10.1109/TRO.2009.2026508
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Kuniyoshi Y., 1989, P INT S IND ROB, P119
KURIHARA K, 2002, P IEEE INT C ROB AUT, V2, P1241
Lee D., 2006, P IEEE RSJ INT C INT, P4994
LIU S, 1992, J DYN SYST-T ASME, V114, P220, DOI 10.1115/1.2896518
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nicolescu MN, 2001, IEEE T SYST MAN CY A, V31, P419, DOI 10.1109/3468.952716
NICOLESCU MN, 2005, IMITATION SOCIAL LEA
Nicolescu M, 2003, P 2 INT JOINT C AUT, P241
Ogata T, 2005, ADV ROBOTICS, V19, P651, DOI 10.1163/1568553054255655
Okada M, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1410, DOI 10.1109/ROBOT.2002.1014741
Ott C., 2008, P IEEE RAS INT C HUM, P399
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Schaal S, 2006, ADAPTIVE MOTION OF ANIMALS AND MACHINES, P261, DOI 10.1007/4-
431-31381-8_23
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
SIDENBLADH H, 2002, P EUR C COMP VIS, P784
STARNER T, 1995, P INT WORKSH AUT FAC, P189
Sutton RS, 1998, REINFORCEMENT LEARNI
TAKANO W, 2006, THESIS U TOKYO
TAKANO W, 2005, P IEEE RAS INT C HUM, P167
Takano W, 2006, IEEE INT CONF ROBOT, P3602, DOI 10.1109/ROBOT.2006.1642252
Taylor G. W., 2006, P C NEUR INF PROC SY, P1345
YAMAGUCHI Y, 2008, P JSME C ROB MECH
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
Yamane K, 2009, P ROB SCI SYST 2009
NR 63
TC 49
Z9 51
U1 0
U2 13
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD MAR
PY 2012
VL 31
IS 3
BP 330
EP 345
DI 10.1177/0278364911426178
PG 16
WC Robotics
SC Robotics
GA 900YK
UT WOS:000300928200005
DA 2018-01-22
ER

PT J
AU Kobayashi, F
Kojima, F
Nakamoto, H
Kida, Y
Imamura, N
Shirasawa, H
AF Kobayashi, Futoshi
Kojima, Fumio
Nakamoto, Hiroyuki
Kida, Yasuaki
Imamura, Nobuaki
Shirasawa, Hidenori
TI Slip detection with multi-axis force/torque sensor in universal robot
hand
SO INTERNATIONAL JOURNAL OF APPLIED ELECTROMAGNETICS AND MECHANICS
LA English
DT Article; Proceedings Paper
CT 15th International Symposium on Applied Electromagnetics and Mechanics
(ISEM)
CY SEP 07-09, 2011
CL Naples, ITALY
SP Univ Cassino & Lazio Meridionale, Univ Napoli Federico II, Japan Soc Appl
Electromagent & Mech, Japan Soc Maintenol, Natl Res Council (CNR), SPIN Inst,
Lecroy, Ansys, Infolytica Europe, Cedrat Grp
DE Slip detection; force/torque sensor; robot hand
ID TACTILE
AB Humanoid robot hands have received much attention in various fields. We have
developed a universal robot hand with multi-axis force/torque sensors. In order for
the robot hand to manipulate an object without dropping it, it is important to
detect slip between the object and the robot finger. Therefore, we propose a slip
detection method based on the multi-axis force/torque sensor and an anti-slip
control method based on the detected slip. The effectiveness of the proposed slip
detection and anti-slip control is verified through experiments with the universal
robot hand.
C1 [Kobayashi, Futoshi] Kobe Univ, Dept Syst Sci, Nada Ku, Kobe, Hyogo 6578501,
Japan.
[Kojima, Fumio] Kobe Univ, Org Adv Sci & Technol, Kobe, Hyogo 6578501, Japan.
[Kida, Yasuaki] BL AUTOTEC LTD, Kobe, Hyogo, Japan.
[Imamura, Nobuaki] Hiroshima Int Univ, Kure, Japan.
[Shirasawa, Hidenori] Adv Mat Proc Inst Kinki Japan, Amagasaki, Hyogo, Japan.
RP Kobayashi, F (reprint author), Kobe Univ, Dept Syst Sci, Nada Ku, Rokkodai Cho,
Kobe, Hyogo 6578501, Japan.
EM futoshi.kobayashi@port.kobe-u.ac.jp
RI Nakamoto, Hiroyuki/M-9357-2016; Kobayashi, Futoshi/M-8890-2016
OI Nakamoto, Hiroyuki/0000-0001-8259-9317; Kobayashi,
Futoshi/0000-0002-4663-6448
CR Choi B. J., 2005, P 2005 IEEE RSJ INT, V1-4, P1977
Fukui W., 2009, P 35 ANN C IEEE IND, P2225
Gunji D, 2008, IEEE INT CONF ROBOT, P2605, DOI 10.1109/ROBOT.2008.4543605
Hollerbach M., 1996, P 1 INT S HUM ROB, P83
Iwata H., 2009, P 2009 IEEE INT S SY, P129
Johnston D., 1996, P 1996 IEEE INT C RO, P661
Kaneko K., 2009, P 2009 IEEE INT C RO, P913
Melchiorri C, 2000, IEEE-ASME T MECH, V5, P235, DOI 10.1109/3516.868914
Mouri T, 2005, 2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-4, P3474, DOI 10.1109/IROS.2005.1545508
Mouri T., 2002, P INT C CONTR AUT SY, P1288
Nakamoto H., 2010, ROBOTICS 2010 CURREN, P123
TAKENAWA S, 2009, P IEEE INT C ROB AUT, P3295
TESHIGAWARA S, 2009, P IEEE INT C ROB AUT, P3289
NR 13
TC 2
Z9 2
U1 0
U2 9
PU IOS PRESS
PI AMSTERDAM
PA NIEUWE HEMWEG 6B, 1013 BG AMSTERDAM, NETHERLANDS
SN 1383-5416
J9 INT J APPL ELECTROM
JI Int. J. Appl. Electromagn. Mech.
PY 2012
VL 39
IS 1-4
BP 1047
EP 1054
DI 10.3233/JAE-2012-1577
PG 8
WC Engineering, Electrical & Electronic; Mechanics; Physics, Applied
SC Engineering; Mechanics; Physics
GA 017PM
UT WOS:000309602700142
DA 2018-01-22
ER

PT J
AU Bouyarmane, K
Kheddar, A
AF Bouyarmane, Karim
Kheddar, Abderrahmane
TI Humanoid Robot Locomotion and Manipulation Step Planning
SO ADVANCED ROBOTICS
LA English
DT Article
DE humanoid robot; legged locomotion; dexterous manipulation; inverse
kinematics; planning algorithm
ID IMPLEMENTATION; ALGORITHM; OBJECTS; FILTER; SPACES
AB We aim at planning multi-contact sequences of stances and postures for humanoid
robots. The output sequence defines the contact transitions that allow our robot to
realize different kinds of tasks, ranging from biped locomotion to dexterous
manipulation. The central component of the planning framework is a best-first
algorithm that performs a search of the contacts to be added or removed at each
step, following an input collision-free guide path, and making calls to an
optimization-based inverse kinematics solver under static equilibrium constraints.
The planner can handle systems made of multiple robots and/or manipulated objects
through a centralized multi-agent approach, opening the way for multi-robot
collaborative locomotion and manipulation planning. Results are presented in
virtual environments, with discussion on execution on the real robot HRP-2 in an
example situation. (c) 2012 Taylor & Francis and The Robotics Society of Japan
C1 AIST, CNRS AIST CRT JRL UMI3218, Tsukuba, Ibaraki 3058568, Japan.
Univ Montpellier 2, CNRS, LIRMM UMR5506, F-34095 Montpellier 5, France.
RP Bouyarmane, K (reprint author), ATR Computat Neurosci Lab, 2-2-2 Hikaridai,
Seika, Kyoto 6190288, Japan.
EM karim.bouyarmane@atr.jp
OI Bouyarmane, Karim/0000-0003-4284-0561
FU Japan Society for the Promotion of Science (JSPS) [22300071]; FP7
RoboHow.Cog project
FX This work was partially supported by the Japan Society for the Promotion
of Science (JSPS) Grant-in-Aid for Scientific Research (B), 22300071,
2010, and by the FP7 RoboHow.Cog project www.robohow.eu.
CR Abe Y, 2007, SYMPOSIUM ON COMPUTER ANIMATION 2007: ACM SIGGRAPH/ EUROGRAPHICS
SYMPOSIUM PROCEEDINGS, P249
Benallegue M., 2009, P IEEE INT C ROB AUT, P483
Bouyarmane K., 2010, P 28 ANN C ROB SOC J
Bouyarmane K., 2009, P IEEE INT C ROB AUT, P1165
Bouyarmane K., 2011, P IEEE INT C ROB AUT, P5546
Bouyarmane K., 2010, P IEEE RAS INT C HUM, P8
Bouyarmane K, 2011, IEEE INT C INT ROBOT, P4414, DOI 10.1109/IROS.2011.6048124
Bretl T, 2008, IEEE T ROBOT, V24, P794, DOI 10.1109/TRO.2008.2001360
Bruyninckx H, 1996, MECH MACH THEORY, V31, P135, DOI 10.1016/0094-114X(95)00069-
B
Chestnutt J, 2005, IEEE INT CONF ROBOT, P629
Chestnutt J., 2003, P IEEE RAS INT C HUM
Cortes J, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P2141, DOI 10.1109/ROBOT.2002.1014856
ERDMANN M, 1987, ALGORITHMICA, V2, P477, DOI 10.1007/BF01840371
Escande A., 2008, P INT S EXP ROB ATH, P293
Escande A., 2006, P IEEE RSJ INT C INT, P2974
GILBERT EG, 1988, IEEE T ROBOTIC AUTOM, V4, P193, DOI 10.1109/56.2083
Han L., 2000, P WORKSH ALG FDN ROB, P233
Hauser K., 2005, P IEEE RAS INT C HUM, P7
Hauser K, 2010, INT J ROBOT RES, V29, P897, DOI 10.1177/0278364909352098
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
Kokkevis E., 1995, Proceedings Graphics Interface '95, P10
Kuffner JJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P500, DOI
10.1109/IROS.2001.973406
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
Latombe J.-C., 1991, ROBOT MOTION PLANNIN
LaValle SM, 2001, ALGORITHMIC AND COMPUTATIONAL ROBOTICS: NEW DIRECTIONS, P293
LaValle SM, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P1671, DOI 10.1109/ROBOT.1999.770349
LaValle S. M., 2006, PLANNING ALGORITHMS
Lefebvre O, 2005, IEEE INT CONF ROBOT, P4612
Lengagne S., 2010, P IEEE RAS INT C HUM, P14
Lengagne S., 2010, ROBOTS LEARN WALK SE
Simeon T, 2002, IEEE T ROBOTIC AUTOM, V18, P42, DOI 10.1109/70.988973
Spong M, 2005, ROBOT MODELING CONTR
Torkos N, 1998, GRAPHICS INTERFACE '98 - PROCEEDINGS, P151
VANDENBERG J, 2005, P IEEE RSJ INT C INT, P430
Wachter A, 2006, MATH PROGRAM, V106, P25, DOI 10.1007/s10107-004-0559-y
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
NR 37
TC 38
Z9 38
U1 1
U2 5
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2012
VL 26
IS 10
SI SI
BP 1099
EP 1126
DI 10.1080/01691864.2012.686345
PG 28
WC Robotics
SC Robotics
GA 984ZV
UT WOS:000307233400001
DA 2018-01-22
ER

PT J
AU Yoshida, T
Nakadai, K
AF Yoshida, Takami
Nakadai, Kazuhiro
TI Audio-Visual Voice Activity Detection Based on an Utterance State
Transition Model
SO ADVANCED ROBOTICS
LA English
DT Article
DE audio-visual integration; noise robustness; voice activity detection;
lip activity detection; robot audition
ID SPEECH RECOGNITION; ROBOTS; SYSTEM
AB This paper describes improvements in Audio-Visual Voice Activity Detection (AV-
VAD) using a state transition model. Audio-Visual integration is a promising
approach to improve the noise-robustness of VAD. Our proposed AV-VAD is based on a
state transition model with four utterance states: speech, non-speech, and two
asynchronous states, a beginning motion state and an ending motion state,
corresponding to lip activity before and after voice activity, respectively. We
implemented a prototype system on an upper-torso humanoid, SIG, and evaluated the
ability of the proposed method using auditory- and/or visually-contaminated
data. Experimental results showed that the robustness of the VAD improved even when
the resolution of images was low. On average, our proposed method resulted in a 7.8
point improvement in word detection rates compared with existing AV-VAD methods.
(c) 2012 Taylor & Francis and The Robotics Society of Japan
C1 [Yoshida, Takami; Nakadai, Kazuhiro] Tokyo Inst Technol, Grad Sch Informat Sci &
Engn, Meguro Ku, Tokyo 1528552, Japan.
[Nakadai, Kazuhiro] Honda Res Inst Japan Co Ltd, Wako, Saitama 3510114, Japan.
RP Yoshida, T (reprint author), Tokyo Inst Technol, Grad Sch Informat Sci & Engn,
Meguro Ku, 2-12-1,W8-1, Tokyo 1528552, Japan.
EM yoshida@cyb.mei.titech.ac.jp
FU JSPS Fellows; [22700165]; [19100003]; [22118502]
FX The authors thank Professor Jun-ichi Imura and Professor Tomohisa
Hayakawa of the Tokyo Institute of Technology, Professor Hiroshi G.
Okuno of Kyoto University, and Doctor Rana el Kaliouby and Professor
Rosalind W. Picard of the Massachusetts Institute of Technology. This
work is partially supported by Grants-in-Aid for Scientific Research
(No. 22700165, No. 19100003, and No. 22118502) and JSPS Fellows.
CR Almajai I., 2008, P EUR SIGN PROC C EU, P123
ASANO F, 2003, P IF2003, P386
Fiscus J.G., 1997, IEEE WORKSH AUT SPEE, P347, DOI DOI 10.1109/ASRU.1997.659110
Fujimoto M, 2008, IEICE T INF SYST, VE91D, P467, DOI [10.1093/ietisy/e91-
d.3.467, 10.1093/ietisy/e9l-d.3.467]
KALIOUBY RE, 2004, P IEEE C COMP VIS PA, P154
Koiwa T., 2007, P IEEE RAS INT C INT, P1751
Kuroiwa S, 1999, SPEECH COMMUN, V27, P135, DOI 10.1016/S0167-6393(98)00072-7
LIU P, 2004, P IEEE INT C AC SPEE, P609
Murai K, 2003, IEICE T INF SYST, VE86D, P505
Nakadai K, 2004, SPEECH COMMUN, V44, P97, DOI 10.1016/j.specom.2004.10.010
Nakadai K, 2000, SEVENTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE
(AAAI-2001) / TWELFTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE
(IAAI-2000), P832
Nakadai K, 2010, ADV ROBOTICS, V24, P739, DOI 10.1163/016918610X493561
Nemer E, 2001, IEEE T SPEECH AUDI P, V9, P217, DOI 10.1109/89.905996
Okada K., 1998, NATO ASI SER, V163, P186
Potamianos G., 2001, International Journal of Speech Technology, V4, P193, DOI
10.1023/A:1011352422845
RABINER LR, 1975, AT&T TECH J, V54, P297, DOI 10.1002/j.1538-7305.1975.tb02840.x
Sodoyer D, 2006, INT CONF ACOUST SPEE, P601
Tamura S, 2005, INT CONF ACOUST SPEE, P469
Wiskott L, 1997, IEEE T PATTERN ANAL, V19, P775, DOI 10.1109/34.598235
Yamamoto S., 2006, P IEEE RSJ INT C INT, P5333
Yoshida T, 2010, IEEE INT C INT ROBOT, P988, DOI 10.1109/IROS.2010.5651205
NR 21
TC 3
Z9 3
U1 0
U2 2
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2012
VL 26
IS 10
SI SI
BP 1183
EP 1201
DI 10.1080/01691864.2012.687152
PG 19
WC Robotics
SC Robotics
GA 984ZV
UT WOS:000307233400005
DA 2018-01-22
ER

PT J
AU Itohara, T
Otsuka, T
Mizumoto, T
Lim, A
Ogata, T
Okuno, H
AF Itohara, Tatsuhiko
Otsuka, Takuma
Mizumoto, Takeshi
Lim, Angelica
Ogata, Tetsuya
Okuno, Hiroshi G.
TI A multimodal tempo and beat-tracking system based on audiovisual
information from live guitar performances
SO EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING
LA English
DT Article
ID AUDIO; MODEL
AB The aim of this paper is to improve beat-tracking for live guitar performances.
Beat-tracking estimates musical measurements such as tempo and phase, and it is
critical for achieving a synchronized ensemble performance such as musical robot
accompaniment. Beat-tracking of a live guitar performance has to deal with three
challenges: tempo fluctuation, beat pattern complexity, and environmental noise. To
cope with these problems, we devise an audiovisual integration method for
beat-tracking. The auditory beat features, tactus (phase) and tempo (period), are
estimated by Spectro-Temporal Pattern Matching (STPM), which is robust against
stationary noise. The visual beat features are estimated by tracking the position of
the hand relative to the guitar using optical flow, mean shift, and the Hough
transform. Both estimated features are integrated using a particle filter that
aggregates the multimodal information based on a beat location model and a hand
trajectory model. Experimental results confirm that our beat-tracking improves the
F-measure by 8.9 points on average over the Murata beat-tracking method, which uses
STPM and rule-based beat detection. The results also show that the system is capable
of real-time processing with a reduced number of particles while preserving
estimation accuracy. We demonstrate an ensemble with the humanoid HRP-2, which plays
the theremin with a human guitarist.
C1 [Itohara, Tatsuhiko; Otsuka, Takuma; Mizumoto, Takeshi; Lim, Angelica; Ogata,
Tetsuya; Okuno, Hiroshi G.] Kyoto Univ, Grad Sch Informat, Sakyo Ku, Kyoto, Japan.
RP Itohara, T (reprint author), Kyoto Univ, Grad Sch Informat, Sakyo Ku, Kyoto,
Japan.
EM itohara@kuis.kyoto-u.ac.jp
OI Okuno, Hiroshi/0000-0002-8704-4318; Ogata, Tetsuya/0000-0001-7015-0379
FU JSPS; Kyoto University's Global COE
FX This research was supported in part by a JSPS Grant-in-Aid for
Scientific Research (S) and in part by Kyoto University's Global COE.
CR BALLARD DH, 1981, PATTERN RECOGN, V13, P111, DOI 10.1016/0031-3203(81)90009-1
Cemgil AT, 2002, P INT COMP MUS C, P419
COMANICIU D, 2002, PATTERN ANAL MACHINE, V24, P603, DOI DOI 10.1109/34.1000236
Dixon S, 2000, P 14 EUR C ART INT E, P626
FISCHLER MA, 1981, COMMUN ACM, V24, P381, DOI 10.1145/358669.358692
Fukunaga Keinosuke, 1990, INTRO STAT PATTERN R
Goto M, 2001, J NEW MUSIC RES, V30, P159, DOI 10.1076/jnmr.30.2.159.7114
Hainsworth S, 2003, P IEEE WORKSH APPL S, P91
Ince G, 2011, P IEEE INT C ROB AUT, P3623
Itohara T, HRP 2 FOLLOWS GUITAR
Kalman R. E., 1960, J BASIC ENG, V82, P35, DOI DOI 10.1115/1.3662552
Klapuri AP, 2006, IEEE T AUDIO SPEECH, V14, P342, DOI 10.1109/TSA.2005.854090
Lim A, 2010, IEEE INT C INT ROBOT, P1964, DOI 10.1109/IROS.2010.5650427
LUCAS BD, 1981, P 7 INT JOINT C ART, P674
Miyazaki D, 2003, NINTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS I
AND II, PROCEEDINGS, P982
Mizumoto T., 2010, P IEEE RSJ 2010 WORK, P159
Mizumoto T, 2010, IEEE INT C INT ROBOT, P1957, DOI 10.1109/IROS.2010.5650364
Murata K, 2008, P 8 IEEE RAS INT C H, P79
Nickel K., 2005, P 7 INT C MULT INT I, P61
Otsuka T, 2010, PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL
INTELLIGENCE (AAAI-10), P1238
Pan Y., 2010, P 2010 C NEW INT MUS, P166
Petersen K, 2008, P IEEE RSJ INT C INT, P313
Rosenfeld A., 1982, DIGITAL PICTURE PROC, V1
Rosenfeld A., 1982, DIGITAL PICTURE PROC, V2
Sorenson EH, 1985, KALMAN FILTERING THE
Takeda R., 2007, P IEEE RSJ INT C INT
von Mises R, 1918, PHYS Z, V19, P490
Weinberg G, 2009, P NIME, P70
Whiteley N., 2006, P 7 INT C MUS INF RE, P29
NR 29
TC 3
Z9 3
U1 0
U2 11
PU SPRINGER INTERNATIONAL PUBLISHING AG
PI CHAM
PA GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND
SN 1687-4722
J9 EURASIP J AUDIO SPEE
JI EURASIP J. Audio Speech Music Process.
PY 2012
AR 6
DI 10.1186/1687-4722-2012-6
PG 17
WC Acoustics; Engineering, Electrical & Electronic
SC Acoustics; Engineering
GA 953XP
UT WOS:000304907400001
OA gold
DA 2018-01-22
ER

PT J
AU Hosoda, K
Sekimoto, S
Nishigori, Y
Takamuku, S
Ikemoto, S
AF Hosoda, Koh
Sekimoto, Shunsuke
Nishigori, Yoichi
Takamuku, Shinya
Ikemoto, Shuhei
TI Anthropomorphic Muscular-Skeletal Robotic Upper Limb for Understanding
Embodied Intelligence
SO ADVANCED ROBOTICS
LA English
DT Article
DE Upper limb; anthropomorphic; embodied intelligence; body compliance
ID HUMANOID ROBOT; DYNAMIC TOUCH
AB In this paper, we describe an anthropomorphic muscular-skeletal robotic upper
limb and focus on its soft interaction with the environment. Two experiments are
conducted to demonstrate the ability of the system: object recognition by dynamic
touch and adaptive door opening. The first experiment shows that the compliant
robot is advantageous for categorizing an object by shaking and the second
experiment shows that the human-comparable compliant robot can open a door without
precise control. The robot is expected to have anisotropic compliance comparable to
that of a human, which can be utilized to realize human-like adaptive behavior. (C)
Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2012
C1 [Hosoda, Koh; Sekimoto, Shunsuke; Nishigori, Yoichi; Takamuku, Shinya; Ikemoto,
Shuhei] Osaka Univ, Grad Sch Informat Sci & Technol, Dept Multimedia Engn, Suita,
Osaka 5650871, Japan.
RP Hosoda, K (reprint author), Osaka Univ, Grad Sch Informat Sci & Technol, Dept
Multimedia Engn, 1-5 Yamadaoka, Suita, Osaka 5650871, Japan.
EM koh.hosoda@ist.osaka-u.ac.jp
CR Albu-Schaffer A, 2008, IEEE ROBOT AUTOM MAG, V15, P20, DOI
10.1109/MRA.2008.927979
Bergquist T., 2009, P IROS 2009 WORKSH S
Caldwell DG, 1998, IEEE INT CONF ROBOT, P3053, DOI 10.1109/ROBOT.1998.680894
Chitta S, 2010, IEEE INT CONF ROBOT, P1799, DOI 10.1109/ROBOT.2010.5509475
Filippini R, 2008, IEEE ROBOT AUTOM MAG, V15, P31, DOI 10.1109/MRA.2008.927696
Knight R., 2006, P AISB06 S BIOL INSP
Hosoda K., 2010, P IEEE RSJ INT C INT, P1236
Iriki A, 2001, NEUROSCI RES, V40, P163, DOI 10.1016/S0168-0102(01)00225-5
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Kapandji I. A., 1982, PHYSL JOINTS, V1
Nabeshima C., 2007, P IEEE INT C DEV LEA
Okada K., 2005, P IEEE INT C MECH AU, P1772
Pfeifer R., 2007, BODY SHAPES WAY YOU
Saal HP, 2010, IEEE INT C INT ROBOT, P916, DOI 10.1109/IROS.2010.5649191
Sinapov J., 2010, P 9 IEEE INT C DEV L, P126
SODEYAMA Y, 2008, P INT C INT ROB SYST, P1465
Sugahara A., 2010, J ROBOTICS MECHATRON, V22, P315
Sugimoto S., 2010, IEEE RSJ INT C INT R, P3049
Suzuki M, 2006, ADV ROBOTICS, V20, P233, DOI 10.1163/156855306775525785
Suzuki T., 2008, P IEEE RSJ INT C INT, P846
Takamuku S., 2008, P IEEE RSJ INT C INT, P3212, DOI DOI
10.1I09/IROS.2008.4651175
Takamuku S, 2008, ADV ROBOTICS, V22, P1143, DOI 10.1163/156855308X324820
Tondu B, 2005, INT J ROBOT RES, V24, P257, DOI 10.1177/0278364905052437
Turvey MT, 1996, AM PSYCHOL, V51, P1134, DOI 10.1037/0003-066X.51.11.1134
Van Damme Michael, 2009, International Journal of Robotics Research, V28, P266,
DOI 10.1177/0278364908095842
NR 25
TC 7
Z9 7
U1 0
U2 8
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2012
VL 26
IS 7
BP 729
EP 744
DI 10.1163/156855312X625371
PG 16
WC Robotics
SC Robotics
GA 927PK
UT WOS:000302920200004
DA 2018-01-22
ER

PT J
AU Koike, H
Kida, K
Kanemasu, K
Santos, EC
Rozwadowska, J
Uryu, M
Saruwatari, K
Honda, T
AF Koike, Hitonobu
Kida, Katsuyuki
Kanemasu, Kenji
Santos, Edson Costa
Rozwadowska, Justyna
Uryu, Megumi
Saruwatari, Kenichi
Honda, Takashi
TI Influence of Wear and Thermal Deformation on Machined PEEK Plastic Bush
and Ti Crank Shaft
SO POLYMERS & POLYMER COMPOSITES
LA English
DT Article; Proceedings Paper
CT 1st International Congress on Advanced Materials
CY MAY 13-16, 2011
CL Univ Jinan, Jinan, PEOPLES R CHINA
HO Univ Jinan
DE Rolling contact fatigue; wear; thermal deformation; PEEK; titanium;
robot joints
AB In biped walking humanoid robots, the toughness, durability, and light weight of
joint parts are crucial factors due to the parts' constant exposure to high torque
and loads.
Such ergonomically challenging conditions create the need for joint systems
comprising independent elements capable of keeping the component operable for long
periods of time. In our work, we focused on wear and thermal deformation in two
different grades of both poly-ether-ether-ketone (PEEK) and polyoxymethylene (POM)
plastic bushes. The components used for investigation were bushes typically
employed in speed reduction devices in joint models for biped walking humanoid
robots. In such joint systems, plastic bushes are directly connected to a
crankshaft, playing an important role in the robot's movement ability. In order to
acquire the knowledge of how to build more efficient systems, the influence of the
titanium crankshaft roughness on the frictional heat occurring between the shaft
and the polymer bush, as well as the input axis-output axis backlash, requires close
examination. Based on Rolling Contact Fatigue tests, we established the optimal
machining conditions for the crank shafts and bushes. PEEK, being superior to the
other tested polymers in terms of glass transition temperature, wear toughness, and
thermal deformation, was found to be the most suitable material for our
investigation.
C1 [Koike, Hitonobu; Kida, Katsuyuki; Santos, Edson Costa; Rozwadowska, Justyna;
Uryu, Megumi; Saruwatari, Kenichi; Honda, Takashi] Kyushu Univ, Nishi Ku, Fukuoka
8190395, Japan.
[Kanemasu, Kenji] Yoshinori Ind, Yodogawa Ku, Osaka 5550032, Japan.
RP Koike, H (reprint author), Kyushu Univ, Nishi Ku, 744 Motooka, Fukuoka 8190395,
Japan.
EM hitonobu.koike@gmail.com; kida@mech.kyushu-u.ac.jp;
kanemasu@yosinori.co.jp; edson.costasantos@gmail.com;
j.rozwadowska@mech.kyushu-u.ac.jp; saruwatari@mech.kyushu-u.ac.jp;
m.uryu@mech.kyushu-u.ac.jp; 3TE10088K@s.kyushu-u.ac.jp
RI Honda, Takashi/G-2370-2011; Kida, Katsuyuki/G-2315-2011; Uryu,
Megumi/E-3896-2012; Koike, Hitonobu/G-3699-2011
OI Kida, Katsuyuki/0000-0002-9801-5205; Uryu, Megumi/0000-0002-2991-504X;
Koike, Hitonobu/0000-0002-1307-7045
CR Akagaki T., 2006, TRIBOLOGIST, V52, P126
Gotouda K, 2008, JSME, V14, P37
Hamamatsu H., 2009, VIBRATION SUPPRESSIO, P55
Haraguchi R., 2005, JSME, p[71, 219]
Honda T., 2008, JSMS, V52, P428
Honda T., 2009, P JAST TRIB C TOK
Koike H., 2010, ADV MAT RES, V1288, P154
Kominami T., 2007, JSME, P609
Nagamura K., 2007, 123 JSME, V07-15, P103
Nagashima H., 2006, JSME, P9
Oguri T, 2006, JSME, V602, P145
Seto K., 2001, JSME, V018-1, P191
Shirokoshi N., 2000, JSME, V66, P646
STOLARSKI TA, 1992, WEAR, V158, P71, DOI 10.1016/0043-1648(92)90031-3
Tanaka E., 2008, JSME, V08-12, P221
Tsukamoto T., 1997, JSME, V63-611, P96
Yamada Y., 2004, TSJ, P11
Yamada Y, 2007, J JPN SOC TRIBOLOGIS, V52, P198
Yamamoto Y., 2004, WEAR, V257, P184
NR 19
TC 0
Z9 0
U1 1
U2 2
PU ISMITHERS
PI SHROPSHIRE
PA SHAWBURY, SHREWSBURY, SHROPSHIRE, SY4 4NR, ENGLAND
SN 0967-3911
J9 POLYM POLYM COMPOS
JI Polym. Polym. Compos.
PY 2012
VL 20
IS 1-2
BP 117
EP 122
PG 6
WC Materials Science, Characterization & Testing; Materials Science,
Composites; Polymer Science
SC Materials Science; Polymer Science
GA 895YX
UT WOS:000300533700024
DA 2018-01-22
ER

PT J
AU Fukui, K
Ishikawa, Y
Shintaku, E
Honda, M
Takanishi, A
AF Fukui, Kotaro
Ishikawa, Yuma
Shintaku, Eiji
Honda, Masaaki
Takanishi, Atsuo
TI Production of Various Vocal Cord Vibrations Using a Mechanical Model for
an Anthropomorphic Talking Robot
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robot; speech production; talking robot; voice quality; vocal
cords
AB We developed a three-dimensional mechanical vocal cord model for Waseda Talker
No. 7 (WT-7), an anthropomorphic talking robot, for generating speech sounds with
various voice qualities. The vocal cord model is a cover model that has two thin
folds made of thermoplastic material. The model self-oscillates by airflow
exhausted from the lung model and generates the glottal sound source, which is fed
into the vocal tract for generating the speech sound. Using the vocal cord model,
breathy and creaky voices, as well as the modal (normal) voice, were produced in a
manner similar to the human laryngeal control. The breathy voice is characterized
by a noisy component mixed with the periodic glottal sound source and the creaky
voice is characterized by an extremely low-pitch vibration. The breathy voice was
produced by adjusting the glottal opening and generating the turbulence noise by
the airflow just above the glottis. The creaky voice was produced by adjusting the
vocal cord tension, the sub-glottal pressure and the vibration mass so as to
generate a double-pitch vibration with a long pitch interval. The vocal cord model
used to produce these voice qualities was evaluated in terms of the vibration
pattern as measured by a high-speed camera, the glottal airflow and the acoustic
characteristics of the glottal sound source, as compared to the data for a human.
(C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2012
C1 [Honda, Masaaki] Waseda Univ, Dept Sport Sci, Tokorozawa, Saitama, Japan.
[Fukui, Kotaro; Ishikawa, Yuma; Shintaku, Eiji; Takanishi, Atsuo] Waseda Univ,
Dept Modern Mech Engn, Shinjuku Ku, TWIns, Tokyo, Japan.
RP Honda, M (reprint author), Waseda Univ, Dept Sport Sci, 415,2-579-15 Mikajima,
Tokorozawa, Saitama, Japan.
EM hon@waseda.jp
FU MEXT, Japan [19300063]
FX This work was supported in part by a Grant-in-Aid for Scientific
Research (A) 19300063 from MEXT, Japan. The authors would like to thank
Solid Works KK for providing the CAD and FEM software, Kuraray Co., Ltd
for providing the Septon and advising us on its use, and, finally, the
members of the ATR BioPhysical Imaging Project for their advice on the
biology of the human speech mechanism.
CR Flanagan J., 1972, SPEECH ANAL SYNTHESI
FOURCIN AJ, 2000, VOICE QUALITY MEASUR, P285
Fukui K., 2010, J PHONETIC SOC JAPAN, V14, P57
FUKUI K, 2005, P 2005 IEEE INT C RO, P1449
FUKUI K, 2005, P 2005 IEEE RSJ INT, P272
Fukui K, 2007, IEEE INT CONF ROBOT, P2922, DOI 10.1109/ROBOT.2007.363915
Kakita Y, 1981, VOCAL FOLD PHYSL, P25
LAVER John, 1980, PHONETIC DESCRIPTION
ROTHENBERG M, 1973, J ACOUST SOC AM, V53, P1632, DOI 10.1121/1.1913513
SAWADA H, 2004, P 2004 IEEE RSJ INT, P1920
Titze I. R., 1994, PRINCIPLES VOICE PRO
Umeda N, 1966, J ACOUSTICAL SOC JAP, V22, P195
NR 12
TC 0
Z9 0
U1 0
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2012
VL 26
IS 1-2
BP 105
EP 120
DI 10.1163/016918611X607392
PG 16
WC Robotics
SC Robotics
GA 888XI
UT WOS:000300037500006
DA 2018-01-22
ER

PT J
AU Minamiyama, F
Koga, H
Kobayashi, K
Katayama, M
AF Minamiyama, Fumikazu
Koga, Hidetsugu
Kobayashi, Kentaro
Katayama, Masaaki
TI Power Supply Overlaid Communication with Common Clock Delivery for
Cooperative Motion Control
SO IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND
COMPUTER SCIENCES
LA English
DT Article
DE motion control; synchronous motion; power line communication (PLC);
common clock delivery
AB For the control of multiple servomotors in a humanoid robot, a communication
system is proposed. In the system, DC electric power, command/response signals and
a common clock signal for precise synchronous movement of the servomotors are
transmitted via the same wiring with a multi-drop bus. Because of the bandwidth
limitation, the common clock signal and the command/response signals overlap each
other. It is confirmed that the coexistence of both signals is possible by using
interference cancellation at the reception of command/response signals.
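A generic illustration of cancelling a known overlaid waveform at the receiver, assuming a simple baseband model with BPSK-like symbols; this is only a least-squares sketch of the idea, not the modulation or cancellation scheme of the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    clock = np.sign(np.sin(2 * np.pi * np.arange(n) / 50.0))   # known clock waveform
    command = rng.choice([-1.0, 1.0], size=n)                   # command/response bits
    received = command + 0.8 * clock + 0.1 * rng.normal(size=n)

    # the receiver knows the clock waveform, so estimate its received amplitude
    # by least squares and subtract it before detecting the command symbols
    alpha = np.dot(received, clock) / np.dot(clock, clock)
    detected = np.sign(received - alpha * clock)
    print("symbol error rate:", np.mean(detected != command))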
C1 [Minamiyama, Fumikazu] Nagoya Univ, Grad Sch Engn, Dept Elect Engn & Comp Sci,
Nagoya, Aichi 4648603, Japan.
[Koga, Hidetsugu] YASKAWA Elect Corp, Kitakyushu, Fukuoka 8038530, Japan.
[Kobayashi, Kentaro; Katayama, Masaaki] Nagoya Univ, EcoTopia Sci Inst, Nagoya,
Aichi 4648603, Japan.
RP Minamiyama, F (reprint author), Nagoya Univ, Grad Sch Engn, Dept Elect Engn &
Comp Sci, Nagoya, Aichi 4648603, Japan.
EM minami@katayama.nuee.nagoya-u.ac.jp
RI Katayama, Masaaki/F-3755-2010
CR Minamiyama F., 2011, 15 IEEE INT S POW LI, P370
Minamiyama F., 2010, WBS201012 IEICE
Minamiyama F., 2010, RRRC201021 IEICE
TANTARATANA S, 1995, IEEE T COMMUN, V43, P1738, DOI 10.1109/26.380224
NR 4
TC 0
Z9 0
U1 0
U2 3
PU IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG
PI TOKYO
PA KIKAI-SHINKO-KAIKAN BLDG, 3-5-8, SHIBA-KOEN, MINATO-KU, TOKYO, 105-0011,
JAPAN
SN 0916-8508
EI 1745-1337
J9 IEICE T FUND ELECTR
JI IEICE Trans. Fundam. Electron. Commun. Comput. Sci.
PD DEC
PY 2011
VL E94A
IS 12
BP 2773
EP 2775
DI 10.1587/transfun.E94.A.2773
PG 3
WC Computer Science, Hardware & Architecture; Computer Science, Information
Systems; Engineering, Electrical & Electronic
SC Computer Science; Engineering
GA 865IO
UT WOS:000298304800035
DA 2018-01-22
ER

PT J
AU Corke, PI
Inoue, H
AF Corke, Peter I.
Inoue, Hirochika
TI Humanoid Robots
SO IEEE ROBOTICS & AUTOMATION MAGAZINE
LA English
DT Editorial Material
C1 [Inoue, Hirochika] Univ Tokyo, Tokyo 1138654, Japan.
RI Corke, Peter/C-6770-2009
OI Corke, Peter/0000-0001-6650-367X
NR 0
TC 0
Z9 0
U1 0
U2 7
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1070-9932
J9 IEEE ROBOT AUTOM MAG
JI IEEE Robot. Autom. Mag.
PD DEC
PY 2011
VL 18
IS 4
BP 112
EP U99
DI 10.1109/MRA.2011.943247
PG 2
WC Automation & Control Systems; Robotics
SC Automation & Control Systems; Robotics
GA 861BT
UT WOS:000297994600020
DA 2018-01-22
ER

PT J
AU Lengagne, S
Ramdani, N
Fraisse, P
AF Lengagne, Sebastien
Ramdani, Nacim
Fraisse, Philippe
TI Planning and Fast Replanning Safe Motions for Humanoid Robots
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Discretization; feasible subset; humanoid robots; inequality constraint;
interval analysis
ID IMPLEMENTATION; MANIPULATORS; MOVEMENT; FILTER
AB This paper introduces effective numerical methods for the planning and fast
replanning of safe motions to ensure the safety, balance, and integrity of humanoid
robots over the whole motion duration. Our safe methods do not depend on, nor are
connected to, any type of modeling or constraints. To plan safe motions, certain
constraints have to be satisfied over a continuous interval of time. Classical
methods revert to time-grid discretization, which can be risky for the robot. We
introduce a hybrid method to plan safe motions, which combines a classical unsafe
method with a verification step that checks constraint violation and computes
excess by the usage of interval analysis. When the robot meets unexpected
situations, it has to replan a new motion, which is often too time consuming.
Hence, we introduce a new method to rapidly replan safe motions, i.e., in less than
2 s CPU time. It computes offline feasible subsets in the vicinity of safe motions
and finds online a solution in these subsets without actually recomputing the
nonlinear constraints. Our methods are validated by the use of the HOAP-3 robot, where
the motions are run with no balance controller.
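A toy illustration of the interval-analysis verification idea, assuming a scalar constraint g(t) <= 0 with a natural interval extension; the constraint and tolerance are made up and unrelated to the robot's actual balance constraints:

    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

    def may_violate(g, t0, t1, tol=1e-3):
        # g maps an Interval of time to an Interval bounding the constraint value;
        # the requirement is "g(t) <= 0 for all t in [t0, t1]"
        box = g(Interval(t0, t1))
        if box.hi <= 0.0:
            return False            # proven satisfied over the whole interval
        if box.lo > 0.0:
            return True             # proven violated somewhere in the interval
        if t1 - t0 < tol:
            return True             # undecided at this resolution: flag as unsafe
        mid = 0.5 * (t0 + t1)       # otherwise bisect the time interval
        return may_violate(g, t0, mid, tol) or may_violate(g, mid, t1, tol)

    # made-up constraint g(t) = t^2 - 0.8, which leaves the safe set near t = 0.9
    g = lambda t: t * t + Interval(-0.8, -0.8)
    print(may_violate(g, 0.0, 1.0))    # -> True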
C1 [Lengagne, Sebastien; Ramdani, Nacim; Fraisse, Philippe] Univ Montpellier 2,
CNRS, LIRMM, UMR 5506, F-34392 Montpellier, France.
[Lengagne, Sebastien] INRIA Sophia Antipolis Mediterranee, DEMAR Project Team,
F-6300 Nice, France.
[Lengagne, Sebastien] CNRS AIST JRL, Tsukuba, Ibaraki 3058568, Japan.
[Ramdani, Nacim] Univ Orleans, PRISME, F-18020 Bourges, France.
RP Lengagne, S (reprint author), Univ Montpellier 2, CNRS, LIRMM, UMR 5506, F-34392
Montpellier, France.
EM sebastien.lengagne@aist.go.jp; nacim.ramdani@bourges.univ-orleans.fr
OI Ramdani, Nacim/0000-0003-1491-3751
CR BEHNKE S, 2006, P 6 IEEE RAS INT C H, P497
CARPIN S, 2006, P WORKSH HUM SOCC RO, P71
De Boor C., 1978, PRATICAL GUIDE SPLIN, V27
EVRARD P, 2009, P 2009 IEEE RSJ INT, P5635
HETTICH R, 1993, SIAM REV, V35, P380, DOI 10.1137/1035089
Jaulin L., 2001, APPL INTERVAL ANAL
KAGAMI S, 2001, P ALG PERSP WORKSH A, P329
Kajita S., 2003, P IEEE INT C ROB AUT, V2, P1620
Kajita S., 2003, P IEEE RSJ INT C INT, V2, P1644
Kuffner J. J., 2001, P 2001 IEEE RSJ INT, V1, P500
Lawrence C., 1997, USERS GUIDE CFSQP VE
Lee SH, 2005, IEEE T ROBOT, V21, P657, DOI 10.1109/TRO.2004.842336
Lee SH, 2007, IEEE INT CONF ROBOT, P4667, DOI 10.1109/ROBOT.2007.364198
LENGAGNE S, 2009, P IEEE RSJ INT C INT, P441
LENGAGNE S, 2009, P EEE INT C ROB AUT, P1669
LENGAGNE S, 2007, P IEEE RAS 7 INT C H, P312
Miossec S., 2006, P 2006 IEEE INT C RO, P299
Moore R. E., 1966, INTERVAL ANAL
Nishiwaki K, 2001, IEEE INT CONF ROBOT, P4110, DOI 10.1109/ROBOT.2001.933260
Piazzi A, 2000, IEEE T IND ELECTRON, V47, P140, DOI 10.1109/41.824136
Piazzi A, 1998, INT J CONTROL, V71, P631, DOI 10.1080/002071798221713
RAMDANI N, 2008, P IEEE RSJ INT C INT, P2410
Reemtsen R, 1998, SEMIINFINITE PROGRAM
Reemtsen R., 1998, NONCONVEX OPTIMIZATI
Stasse O., 2009, P IEEE RAS INT C HUM, P284
SULEIMAN W, 2007, P IEEE RAS 7 INT C H, P180
Tak S, 2000, COMPUT GRAPH FORUM, V19, pC437
TSUJITA T, P IEEE ASME INT C AD, P1024
UNO Y, 1989, BIOL CYBERN, V61, P89
von Stryk O., 1992, Annals of Operations Research, V37, P357, DOI
10.1007/BF02071065
VONSTRYK O, 1993, INT S NUM M, P129
VUKOBRAT.M, 1969, IEEE T BIO-MED ENG, VBM16, P1, DOI 10.1109/TBME.1969.4502596
Wachter A, 2006, MATH PROGRAM, V106, P25, DOI 10.1007/s10107-004-0559-y
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
YOSHIDA E, 2005, P 5 IEEE RAS INT C H
NR 35
TC 11
Z9 11
U1 0
U2 6
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD DEC
PY 2011
VL 27
IS 6
BP 1095
EP 1106
DI 10.1109/TRO.2011.2162998
PG 12
WC Robotics
SC Robotics
GA 858RW
UT WOS:000297821400006
DA 2018-01-22
ER

PT J
AU Koeda, M
Ito, T
Yoshikawa, T
AF Koeda, Masanao
Ito, Toshitatsu
Yoshikawa, Tsuneo
TI Shuffle turning in humanoid robots through load distribution control of
the soles
SO ROBOTICA
LA English
DT Article
DE Humanoid robot; Shuffle turn; Slip; Load distribution; Sole
AB This paper proposes a novel shuffle turning method for a humanoid robot that
controls the load distribution of the soles of the robot's feet. Turning motions of
a humanoid robot are conventionally performed through a repeated foot stepping
motion. However, this motion is inefficient and time-consuming. In our method, the
feet are slid along the floor without a stepping movement. In order to reduce the
friction with the floor and to achieve the correct shuffle turning motion, a non-
uniform load distribution of the soles is controlled. Experiments using a humanoid
robot were conducted on two floors with differing amounts of friction, and the
validity of the proposed method was verified.
C1 [Koeda, Masanao] Osaka Electrocommun Univ, Fac Informat Sci & Arts, Dept Comp
Sci, Osaka 5750063, Japan.
[Ito, Toshitatsu; Yoshikawa, Tsuneo] Ritsumeikan Univ, Coll Informat Sci & Engn,
Dept Human & Comp Intelligence, Kusatsu, Shiga 5258577, Japan.
RP Koeda, M (reprint author), Osaka Electrocommun Univ, Fac Informat Sci & Arts,
Dept Comp Sci, Kiyotaki 1130-70, Osaka 5750063, Japan.
EM koeda@isc.osakac.ac.jp
FU CLAWAR Association
FX This paper was originally submitted under the auspices of the CLAWAR
Association. It is an extension of work presented at CLAWAR 2009: The
12th International Conference on Climbing and Walking Robots and the
Support Technologies for Mobile Machines, Istanbul, Turkey.
CR Harada K., 2007, P IEEE RSJ INT C INT, P4227
Hashimoto K., 2010, P 2010 JSME C ROB ME
Kajita S, 2004, P IEEE RSJ INT C INT, P3546
Kaneko K., 2005, P IEEE RSJ INT C INT, P1457
Koeda M., 2007, 25 ANN C ROB SOC JAP
Miura K., 2010, P 2010 IEEE INT C RO, P4249
Miura K., 2008, P 8 IEEE RAS INT C H, P279
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
Ogura Y, 2005, P 2005 IEEE INT C RO, P605
Park J. H., 2001, P IEEE INT C ROB AUT, P4134
Silva FM, 2001, IEEE INT CONF ROBOT, P4122, DOI 10.1109/ROBOT.2001.933262
Takemura H, 2005, ROBOT AUTON SYST, V53, P124, DOI 10.1016/j.robot.2005.07.002
Nishikawa M., 2005, Japanese Patent Application, Patent No. [2005-238407,
2005238407]
NR 13
TC 2
Z9 2
U1 0
U2 2
PU CAMBRIDGE UNIV PRESS
PI NEW YORK
PA 32 AVENUE OF THE AMERICAS, NEW YORK, NY 10013-2473 USA
SN 0263-5747
EI 1469-8668
J9 ROBOTICA
JI Robotica
PD DEC
PY 2011
VL 29
BP 1017
EP 1024
DI 10.1017/S0263574711000269
PN 7
PG 8
WC Robotics
SC Robotics
GA 854AX
UT WOS:000297468000007
DA 2018-01-22
ER

PT J
AU Konno, A
Myojin, T
Matsumoto, T
Tsujita, T
Uchiyama, M
AF Konno, Atsushi
Myojin, Tomoya
Matsumoto, Takaaki
Tsujita, Teppei
Uchiyama, Masaru
TI An impact dynamics model and sequential optimization to generate impact
motions for a humanoid robot
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE Impact dynamics; humanoid robot
ID MANIPULATORS
AB When a human needs to generate a large force, they will try to apply an
impulsive force with dynamic cooperation of the whole body. In this paper we first
discuss impact dynamics of humanoid robots and then propose a way to generate
impact motions for a humanoid robot to exert a large force while keeping a balance.
In the impact motion generation, Sequential Quadratic Programming (SQP) is used to
solve a non-linear programming problem in which an objective function and
constraints may be non-linear functions of the motion parameters. Impact motions
are generated using SQP so that the impact force is maximized while the angular
momentum is minimized. Breaking wooden boards with a Karate chop is taken as a case
study because it is a typical example of tasks that utilize impulsive force. A
humanoid robot motion for the Karate chop is generated by the proposed method. In
order to validate the designed motion, experiments are carried out using a small
humanoid robot Fujitsu HOAP-2. The Karate-chop motion generated by the proposed
method is compared with the motion designed by a human. The results of the
board-breaking experiments clearly show the effectiveness of the proposed method.
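A small sketch of the maximize-impact-force-while-bounding-angular-momentum trade-off, using SciPy's SLSQP as a stand-in SQP solver on a made-up planar two-link arm; none of the numbers correspond to HOAP-2:

    import numpy as np
    from scipy.optimize import minimize

    l1, l2 = 0.3, 0.3                    # link lengths [m] (illustrative)
    I1, I2 = 0.02, 0.01                  # effective inertias [kg m^2]
    q = np.array([0.6, -1.0])            # fixed configuration at impact [rad]

    def jacobian(q):
        s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
        c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
        return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                         [ l1 * c1 + l2 * c12,  l2 * c12]])

    hand_speed = lambda dq: np.linalg.norm(jacobian(q) @ dq)   # impact-force proxy
    ang_momentum = lambda dq: I1 * dq[0] + I2 * (dq[0] + dq[1])
    L_MAX = 0.05                         # angular-momentum bound [kg m^2/s]

    res = minimize(lambda dq: -hand_speed(dq),      # SLSQP minimizes, so negate
                   x0=np.array([0.5, 0.5]), method="SLSQP",
                   bounds=[(-6.0, 6.0), (-6.0, 6.0)],   # joint-rate limits [rad/s]
                   constraints=[{"type": "ineq",
                                 "fun": lambda dq: L_MAX - abs(ang_momentum(dq))}])
    print(res.x, hand_speed(res.x))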
C1 [Konno, Atsushi] Tohoku Univ, Dept Aerosp Engn, Aoba Ku, Sendai, Miyagi 9808579,
Japan.
RP Konno, A (reprint author), Tohoku Univ, Dept Aerosp Engn, Aoba Ku, 6-6-01
Aramaki Aza Aoba, Sendai, Miyagi 9808579, Japan.
EM konno@space.mech.tohoku.ac.jp
FU NEDO [05A30703a]; JSPS [21300073]
FX This work was supported by the NEDO Industrial Technology Research Grant
Program (project ID 05A30703a) and by a JSPS Grant-in-Aid for Scientific
Research (B) (grant number 21300073).
CR Arisumi H, 2007, IEEE INT CONF ROBOT, P2661, DOI 10.1109/ROBOT.2007.363867
Asada H., 1987, Proceedings of the 1987 IEEE International Conference on
Robotics and Automation (Cat. No.87CH2413-3), P751
Bobrow JE, 2001, J ROBOTIC SYST, V18, P785, DOI 10.1002/rob.8116
Bowling A, 2005, IEEE T ROBOT, V21, P115, DOI 10.1109/TRO.2004.837243
Brach RM, 1991, MECH IMPACT DYNAMICS
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
HARADA K, 2003, P IEEE INT C ROB AUT, P1627
HWANG Y, 2003, P IEEE RSJ INT C INT, P1901
Izumi T., 1993, J RSJ, V11, P436
KITAGAKI K, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P1928, DOI 10.1109/ROBOT.1992.219947
Konno A, 2007, HUMANOID ROBOTS NEW, P521
KONNO A, 2005, P IEEE RSJ INT C INT, P1788
Mandal N, 1995, INT J ROBOT RES, V12, P67
MATSUMOTO T, 2006, P IEEE RSJ INT C INT, P5919, DOI DOI 10.1109/IROS.2006.282473
MILLS JK, 1993, INT J ROBOT RES, V12, P146, DOI 10.1177/027836499301200204
Nagata K, 1990, J SICE, V26, P435
Nenchev DN, 1998, IEEE INT CONF ROBOT, P913, DOI 10.1109/ROBOT.1998.677104
Nenchev DN, 2008, ROBOTICA, V26, P643, DOI 10.1017/S0263574708004268
PAGILLA PR, 1994, IEEE-ASME T MECH, V9, P123
So B. R., 2004, P 2004 IEEE RSJ INT, P1972
Tagawa T, 2003, P IEEE INT C ROB AUT, P2031
Takase K., 1990, Journal of the Society of Instrument and Control Engineers,
V29, P213
Tarn TJ, 1996, IEEE CONTR SYST MAG, V16, P32, DOI 10.1109/37.482135
Uchiyama M, 1975, CONTROL ALGORITHM CO
VOLPE R, 1993, INT J ROBOT RES, V12, P351, DOI 10.1177/027836499301200403
Vukobratovic M., 1975, LEGGED LOCOMOTION RO
Vukobratovic M., 2001, P IEEE RAS INT C HUM, P237
WALKER ID, 1994, IEEE T ROBOTIC AUTOM, V10, P670, DOI 10.1109/70.326571
Yoshida K, 2003, INT J ROBOT RES, V22, P321, DOI 10.1177/0278364903022005003
YOSHIDA K, 1995, IEEE INT CONF ROBOT, P1271
YOSHIDA K, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P899, DOI 10.1109/ROBOT.1992.220182
YOSHIKAWA T, 1985, J ROBOTIC SYST, V2, P113
ZHENG Y, 1985, J ROBOTIC SYST, V2, P289, DOI 10.1002/rob.4620020307
NR 33
TC 11
Z9 11
U1 0
U2 4
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD NOV
PY 2011
VL 30
IS 13
BP 1596
EP 1608
DI 10.1177/0278364911405870
PG 13
WC Robotics
SC Robotics
GA 837MB
UT WOS:000296206800005
DA 2018-01-22
ER

PT J
AU Fujimoto, I
Matsumoto, T
De Silva, PRS
Kobayashi, M
Higashi, M
AF Fujimoto, Isao
Matsumoto, Tohru
De Silva, P. Ravindra S.
Kobayashi, Masakazu
Higashi, Masatake
TI Mimicking and Evaluating Human Motion to Improve the Imitation Skill of
Children with Autism Through a Robot
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Robot mimicry; Motion planning; Imitation skill; Children with autism;
Humanoid robot; Human-robot interaction
AB In this paper, we report techniques for mimicking and evaluating the human
motion in real time by a therapeutic humanoid robot to improve the imitation skill
of children with autism. For realizing the mimicking technique, we propose a method
of selecting key frames using a Q-Learning approach to remove significant noise.
Then, in order to evaluate human motion in real time, we introduce a cluster-based
framework combining a Gaussian mixture model and an Expectation-Maximization
algorithm, using parameters transformed by Principal Component Analysis.
Practical experiments have been performed to test the interaction of children with
autism with the robot and evaluate the possibility of improving their imitation
skills by training them to perform specific tasks through a robot.
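A compact sketch of a PCA plus Gaussian-mixture/EM evaluation pipeline, assuming scikit-learn is available and using synthetic stand-in motion data; the feature extraction and model sizes are illustrative choices, not those of the study:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def train_motion_model(motions, n_components=3, n_pcs=5, seed=0):
        # motions: (n_samples, n_features) flattened pose sequences;
        # reduce with PCA, then fit a Gaussian mixture with EM
        pca = PCA(n_components=n_pcs).fit(motions)
        gmm = GaussianMixture(n_components=n_components,
                              random_state=seed).fit(pca.transform(motions))
        return pca, gmm

    def motion_score(pca, gmm, motion):
        # average log-likelihood of one new motion under the trained model;
        # a higher value means the imitation is closer to the reference motions
        return gmm.score(pca.transform(motion.reshape(1, -1)))

    rng = np.random.default_rng(0)
    reference = rng.normal(size=(60, 30))       # synthetic stand-in motions
    pca, gmm = train_motion_model(reference)
    print(motion_score(pca, gmm, reference[0]))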
RP Fujimoto, I (reprint author), 2-12-1 Hisakata,Tempaku, Nagoya, Aichi 4688511,
Japan.
EM fujimoto@toyota-ti.ac.jp; ravi@icd.tutkie.tut.ac.jp
FU Ministry of Education, Science, Sports and Culture of Japan [S0801058]
FX The authors would like to thank Mr. Hideaki Naito, principal of Tempaku
School for the Disabled in Nagoya, for especially cooperating with our
experiments. This research was partly supported by the Grant-in-Aid for
Sustainable Research Center (S0801058) of the Ministry of Education,
Science, Sports and Culture of Japan.
CR Andry P., 2002, IEEE T SYST MAN CY A, P431
Bouman CA, 2005, CLUSTER UNSUPERVISED, P14
De Silva R. P., 2009, P 2009 IEEE RSJ INT, P694
Feil-Seifer D., 2008, 17 IEEE INT S ROB HU
Kozima H, 2009, INT J SOC ROBOT, V1, P3, DOI 10.1007/s12369-008-0009-8
Michaud F, 2005, IEEE T SYST MAN CY A, V35, P471, DOI 10.1109/TSMCA.2005.850596
Nadel J., 1999, IMITATION INFANCY, P209
Riley M, 2003, IEEE INT CONF ROBOT, P2368, DOI 10.1109/ROBOT.2003.1241947
Robins Ben, 2006, INTERACTION STUDIES, V7, P509, DOI DOI 10.1075/IS.7.3.16R0B
Robins B., 2009, 2 INT C ADV COMP HUM
Shiratori T, 2006, IEEE INT CONF ROBOT, P3654, DOI 10.1109/ROBOT.2006.1642260
Wing L., 1986, AUTISTIC SPECTRUM
Yang HD, 2007, IEEE T ROBOT, V23, P256, DOI 10.1109/TRO.2006.889491
NR 13
TC 21
Z9 21
U1 1
U2 10
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD NOV
PY 2011
VL 3
IS 4
BP 349
EP 357
DI 10.1007/s12369-011-0116-9
PG 9
WC Robotics
SC Robotics
GA V31OY
UT WOS:000208894100003
DA 2018-01-22
ER

PT J
AU Ishiguro, H
Minato, T
Yoshikawa, Y
Asada, M
AF Ishiguro, Hiroshi
Minato, Takashi
Yoshikawa, Yuichiro
Asada, Minoru
TI HUMANOID PLATFORMS FOR COGNITIVE DEVELOPMENTAL ROBOTICS
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Cognitive developmental robotics; platform development; Human-robot
interaction
ID IMITATION
AB One of the most promising approaches to understand human cognitive and
developmental mechanisms is a synthetic approach using humanoid robots; an approach
to understand the human cognitive functions by realizing them with the robots.
Humans are so complicated and it is difficult to mimic the well-developed human by
robotic technologies. Therefore, it is necessary to understand how humans develop
the complicated functions during the developmental process. We may be able to
develop infant functions and make them evolve by tracing the human developmental
process. This new study requires robot platforms that can mimic various aspects of
the human developmental process. This paper introduces a series of robot platforms
that we have developed for the studies with the synthetic approach.
C1 [Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn Sci, Toyonaka, Osaka 5608531,
Japan.
[Minato, Takashi; Yoshikawa, Yuichiro] Japan Sci & Technol Agcy, ERATO, Asada
Project, Suita, Osaka 5650871, Japan.
[Asada, Minoru] Osaka Univ, Grad Sch Engn, Suita, Osaka 5650871, Japan.
RP Ishiguro, H (reprint author), Osaka Univ, Grad Sch Engn Sci, 1-3 Machikaneyama,
Toyonaka, Osaka 5608531, Japan.
EM ishiguro@sys.es.osaka-u.ac.jp; minato@atr.jp;
yoshikawa@sys.es.osaka-u.ac.jp; asada@ams.eng.osaka-u.ac.jp
CR Asada M, 2009, IEEE T AUTON MENT DE, V1, P12, DOI 10.1109/TAMD.2009.2021702
Ben Amor H., 2007, INT C AD NAT COMP AL
Bluethmann W., 2004, IEEE RAS INT C HUM R, P402
Breazeal C, 2000, ADAPT BEHAV, V8, P49, DOI 10.1177/105971230000800104
Breazeal C., 2004, INT J C AUT AG MULT, P1030
Breazeal C., 2008, ACM SIGGRAPH 2008 NE
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
DallaLibera F., 2008, ROBOTICS AUTONOMOUS, V57, P846
DiSalvo C. F., 2002, C DES INT SYST PROC, P321
EDSINGERGONZALE.A, 2004, IEEE RAS INT C HUM R, P273
Ekman P., 1978, FACIAL ACTION CODING
Fujita M, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P960
HACKEL M, 2005, IEEE EDM ALB CAN IEE, P56
Hashimoto T., 2006, SICE ICASE INT JOINT, P5423
Hashimoto T, 2008, RECENT ADV MODELLING, P111
Hayashi M., 2007, IEEE RSJ INT C INT R, P3610
Heider F, 1958, PSYCHOL INTERPERSONA
Ikemoto Shuhei, 2008, Applied Bionics and Biomechanics, V5, P213, DOI
10.1080/11762320902808143
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Ishiguro H., 2005, INT S ROB RES
Jaeckel P, 2008, ROBOT AUTON SYST, V56, P1042, DOI 10.1016/j.robot.2008.09.002
Kagami S., 2001, INT C HUMANOID ROBOT, P253
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
KOZIMA H, 2002, SOCIALLY INTELLIGENT, P157
MacDorman K. F., 2005, COGSCI 2005 WORKSH S, P106, DOI DOI
10.1016/J.CHB.2012.11.021
Meltzoff AN, 1997, EARLY DEV PARENTING, V6, P179, DOI 10.1002/(SICI)1099-
0917(199709/12)6:3/4<179::AID-EDP157>3.0.CO;2-R
METTA G, 2001, IEEE RAS INT C HUM R, P33
Minato T, 2004, LECT NOTES COMPUT SC, V3029, P424
Minato T., 2009, WORKSH SYN INT APPR
Minato T., 2007, IEEE RAS INT C HUM R, P557
Miura K, 2007, ADV ROBOTICS, V21, P1583
Miyashita T., 2005, INT S ROB RES
Mizuuchi I., 2007, IEEE RAS INT C HUM R
Noda T., 2010, ANN M JAP NEUR SOC
ODASHIMA T, 2006, VID P IEEE RSJ INT C
Ohmura Y, 2006, IEEE INT CONF ROBOT, P1348, DOI 10.1109/ROBOT.2006.1641896
Park I.-W., 2005, IEEE RAS INT C HUM R, P321
Sakagami Y., 2002, IEEE RSJ INT C INT R, P2478, DOI DOI
10.1109/IRDS.2002.1041641
Sakamoto D., 2007, ACM IEEE INT C HUM R
Scassellati B, 2002, AUTON ROBOT, V12, P13, DOI 10.1023/A:1013298507114
Shimada M., 2009, IEEE INT S ROB HUM I, P1119
Stiehl WD, 2005, IEEE INT WORKSH ROB, P408
Sugaiwa T., 2008, IEEE RAS INT C HUM R, P481
Sugiyama O., 2009, INT C SOC ROB, P90
Sun G., 2004, IEEE RAS INT C HUM R
VERNON D., 2007, IEEE INT C DEV LEARN
Yoshikawa Y., 2009, WORKSH SYN INT APPR
Zecca M., 2008, IEEE RAS INT C HUM R, P487
NR 48
TC 7
Z9 7
U1 2
U2 5
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD SEP
PY 2011
VL 8
IS 3
SI SI
BP 391
EP 418
DI 10.1142/S0219843611002514
PG 28
WC Robotics
SC Robotics
GA 861FY
UT WOS:000298006000001
DA 2018-01-22
ER

PT J
AU Ayaz, Y
Konno, A
Munawar, K
Tsujita, T
Komizunai, S
Uchiyama, M
AF Ayaz, Yasar
Konno, Atsushi
Munawar, Khalid
Tsujita, Teppei
Komizunai, Shunsuke
Uchiyama, Masaru
TI A Human-Like Approach Towards Humanoid Robot Footstep Planning
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Footstep Planning; Stepping Over Obstacles
AB Humanoid robots possess the unique ability to cross obstacles by stepping over or
upon them. However, conventional 2D methods for robot navigation fail to exploit
this ability and thus design trajectories only by circumventing obstacles.
Recently, global algorithms have been presented that take into account this feature
of humanoids. However, due to high computational complexity, most of them are very
time-consuming. In this paper, we present a novel approach to footstep planning in
obstacle cluttered environments that employs a human-like strategy to terrain
traversal. A design methodology for the stepping-over motion used with this
algorithm is also presented. The paper puts forth simulation results of footstep
planning as well as experimental results for the stepping-over trajectory designed
for hardware execution of the footstep plan.
C1 [Ayaz, Yasar; Konno, Atsushi; Tsujita, Teppei; Komizunai, Shunsuke; Uchiyama,
Masaru] Tohoku Univ, Dept Aerosp Engn, Grad Sch Engn, Sendai, Miyagi 980, Japan.
[Ayaz, Yasar] NUST, SMME, Dept Robot & Artificial Intelligence, Islamabad,
Pakistan.
[Munawar, Khalid] NUST, Dept Elect Engn, Coll Elect & Mech Engn, Islamabad,
Pakistan.
RP Ayaz, Y (reprint author), Tohoku Univ, Dept Aerosp Engn, Grad Sch Engn, Sendai,
Miyagi 980, Japan.
EM yasar@space.mech.tohoku.ac.jp
RI Munawar, Khalid/L-4828-2013
CR AYAZ Y, 2006, P IEEE RSJ INT C INT, P5490
CHESTNUTT J, 2005, P IEEE INT C ROB AUT, P629
CHESTNUTT J, 2004, P IEEE RAS INT C HUM
Cormen T., 1994, INTRO ALGORITHMS
Guan Y., 2005, P IEEE RSJ INT C INT, P364
Guan YH, 2006, IEEE T ROBOT, V22, P958, DOI 10.1109/TRO.2006.878962
Huang Q, 2001, IEEE T ROBOTIC AUTOM, V17, P280, DOI 10.1109/70.938385
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kajita S., 2006, P IEEE RSJ INT C INT, P2993
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Konno A, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P1565, DOI 10.1109/IROS.2000.895196
Kuffner J, 2005, SPR TRA ADV ROBOT, V15, P365
Kuffner JJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P500, DOI
10.1109/IROS.2001.973406
KUFFNER JJ, 2003, P IEEE INT C ROB AUT, P932
Latombe J.-C., 1991, ROBOT MOTION PLANNIN
MCGHEE RB, 1979, IEEE T SYST MAN CYB, V9, P176, DOI 10.1109/TSMC.1979.4310180
Michel P, 2006, IEEE INT CONF ROBOT, P3089, DOI 10.1109/ROBOT.2006.1642171
Tsujita T., 2010, IMPACT MOTION GENERA, P175
VERRELST B, 2006, P IEEE RAS INT C HUM, P117
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Yagi M, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P375, DOI 10.1109/ROBOT.1999.770007
NR 21
TC 4
Z9 4
U1 0
U2 1
PU SAGE PUBLICATIONS INC
PI THOUSAND OAKS
PA 2455 TELLER RD, THOUSAND OAKS, CA 91320 USA
SN 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD SEP
PY 2011
VL 8
IS 4
BP 98
EP 109
DI 10.5772/10671
PG 12
WC Robotics
SC Robotics
GA 837IU
UT WOS:000296190100011
OA gold
DA 2018-01-22
ER

PT J
AU Babic, J
Hale, JG
Oztop, E
AF Babic, Jan
Hale, Joshua G.
Oztop, Erhan
TI Human sensorimotor learning for humanoid robot skill synthesis
SO ADAPTIVE BEHAVIOR
LA English
DT Article
DE Sensorimotor learning; humanoid robot; balanced reaching; posture
control
ID MODEL
AB Humans are very skilled at learning new control tasks, and in particular, the
use of novel tools. In this article we propose a paradigm that utilizes this
sensorimotor learning capacity to obtain robot behaviors, which would otherwise
require manual programming by experts. The concept is to consider the target robot
platform as a tool to be controlled intuitively by a human. The human is therefore
provided with an interface designed to make the control of the robot intuitive, and
learns to perform a given task using the robot. This is akin to the stage where a
beginner learns to drive a car. After human learning, the skilled control of the
robot is used to build an autonomous controller so that the robot can perform the
task without human guidance. We demonstrate the feasibility of this proposal for
humanoid robot skill synthesis by showing how a statically stable reaching skill
can be obtained by means of this framework. In addition, we analyze the feedback
interface component of this paradigm by examining a dynamics task, in which a human
learns to use the motion of the body to control the posture of an inverted pendulum
that approximates a humanoid robot, so that it stays upright.
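One generic way to turn logged demonstrations from the skilled-human phase into an autonomous controller is to regress the recorded commands on the robot state; the sketch below uses scikit-learn's MLPRegressor on synthetic data as an assumed stand-in for the learning machinery actually used in the article:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # synthetic stand-in for logs recorded while the skilled human controls
    # the robot through the interface: robot state -> human command
    rng = np.random.default_rng(0)
    states = rng.uniform(-1.0, 1.0, size=(500, 4))
    commands = np.tanh(states @ np.array([0.7, -0.4, 0.2, 0.5]))   # toy "skill"

    # fit a generic function approximator to the demonstrations so the robot
    # can later reproduce the behavior without the human in the loop
    policy = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                          random_state=0).fit(states, commands)
    print(policy.predict(states[:3]))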
C1 [Babic, Jan] Jozef Stefan Inst, SL-1000 Ljubljana, Slovenia.
[Hale, Joshua G.] Cyberdyne Inc, Interact Devices Dynam Simulat & Comp Graph,
Ibaraki, Japan.
[Oztop, Erhan] ATR Cognit Mech Labs, Brain ICT Lab, Commun & Cognit Cybernet
Grp, NICT,Adv ICT Res Inst, Kyoto, Japan.
[Oztop, Erhan] Osaka Univ, Suita, Osaka 565, Japan.
RP Babic, J (reprint author), Jozef Stefan Inst, Cesta 39, SL-1000 Ljubljana,
Slovenia.
EM jan.babic@ijs.si; joshua_hale@cyberdyne.jp; erhan@atr.jp
RI Oztop, Erhan/K-8111-2012
OI Babic, Jan/0000-0002-1870-8264
FU Japan Society for Promotion of Science; Slovenian Ministry of Higher
Education, Science and Technology; Ministry of Education, Culture,
Sports, Science and Technology, Japan
FX This work was supported by the Japan Society for Promotion of Science
and the Slovenian Ministry of Higher Education, Science and Technology
(J. Babic); and the Global COE Program Center of Human-Friendly Robotics
Based on Cognitive Neuroscience at the Ministry of Education, Culture,
Sports, Science and Technology, Japan (E. Oztop).
CR Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Bentivegna DC, 2004, ROBOT AUTON SYST, V47, P163, DOI
10.1016/j.robot.2004.03.010
Billard A, 2004, ROBOT AUTON SYST, V47, P65, DOI 10.1016/j.robot.2004.03.001
Billard A, 1999, ADAPT BEHAV, V7, P35, DOI 10.1177/105971239900700103
BOONE DC, 1979, J BONE JOINT SURG AM, V61, P756, DOI 10.2106/00004623-197961050-
00017
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Chen JYC, 2007, IEEE T SYST MAN CY C, V37, P1231, DOI 10.1109/TSMCC.2007.905819
FITZPATRICK RC, 1992, J PHYSIOL-LONDON, V458, P69, DOI
10.1113/jphysiol.1992.sp019406
Goldenberg G, 1998, NEUROPSYCHOLOGIA, V36, P581, DOI 10.1016/S0028-
3932(97)00165-6
IMAMIZU H, 2010, IEICE T COMMUN, V91, P2102
Ito M, 2004, ADAPT BEHAV, V12, P93, DOI 10.1177/105971230401200202
Kushida D., 2001, Artificial Life and Robotics, V5, P26, DOI 10.1007/BF02481317
OZTOP E, 2007, IEEE INT C ROB AUT R
Oztop E., 2006, IEEE RAS INT C HUM R
Park J, 1991, NEURAL COMPUT, V3, P246, DOI 10.1162/neco.1991.3.2.246
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Shadmehr R., 2005, COMPUTATIONAL NEUROB
SHIDARA M, 1993, NATURE, V365, P50, DOI 10.1038/365050a0
Steinfeld A, 2006, P 1 ACM SIGCHI SIGAR, P33, DOI 10.1145/1121241.1121249
Ude A, 2010, IEEE T ROBOT, V26, P800, DOI 10.1109/TRO.2010.2065430
Wolpert DM, 2000, NAT NEUROSCI, V3, P1212, DOI 10.1038/81497
Wolpert DM, 2001, TRENDS COGN SCI, V5, P487, DOI 10.1016/S1364-6613(00)01773-3
NR 22
TC 12
Z9 12
U1 1
U2 7
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 1059-7123
EI 1741-2633
J9 ADAPT BEHAV
JI Adapt. Behav.
PD AUG
PY 2011
VL 19
IS 4
BP 250
EP 263
DI 10.1177/1059712311411112
PG 14
WC Computer Science, Artificial Intelligence; Psychology, Experimental;
Social Sciences, Interdisciplinary
SC Computer Science; Psychology; Social Sciences - Other Topics
GA 810VZ
UT WOS:000294157400002
DA 2018-01-22
ER

PT J
AU Suwanratchatamanee, K
Matsumoto, M
Hashimoto, S
AF Suwanratchatamanee, Kitti
Matsumoto, Mitsuharu
Hashimoto, Shuji
TI Haptic Sensing Foot System for Humanoid Robot and Ground Recognition
With One-Leg Balance
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Ground floor recognition; haptic sensor; robots; tactile sensor;
human-robot interactions
ID TACTILE SENSOR
AB This paper presents a haptic sensing foot system for a humanoid robot. Two
different kinds of implementations are investigated: one is an active tactile
sensing technique to recognize a contacting ground slope. The other is to balance
the robot body with one leg for human-robot interaction. The proposed sensors are
implemented on two robotic feet. Each sensing unit on each foot consists of three
thin sheets of force sensitive resistors arranged triangularly with the peripheral
circuits. The research objective is to produce an artifact which can be operated in
a natural and intuitive manner by utilizing the control of a foot pose to keep the
direction of the foot normal to the ground surface. Throughout these works, we aim
to realize the tactile sensing foot to detect the ground slope for natural foot
posture control in order to assist the biped walking robot to balance its body on
various types of ground surfaces. In these applications, the information about the
ground floor or orientation is not required in advance.
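A minimal sketch of driving the foot toward equal loads on three triangularly arranged force-sensitive resistors, which keeps the sole parallel to the local ground; the sensor coordinates, gain, and sign conventions are assumptions, not the paper's values:

    import numpy as np

    # made-up positions of the three force-sensitive resistors in the foot
    # frame [m] (x forward, y to the left); only the triangular layout
    # follows the paper
    P = np.array([[ 0.08,  0.00],     # toe sensor
                  [-0.05,  0.04],     # heel-left sensor
                  [-0.05, -0.04]])    # heel-right sensor

    def ankle_correction(forces, gain=0.002):
        # roll/pitch increments that drive the three sensor loads toward
        # equality; equal loads mean the sole lies flat on the local surface
        e = np.asarray(forces, dtype=float)
        e = e - e.mean()                        # per-sensor load imbalance
        d_pitch = -gain * np.dot(e, P[:, 0])    # overloaded toe -> tilt back
        d_roll = gain * np.dot(e, P[:, 1])      # overloaded left edge -> roll
        return d_roll, d_pitch

    print(ankle_correction([12.0, 8.0, 8.0]))   # toe overloaded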
C1 [Suwanratchatamanee, Kitti; Hashimoto, Shuji] Waseda Univ, Grad Sch Adv Sci &
Engn, Shinjuku Ku, Tokyo 1698555, Japan.
[Matsumoto, Mitsuharu] Univ Electrocommun, Educ & Res Ctr Frontier Sci, Tokyo
1828585, Japan.
RP Suwanratchatamanee, K (reprint author), Waseda Univ, Grad Sch Adv Sci & Engn,
Shinjuku Ku, Tokyo 1698555, Japan.
EM kittiene@shalab.phys.waseda.ac.jp; mitsuharu.matsumoto@ieee.org;
shuji@waseda.jp
FU Ministry of Education, Culture, Sports, Science, and Technology;
Research Institute for Science and Engineering, Waseda University; CREST
of JST; Gifu Prefecture; Japan Society for the promotion of Science for
Young Scientists [DC2: 20-56621]; Support Center for Advanced
Telecommunications Technology Research; Foundation for the Fusion of
Science and Technology; Ministry of Education, Science, Sports, and
Culture [20700168]
FX This work was supported in part by "Global Robot Academia" Grant-in-Aid
for Global COE Program by the Ministry of Education, Culture, Sports,
Science, and Technology; by "Fundamental Study for Intelligent Machine
to Coexist with Nature," Research Institute for Science and Engineering,
Waseda University; by CREST project "Foundation of technology supporting
the creation of digital media contents" of JST; by the Grant-in-Aid for
the WABOT-HOUSE Project by Gifu Prefecture; by the research Fellowships
of the Japan Society for the promotion of Science for Young Scientists,
DC2: 20-56621; by the research grant of Support Center for Advanced
Telecommunications Technology Research; by the research grant of
Foundation for the Fusion of Science and Technology; and by the Ministry
of Education, Science, Sports, and Culture, Grant-in-Aid for Young
Scientists (B), 20700168, 2008. The research for this paper was
conducted as part of the humanoid project at the Humanoid Robotics
Institute, Waseda University.
CR Berenouer FJ, 2008, IEEE T IND ELECTRON, V55, P3281, DOI 10.1109/TIE.2008.927982
Castelli F, 2002, IEEE T IND APPL, V38, P85, DOI 10.1109/28.980361
Dario P., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P883, DOI 10.1109/IROS.1990.262509
DARIO P, 1985, IEEE SPECTRUM, V22, P46, DOI 10.1109/MSPEC.1985.6370785
Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
HIKIJI R, 2000, P INT WORKSH HAPT HU, P113
HIRAISHI H, 1988, P ANN C IND EL SOC, P982
Iwata H, 2005, IEEE T IND ELECTRON, V52, P1468, DOI 10.1109/TIE.2005.858739
Jensen B, 2005, IEEE T IND ELECTRON, V52, P1530, DOI 10.1109/TIE.2005.858730
Jockusch J, 1997, IEEE INT CONF ROBOT, P3080, DOI 10.1109/ROBOT.1997.606756
KIKUUWE R, 2003, P 2003 IEEE INT C RO, P1539
Krishna GM, 2004, IEEE SENS J, V4, P691, DOI 10.1106/JSEN.2004.833505
Loffler K, 2004, IEEE T IND ELECTRON, V51, P972, DOI 10.1109/TIE.2004.834948
Maekawa H, 1996, IEEE INT CONF ROBOT, P2462, DOI 10.1109/ROBOT.1996.506533
MAENO T, 2003, P 2003 IEEE INT C RO, P1533
MATSUNAGA T, 2005, P IEEE ANN INT C MIC, P88
Minakata H, 2008, IEEE T IND ELECTRON, V55, P1271, DOI 10.1109/TIE.2007.911919
Ohashi E, 2007, IEEE T IND ELECTRON, V54, P1632, DOI 10.1109/TIE.2007.894728
RUCCI M, 1993, P INT ROB SYST, P20
SUWANRATCHATAMA.K, 2008, P 7 FRANC JPN 5 EUR
SUWANRATCHATAMA.K, 2009, P 5 IEEE INT C MECH
SUWANRATCHATAMA.K, 2007, P 33 IEEE ANN INT C, P245
SUWANRATCHATAMA.K, 2008, P 34 IEEE ANN C IND, P2617
SUWANRATCHATAMA.K, 2008, P 17 IEEE INT S ROB, P683
Suwanratchatamanee K., 2009, P 2 IEEE INT C HUM S, P68
SUWANRATCHATAMA.K, 2008, P IEEE INT STUD EXP
Suwanratchatamanee K, 2008, ADV ROBOTICS, V22, P867, DOI
10.1163/156855308X314551
*TEXSC INC, 2009, FLEXIFORCE US MAN
TREMBLAY MR, 1993, PROCEEDINGS : IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-3, P429, DOI 10.1109/ROBOT.1993.292018
Tsuji T, 2009, IEEE T IND ELECTRON, V56, P1375, DOI 10.1109/TIE.2009.2014748
Vadakkepat P, 2008, IEEE T IND ELECTRON, V55, P1385, DOI 10.1109/TIE.2007.903993
Yamada Y, 2005, IEEE T IND ELECTRON, V52, P960, DOI 10.1109/TIE.2005.851654
YOA HY, 2004, P 7 INT C MED IM COM, P89
NR 33
TC 25
Z9 25
U1 0
U2 14
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD AUG
PY 2011
VL 58
IS 8
BP 3174
EP 3186
DI 10.1109/TIE.2009.2030217
PG 13
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 804WT
UT WOS:000293685700008
DA 2018-01-22
ER

PT J
AU Kanoun, O
Lamiraux, F
Wieber, PB
AF Kanoun, Oussama
Lamiraux, Florent
Wieber, Pierre-Brice
TI Kinematic Control of Redundant Manipulators: Generalizing the
Task-Priority Framework to Inequality Task
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Control; hierarchy; humanoid robot; inequality constraints; inverse
kinematics; redundancy; task priority
AB Redundant mechanical systems like humanoid robots are designed to fulfill
multiple tasks at a time. A task, in velocity-resolved inverse kinematics, is a
desired value for a function of the robot configuration that can be regulated with
an ordinary differential equation (ODE). When facing simultaneous tasks, the
corresponding equations can be grouped in a single system or, better, sorted in
priority and solved each in the solutions set of higher priority tasks. This
elegant framework for hierarchical task regulation has been implemented as a
sequence of least-squares problems. Its limitation lies in the handling of
inequality constraints, which are usually transformed into more restrictive
equality constraints through potential fields. In this paper, we propose a new
prioritized task-regulation framework based on a sequence of quadratic programs
(QP) that removes the limitation. At the basis of the proposed algorithm, there is
a study of the optimal sets resulting from the sequence of QPs. The algorithm is
implemented and illustrated in simulation on the humanoid robot HRP-2.
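A toy two-level version of such a QP cascade, solved here with SciPy's SLSQP: the first stage minimizes the slack of a priority-1 inequality task, and the second stage optimizes a priority-2 task without degrading that slack; the task matrices are made up:

    import numpy as np
    from scipy.optimize import minimize

    # redundant 3-DOF joint-velocity vector dq; priority 1: J1 dq <= b1
    # (handled through a slack minimized first); priority 2: J2 dq = b2
    J1, b1 = np.array([[1.0, 1.0, 0.0]]), np.array([0.2])
    J2, b2 = np.array([[1.0, 0.0, 1.0],
                       [0.0, 1.0, 1.0]]), np.array([0.5, -0.3])

    slack = lambda dq: np.maximum(J1 @ dq - b1, 0.0).sum()

    # stage 1: minimize the priority-1 slack (tiny regularization keeps it bounded)
    s1 = minimize(lambda dq: slack(dq) + 1e-6 * dq @ dq,
                  np.zeros(3), method="SLSQP")
    slack_opt = slack(s1.x)

    # stage 2: minimize the priority-2 error subject to not increasing the
    # priority-1 slack beyond the value obtained in stage 1
    s2 = minimize(lambda dq: np.sum((J2 @ dq - b2) ** 2), s1.x, method="SLSQP",
                  constraints=[{"type": "ineq",
                                "fun": lambda dq: slack_opt + 1e-9 - slack(dq)}])
    print(s2.x)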
C1 [Kanoun, Oussama] Univ Tokyo, Nakamura Lab, Dept Mechanoinformat, Bunkyo Ku,
Tokyo 1138656, Japan.
[Lamiraux, Florent] Univ Toulouse, LAAS CNRS, F-31077 Toulouse, France.
[Lamiraux, Florent] Univ Toulouse, JRL, F-31077 Toulouse, France.
[Wieber, Pierre-Brice] INRIA Grenoble, F-38334 Saint Ismier, France.
[Wieber, Pierre-Brice] JRL, F-38334 Saint Ismier, France.
RP Kanoun, O (reprint author), Univ Tokyo, Nakamura Lab, Dept Mechanoinformat,
Bunkyo Ku, Tokyo 1138656, Japan.
EM okanoun@ynl.t.u-tokyo.ac.jp; florent@laas.fr;
pierre-brice.wieber@inria.fr
FU EADS Corporate Research Foundation; French Agence Nationale de la
Recherche [ANR-08-JCJC-0075-01]; European Community; JSPS
FX Manuscript received May 25, 2010; revised January 6, 2011; accepted
April 10, 2011. Date of publication May 12, 2011; date of current
version August 10, 2011. This paper was recommended for publication by
Associate Editor M. Vendittelli and Editor K. Lynch upon evaluation of
the reviewers' comments. A short version of this paper appeared in the
2009 IEEE International Conference on Robotics and Automation, Kobe,
Japan. This work has been conducted as a joint research in
AIST/IS-CNRS/ST2I Joint Japanese-French Robotics Laboratory (JRL). This
work was supported in part by the EADS Corporate Research Foundation and
in part by the French Agence Nationale de la Recherche under Grant
ANR-08-JCJC-0075-01. The work of P.-B. Wieber was supported by a Marie
Curie International Outgoing Fellowship within the 7th European
Community Framework Programme. The work of O. Kanoun was supported in
part by the JSPS Postdoctoral Fellowship for Foreign Researchers.
CR Antonelli G., 2009, P IEEE RSJ INT C INT, P5892
Bjorck A, 1996, NUMERICAL METHODS LE
Decre W., 2009, P IEEE INT C ROB AUT, P964
Escande A., 2010, P IEEE INT C ROB AUT, P3733
Faverjon B, 1987, P IEEE INT C ROB AUT, V4, P1152
FIACCO A, 1987, CLASSICS APPL MATH
Fletcher R., 1987, PRACTICAL METHODS OP
Khatib O, 1986, ADAPTIVE LEARNING SY, P367
KOREN Y, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-3, P1398, DOI 10.1109/ROBOT.1991.131810
LIEGEOIS A, 1977, IEEE T SYST MAN CYB, V7, P868
Mansard N, 2007, IEEE T ROBOT, V23, P60, DOI 10.1109/TRO.2006.889487
Mansard N, 2009, IEEE T ROBOT, V25, P670, DOI 10.1109/TRO.2009.2020345
NAKAMURA Y, 1986, J DYN SYST-T ASME, V108, P163, DOI 10.1115/1.3143764
Nakamura Y., 1990, ADV ROBOTICS REDUNDA
PEINADO M, 2005, P EUR 2005 SHORT PRE, P93
Schittkowski K., 1986, QLD FORTRAN CODE QUA
SENTIS L, 2005, P IEEE INT C ROB AUT, P1718
Siciliano B., 1991, P IEEE INT C ADV ROB, P1211
NR 18
TC 97
Z9 98
U1 2
U2 20
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2011
VL 27
IS 4
BP 785
EP 792
DI 10.1109/TRO.2011.2142450
PG 8
WC Robotics
SC Robotics
GA 805UB
UT WOS:000293753400013
DA 2018-01-22
ER

PT J
AU Yussof, H
Capi, G
Nasu, Y
Yamano, M
Ohka, M
AF Yussof, Hanafiah
Capi, Genci
Nasu, Yasuo
Yamano, Mitsuhiro
Ohka, Masahiro
TI A CORBA-Based Control Architecture for Real-Time Teleoperation Tasks in
a Developmental Humanoid Robot
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Humanoid robot; Common Object Request Broker Architecture (CORBA);
teleoperation; Humanoid Robot Control Architecture (HRCA); 3D mouse
system; optimal gait; kinematics
ID TORQUE-CHANGE MODEL; GENETIC ALGORITHMS
AB This paper presents the development of a new Humanoid Robot Control Architecture
(HRCA) platform based on Common Object Request Broker Architecture (CORBA) in a
developmental biped humanoid robot for real-time teleoperation tasks. The objective
is to make the control platform open for collaborative teleoperation research in
humanoid robotics via the internet. Meanwhile, to generate optimal trajectory
generation in bipedal walk, we proposed a real time generation of optimal gait by
using Genetic Algorithms (GA) to minimize the energy for humanoid robot gait. In
addition, we proposed simplification of kinematical solutions to generate
controlled trajectories of humanoid robot legs in teleoperation tasks. The proposed
control systems and strategies were evaluated in teleoperation experiments between
Australia and Japan using humanoid robot Bonten-Maru. Additionally, we have
developed a user-friendly Virtual Reality (VR) user interface that is composed of
an ultrasonic 3D mouse system and a Head Mounted Display (HMD) for working coexistence
of human and humanoid robot in teleoperation tasks. The teleoperation experiments
show good performance of the proposed system and control, and also verify the
effective working coexistence of the human and the humanoid robot.
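A minimal GA sketch of the gait-energy minimization idea, with a made-up energy model and simple truncation selection, uniform crossover, and Gaussian mutation; it is not the GA formulation or cost function of the paper:

    import numpy as np

    def gait_energy(params):
        # made-up stand-in for the energy cost of one walking cycle;
        # params = [step_length, step_height, cycle_time]
        step, height, T = params
        return (0.5 * height ** 2 + 0.2 * step ** 2) / T + 0.05 * T

    def ga_minimize(cost, bounds, pop=40, gens=60, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        population = rng.uniform(lo, hi, size=(pop, len(bounds)))
        for _ in range(gens):
            fitness = np.array([cost(ind) for ind in population])
            parents = population[np.argsort(fitness)[:pop // 2]]   # selection
            kids = []
            for _ in range(pop - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(len(bounds)) < 0.5, a, b)  # crossover
                child = np.clip(child + rng.normal(0.0, 0.02, len(bounds)), lo, hi)
                kids.append(child)                                     # mutation
            population = np.vstack([parents, kids])
        return population[np.argmin([cost(ind) for ind in population])]

    print(ga_minimize(gait_energy, bounds=[(0.1, 0.4), (0.02, 0.10), (0.5, 2.0)]))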
C1 [Yussof, Hanafiah] Univ Teknologi MARA, Fac Mech Engn, Shah Alam, Malaysia.
[Capi, Genci] Toyama Univ, Fac Engn, Toyama, Japan.
[Nasu, Yasuo; Yamano, Mitsuhiro] Yamagata Univ, Fac Engn, Yamagata 990, Japan.
[Yussof, Hanafiah; Ohka, Masahiro] Nagoya Univ, Grad Sch Informat Sci, Nagoya,
Aichi 4648601, Japan.
RP Yussof, H (reprint author), Univ Teknologi MARA, Fac Mech Engn, Shah Alam,
Malaysia.
RI Yussof, Hanafiah/D-6413-2012
FU Japan Ministry of Education, Culture, Sports, Science and Technology
[18656079]
FX This report is a compilation of works on humanoid robot projects
conducted in Nasu Laboratory at Yamagata University, Japan. The authors
acknowledge all Nasu Laboratory members for their efforts and
contributions. A part of this research was supported by fiscal 2006
grants from the Japan Ministry of Education, Culture, Sports, Science
and Technology (Grant-in-Aid for Scientific Research in Exploratory
Research, No. 18656079).
CR Booch G., 1999, UNIFIED MODELING LAN
Capi G, 2001, ADV ROBOTICS, V15, P675, DOI 10.1163/156855301317035197
Capi G., 2002, INFORMATION PROCESSI, V43, P1039
Denavit J., 1995, J APPL MECH, V77, P215
Fowler M., 1997, UML DISTILLED APPL S
HANAFIAH Y, 2006, J SIMULATION SYSTEM, V7, P55
Harrison T. H., 1997, P OOPSLA 97
Hasunuma H, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P2246, DOI 10.1109/ROBOT.2002.1013566
Herrera F, 1998, ARTIF INTELL REV, V12, P265, DOI 10.1023/A:1006504901164
Kaneko S., 2005, WSEAS Transactions on Systems, V4, P561
Moubray T. J., 1997, INSIDE CORBA DISTRIB
Nakano E, 1999, J NEUROPHYSIOL, V81, P2140
Nasu Y., 2004, B YAMAGATA UNIVIVERS, V28
Nasu Y., 2003, P 7 WSEAS INT C SYST, P177
Neo E. S., 2002, P INT C INT ROB SYST
Pancerella C. M., 1996, P PLAG PLAY SOFTW AG
Takeda K, 2001, IND ROBOT, V28, P242, DOI 10.1108/01439910110389407
UNO Y, 1989, BIOL CYBERN, V61, P89
Vinoski S, 1997, IEEE COMMUNICATIONS, V14, P1
Whiteside R. A., 1997, P HAW INT C SYST SCI
YOKOI K, 2003, P 2003 IEEE RSJ INT
Yu Z., 2001, Journal of the Japan Society of Precision Engineering, V67, P764
NR 22
TC 1
Z9 1
U1 0
U2 6
PU INTECH EUROPE
PI RIJEKA
PA JANEZA TRDINE 9, RIJEKA, 51000, CROATIA
SN 1729-8806
EI 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD JUN
PY 2011
VL 8
IS 2
BP 29
EP 48
PG 20
WC Robotics
SC Robotics
GA 837IG
UT WOS:000296187400004
DA 2018-01-22
ER

PT J
AU Kanoun, O
Laumond, JP
Yoshida, E
AF Kanoun, Oussama
Laumond, Jean-Paul
Yoshida, Eiichi
TI Planning foot placements for a humanoid robot: A problem of inverse
kinematics
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE Footstep planning; motion deformation; inverse kinematics
ID MOTION; HRP-2
AB We present a novel approach to plan foot placements for a humanoid robot
according to kinematic tasks. In this approach, the foot placements are determined
by the continuous deformation of a robot motion including a locomotion phase
according to the desired tasks. We propose to represent the motion by a virtual
kinematic tree composed of a kinematic model of the robot and articulated foot
placements. This representation allows us to formulate the motion deformation
problem as a classical inverse kinematics problem on a kinematic tree. We first
provide details of the basic scheme where the number of footsteps is given in
advance and illustrate it with scenarios on the robot HRP-2. Then we propose a
general criterion and an algorithm to adapt the number of footsteps progressively
to the kinematic goal. The limits and possible extensions of this approach are
discussed last.
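A standard damped-least-squares inverse-kinematics step on a toy planar chain, illustrating the kind of IK update that, in the paper, is applied to a virtual kinematic tree whose configuration also contains the articulated foot placements; the chain and target here are arbitrary:

    import numpy as np

    def dls_ik_step(q, jacobian_fn, task_error, damping=0.05):
        # one damped least-squares update; in the paper the configuration
        # vector would also include the foot-placement variables
        J = jacobian_fn(q)
        JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
        return q + J.T @ np.linalg.solve(JJt, task_error)

    # toy 3-link planar chain whose end point must reach a target
    L = np.array([0.4, 0.4, 0.3])
    def fk(q):
        angles = np.cumsum(q)
        return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])
    def jac(q):
        angles = np.cumsum(q)
        # column j: derivative of the end point w.r.t. joint j
        return np.array([[-np.sum((L * np.sin(angles))[j:]) for j in range(3)],
                         [ np.sum((L * np.cos(angles))[j:]) for j in range(3)]])

    q, target = np.zeros(3), np.array([0.6, 0.5])
    for _ in range(100):
        q = dls_ik_step(q, jac, target - fk(q))
    print(fk(q))   # converges close to the target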
C1 [Kanoun, Oussama; Laumond, Jean-Paul] LAAS CNRS, F-31077 Toulouse, France.
[Yoshida, Eiichi] AIST, Tsukuba, Ibaraki, Japan.
RP Laumond, JP (reprint author), LAAS CNRS, 7 Ave Colonel Roche, F-31077 Toulouse,
France.
EM jpl@laas.fr
RI Yoshida, Eiichi/M-3756-2016
OI Yoshida, Eiichi/0000-0002-3077-6964
FU JST-CNRS; ANR; FUI; [21300078]
FX This work has been partly supported by Grant-in-Aid for Scientific
Research (B) 21300078 and JST-CNRS Strategic Japanese-French Cooperative
Program "Robot motion planning and execution through online information
structuring in real-world environment". This work has also been
supported by the ANR Project Locanthrope and by the FUI Project Romeo.
CR AKACHI K, 2006, NIPPON ROBOTTO GAKKA, P24
Baerlocher P, 2004, VISUAL COMPUT, V20, P402, DOI 10.1007/s00371-004-0244-4
BARRAQUAND J, 1992, IEEE T SYST MAN CYB, V22, P224, DOI 10.1109/21.148426
BARRAQUAND J, 1991, INT J ROBOT RES, V10, P628, DOI 10.1177/027836499101000604
Chestnutt J., 2009, IEEE INT C INT ROB S, P3519
DIANKOV R, 2008, ROB SCI SYST C
Escande A, 2009, SPR TRA ADV ROBOT, V54, P293
Faverjon B., 1987, IEEE INT C ROB AUT, V4, P1152
FIACCO A, 1987, CLASSICS APPL MATH
Kajita S., 2003, IEEE INT C ROB AUT I, V2, P1620
KANEHIRO F, 2008, ROB SCI SYST C
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kanoun O., 2009, IEEE INT C ROB AUT, P2939
Khatib O., 2004, INT J HUM ROBOT, V1, P29, DOI [10.1142/S0219843604000058, DOI
10.1142/S0219843604000058]
KUFFNER J, 2001, IEEE INT C INT ROB S
KUFFNER J, 2003, IEEE INT C ROB AUT, V1, P932
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
LIEGEOIS A, 1977, IEEE T SYST MAN CYB, V7, P868
Mansard N, 2007, IEEE T ROBOT, V23, P60, DOI 10.1109/TRO.2006.889487
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
Sakagami Y, 2002, IEEE RSJ INT C INT R, V3
Siciliano B., 1991, IEEE INT C ADV ROB I, P1211
Sreenivasa MN, 2009, IEEE INT C INT ROB S, P4451
YOSHIDA E, 2007, IEEE RAS INT C HUM R, P89
YOSHIDA E, 2006, IEEE RAS INT C HUM R, P208
Yoshida E, 2008, IEEE T ROBOT, V24, P1186, DOI 10.1109/TRO.2008.2002312
NR 26
TC 22
Z9 23
U1 0
U2 7
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD APR
PY 2011
VL 30
IS 4
BP 476
EP 485
DI 10.1177/0278364910371238
PG 10
WC Robotics
SC Robotics
GA 743EW
UT WOS:000289001500005
DA 2018-01-22
ER

PT J
AU Sato, T
Sakaino, S
Ohashi, E
Ohnishi, K
AF Sato, Tomoya
Sakaino, Sho
Ohashi, Eijiro
Ohnishi, Kouhei
TI Walking Trajectory Planning on Stairs Using Virtual Slope for Biped
Robots
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Biped robot; stairs; trajectory planning; walking; zero-moment point
(ZMP)
ID HUMANOID ROBOT; PATTERN GENERATION; GAIT SYNTHESIS; MOTION CONTROL;
LEGGED ROBOTS; SURFACE; LOCOMOTION; UNEVEN; FLOOR; ZMP
AB In this paper, a "virtual slope method" for walking trajectory planning on
stairs for biped robots is proposed. In conventional methods for walking on stairs,
there are two problems about the zero-moment point (ZMP). One is a ZMP equation
problem, and the other is a ZMP definition problem in a double-support phase.
First, a ZMP equation on stairs is different from that on flat ground. Therefore,
the same trajectory generation as flat ground cannot be implemented. This problem
is defined as a "ZMP equation problem." Second, the ZMP cannot be defined in the
double-support phase on stairs because contact points of the feet do not constitute
a plane. The ZMP can be defined only on the plane. This problem is defined as a
"ZMP definition problem." These two problems are solved concurrently by the virtual
slope method, which regards the stairs as a virtual slope. In walking trajectory
planning on a slope of constant gradient, the two ZMP problems do not arise.
Additionally, a trajectory planning procedure based on
the virtual slope method is explained. The validity of the proposed method is
confirmed by some simulations and experiments.
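As a rough numerical illustration of the virtual slope idea in the abstract above (tread,
riser and body height are assumed values, and this is not the paper's full trajectory
planner): the stairs are replaced by a plane of constant gradient, so a flat-ground pattern
can be mapped onto the slope.

    import math

    d, h = 0.30, 0.17                      # assumed tread depth and riser height [m]
    alpha = math.atan2(h, d)               # gradient of the virtual slope

    def lift_to_slope(x_flat, z_flat):
        """Map a flat-ground trajectory point onto the virtual slope."""
        x = x_flat * math.cos(alpha)                 # horizontal progress along the slope
        z = z_flat + x_flat * math.sin(alpha)        # vertical clearance above the slope kept at z_flat
        return x, z

    for x_flat in (0.0, 0.5, 1.0):                   # a 1 m flat-ground stride mapped onto the stairs
        print(lift_to_slope(x_flat, 0.80))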
C1 [Sato, Tomoya; Sakaino, Sho; Ohashi, Eijiro; Ohnishi, Kouhei] Keio Univ, Ohnishi
Lab, Dept Syst Design Engn, Yokohama, Kanagawa 2238522, Japan.
RP Sato, T (reprint author), Keio Univ, Ohnishi Lab, Dept Syst Design Engn,
Yokohama, Kanagawa 2238522, Japan.
EM sat@sum.sd.keio.ac.jp; sakaino@sum.sd.keio.ac.jp;
ei2ro@sum.sd.keio.ac.jp; ohnishi@sd.keio.ac.jp
RI Ohnishi, Kouhei/A-6543-2011; Sakaino, Sho/B-6249-2013
FU Ministry of Education, Culture, Sports, Science and Technology in Japan
FX This work was supported in part by a Grant-in-Aid for the Global Center
of Excellence for High-Level Global Cooperation for Leading-Edge
Platform on Access Spaces from the Ministry of Education, Culture,
Sports, Science and Technology in Japan.
CR Dong H, 2009, ROBOT AUTON SYST, V57, P828, DOI 10.1016/j.robot.2009.03.009
Erbatur K, 2009, IEEE T IND ELECTRON, V56, P835, DOI 10.1109/TIE.2008.2005150
Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
Goswami A, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P47, DOI 10.1109/ROBOT.1999.769929
Harada K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P75
Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
Hirukawa H, 2006, IEEE INT CONF ROBOT, P1976, DOI 10.1109/ROBOT.2006.1641995
Hu LY, 2008, IEEE T IND ELECTRON, V55, P1444, DOI 10.1109/TIE.2007.908526
Jeon KS, 2004, P INT C INT ROB SYST, P2837
KAJITA H, 2005, HUMANOID ROBOT
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kim JY, 2007, J INTELL ROBOT SYST, V48, P457, DOI 10.1007/s10846-006-9107-8
Loffler K, 2004, IEEE T IND ELECTRON, V51, P972, DOI 10.1109/TIE.2004.834948
Michel P., 2007, P IEEE RSJ INT C INT, P463
MICHEL P, 2005, P IEEE RAS INT C HUM, P13
Minakata H, 2008, IEEE T IND ELECTRON, V55, P1271, DOI 10.1109/TIE.2007.911919
MITOBE K, 2004, P IEEE RSJ INT C INT, P2253
Morisawa M, 2005, IEEE INT CONF ROBOT, P2405
Motoi N, 2007, IEEE T IND INFORM, V3, P154, DOI 10.1109/TII.2007.898469
Motoi N, 2009, IEEE T IND ELECTRON, V56, P54, DOI 10.1109/TIE.2008.2004663
Ohashi E., 2004, P 30 ANN C IEEE IND, V1, P117
Ohashi E, 2007, IEEE T IND ELECTRON, V54, P1632, DOI 10.1109/TIE.2007.894728
Ohnishi K, 1996, IEEE-ASME T MECH, V1, P56, DOI 10.1109/3516.491410
Ono T., 1998, P 5 INT WORKSH ADV M, P129
Pratt J, 2001, INT J ROBOT RES, V20, P129, DOI 10.1177/02783640122067309
Russell S, 2005, J BIOMECH ENG-T ASME, V127, P114, DOI 10.1115/1.1835358
Sato T., 2009, P IEEE INT C IND TEC, P1132
SATO T, 2009, IEEJ T IND APPL D, V129, P571
Shibuya M., 2006, P 32 ANN C IEEE IND, P4094
Shih CL, 1998, IEEE T SYST MAN CY B, V28, P244, DOI 10.1109/3477.662765
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
Takao S, 2003, IEEE INT CONF ROBOT, P3815
Tan GZ, 1996, IEEE INT CONF ROBOT, P252, DOI 10.1109/ROBOT.1996.503786
Tsuji T., 2002, P 7 INT WORKSH ADV M, P478
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
ZHENG YF, 1990, IEEE T ROBOTIC AUTOM, V6, P86, DOI 10.1109/70.88120
Zheng Y. F., 1998, P IEEE INT C ROB AUT, P814
Zhou C., 2004, P IEEE C ROB AUT MEC, P341
ZHU C, 2004, P 8 IEEE INT WORKSH, P427
NR 40
TC 36
Z9 36
U1 1
U2 22
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD APR
PY 2011
VL 58
IS 4
BP 1385
EP 1396
DI 10.1109/TIE.2010.2050753
PG 12
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 734KZ
UT WOS:000288334300030
DA 2018-01-22
ER

PT J
AU Sato, T
Sakaino, S
Ohnishi, K
AF Sato, Tomoya
Sakaino, Sho
Ohnishi, Kouhei
TI Real-Time Walking Trajectory Generation Method With Three-Mass Models at
Constant Body Height for Three-Dimensional Biped Robots
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article; Proceedings Paper
CT IEEE International Conference on Industrial Technology
CY FEB 10-13, 2009
CL Churchill, AUSTRALIA
SP IEEE
DE Biped robot; trajectory planning; walking; zero-moment point (ZMP)
ID INVERTED PENDULUM MODE; HUMANOID ROBOT; PATTERN GENERATION; GAIT
SYNTHESIS; LEGGED ROBOTS; ALGORITHMS; ZMP
AB In this paper, a real-time walking trajectory generation method with three-mass
models at constant body height for 3-D biped robots is extended to diagonal walking. By
realizing diagonal walking, the applicability of the method is improved. The
modeling of this method is more precise than that of conventional real-time walking
trajectory generation methods. In this method, the zero-moment point equation of a
body is derived, and an analytic solution of a body trajectory at a constant body
height in a single support phase is obtained. Because the analytic solution is
used, real-time trajectory generation can be realized. In addition, this method has
advantages stemming from the constant-height body trajectory. The validity of the method
is confirmed through simulations of 2-D walking and an experiment of 3-D diagonal walking.
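The analytic body trajectory mentioned above can be sketched for the single-mass (linear
inverted pendulum) special case; the paper's full three-mass model is not reproduced here,
and z_c, g, the reference ZMP and the initial state are illustrative assumptions.

    import math

    z_c, g = 0.55, 9.81
    Tc = math.sqrt(z_c / g)                          # pendulum time constant

    def com_analytic(x0, v0, p_ref, t):
        """Closed-form CoM solution of  p_ref = x - (z_c/g) * x''  with a constant reference ZMP."""
        x = p_ref + (x0 - p_ref) * math.cosh(t / Tc) + v0 * Tc * math.sinh(t / Tc)
        v = (x0 - p_ref) / Tc * math.sinh(t / Tc) + v0 * math.cosh(t / Tc)
        return x, v

    # single-support phase starting 5 cm behind the stance foot (ZMP at 0) with 0.3 m/s
    for k in range(5):
        t = 0.1 * k
        print(round(t, 1), com_analytic(-0.05, 0.30, 0.0, t))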
C1 [Sato, Tomoya; Sakaino, Sho; Ohnishi, Kouhei] Keio Univ, Dept Syst Design Engn,
Ohnishi Lab, Yokohama, Kanagawa 2238522, Japan.
RP Sato, T (reprint author), Keio Univ, Dept Syst Design Engn, Ohnishi Lab,
Yokohama, Kanagawa 2238522, Japan.
EM sat@sum.sd.keio.ac.jp; sakaino@sum.sd.keio.ac.jp; ohnishi@sd.keio.ac.jp
RI Sakaino, Sho/B-6249-2013; Ohnishi, Kouhei/A-6543-2011
CR Albert A, 2003, J INTELL ROBOT SYST, V36, P109, DOI 10.1023/A:1022600522613
Capi G, 2001, ADV ROBOTICS, V15, P675, DOI 10.1163/156855301317035197
Erbatur K., 2006, P 32 ANN C IEEE IND, P4100
Erbatur K, 2009, IEEE T IND ELECTRON, V56, P835, DOI 10.1109/TIE.2008.2005150
Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
Goswami A, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P47, DOI 10.1109/ROBOT.1999.769929
Handharu N., 2008, P 8 IEEE RAS INT C H, P265
Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
Hirukawa H, 2005, INT J ROBOT RES, V24, P755, DOI 10.1177/0278364905057217
Hirukawa H, 2006, IEEE INT CONF ROBOT, P1976, DOI 10.1109/ROBOT.2006.1641995
Hu LY, 2008, IEEE T IND ELECTRON, V55, P1444, DOI 10.1109/TIE.2007.908526
Jeon KS, 2004, P INT C INT ROB SYST, P2837
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P31, DOI 10.1109/ROBOT.2002.1013335
Kumar R, 2007, P INT C CONTR AUT SY, P2687
Kurazume R, 2003, IEEE INT CONF ROBOT, P925
Morisawa M, 2005, IEEE INT CONF ROBOT, P2405
Motoi N, 2009, IEEE T IND ELECTRON, V56, P54, DOI 10.1109/TIE.2008.2004663
Napoleon N., 2004, J ROBOT SOC JPN, V22, P656
Ohashi E., 2004, P 30 ANN C IEEE IND, V1, P117
Ohashi E, 2007, IEEE T IND ELECTRON, V54, P1632, DOI 10.1109/TIE.2007.894728
Ohnishi K, 1996, IEEE-ASME T MECH, V1, P56, DOI 10.1109/3516.491410
Park JH, 1998, IEEE INT CONF ROBOT, P3528, DOI 10.1109/ROBOT.1998.680985
PARK JH, 2000, P IEEE INT C ROB AUT, P3353
Sato T., 2009, P IEEE INT C IND TEC, P1132
Sato T., 2008, P INT WORKSH VIS COM, P19
Sato T., 2008, P 34 ANN C IEEE IND, P1650
Shibuya M., 2006, P 32 ANN C IEEE IND, P4094
Takao S, 2003, IEEE INT CONF ROBOT, P3815
Terada K, 2007, IEEE-RAS INT C HUMAN, P222, DOI 10.1109/ICHR.2007.4813872
Tsuji T., 2002, P 7 INT WORKSH ADV M, P478
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
NR 33
TC 28
Z9 29
U1 0
U2 8
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD FEB
PY 2011
VL 58
IS 2
BP 376
EP 383
DI 10.1109/TIE.2010.2052535
PG 8
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 705DQ
UT WOS:000286109500002
DA 2018-01-22
ER
PT J
AU Aoi, S
Tsuchiya, K
AF Aoi, Shinya
Tsuchiya, Kazuo
TI Generation of bipedal walking through interactions among the robot
dynamics, the oscillator dynamics, and the environment: Stability
characteristics of a five-link planar biped robot
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Biped robot; Oscillator; Central pattern generator; Phase resetting;
Stability analysis; Poincare map; Optimization
ID COUPLED NONLINEAR OSCILLATORS; CENTRAL PATTERN GENERATORS;
MUSCULO-SKELETAL SYSTEM; LOCOMOTION CONTROL; QUADRUPED ROBOT;
MODEL-DRIVEN; BIOLOGICAL CONCEPTS; RHYTHMIC SIGNAL; HUMANOID ROBOT;
PHASE RESET
AB We previously developed a locomotion control system for a biped robot using
nonlinear oscillators and verified the performance of this system in order to
establish adaptive walking through the interactions among the robot dynamics, the
oscillator dynamics, and the environment. In order to clarify these mechanisms, we
investigate the stability characteristics of walking using a five-link planar biped
robot with a torso and knee joints that has an internal oscillator with a stable
limit cycle to generate the joint motions. Herein we conduct numerical simulations
and a stability analysis, where we analytically obtain approximate periodic
solutions and examine local stability using a Poincaré map. These analyses reveal
(1) stability characteristics due to locomotion speed, torso, and knee motion, (2)
stability improvement due to the modulation of oscillator states based on phase
resetting using foot-contact information, and (3) the optimal parameter in the
oscillator dynamics for adequately exploiting the interactions among the robot
dynamics, the oscillator dynamics, and the environment in order to increase walking
stability. The results of the present study demonstrate the advantage and
usefulness of locomotion control using oscillators through mutual interactions.
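A toy one-dimensional illustration of how local stability is read off a Poincaré return
map, and how phase resetting at foot contact can shrink its eigenvalue. The five-link model
is not reproduced; the stride-to-stride map, nominal speed and gains below are assumptions.

    def stride_map(v, reset_gain=0.0):
        """Walking speed at the next foot contact as a function of the current one (toy model)."""
        v_next = 0.6 * v + 0.35                          # assumed passive stride dynamics
        return v_next + reset_gain * (0.875 - v_next)    # phase resetting pulls toward the nominal speed

    def return_map_slope(f, v_star, eps=1e-6):
        """Finite-difference eigenvalue of the Poincare map at its fixed point (|slope| < 1 is stable)."""
        return (f(v_star + eps) - f(v_star - eps)) / (2 * eps)

    v = 1.0
    for _ in range(200):                                 # iterate to the fixed point of the map
        v = stride_map(v)
    print("fixed point:", round(v, 3))
    print("eigenvalue without resetting:", return_map_slope(lambda x: stride_map(x, 0.0), v))
    print("eigenvalue with resetting:   ", return_map_slope(lambda x: stride_map(x, 0.4), v))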
C1 [Aoi, Shinya] Kyoto Univ, Grad Sch Engn, Dept Aeronaut & Astronaut, Sakyo Ku,
Kyoto 6068501, Japan.
[Tsuchiya, Kazuo] Doshisha Univ, Dept Energy & Mech Engn, Fac Sci & Engn, Kyoto
6100394, Japan.
[Aoi, Shinya; Tsuchiya, Kazuo] JST, CREST, Chiyoda Ku, Tokyo 1020075, Japan.
RP Aoi, S (reprint author), Kyoto Univ, Grad Sch Engn, Dept Aeronaut & Astronaut,
Sakyo Ku, Kyoto 6068501, Japan.
EM shinya_aoi@kuaero.kyoto-u.ac.jp
FU Japanese Ministry of Education, Culture, Sports, Science and Technology
[19GS0208]
FX This paper is supported in part by a Grant-in-Aid for Creative
Scientific Research No. 19GS0208 from the Japanese Ministry of
Education, Culture, Sports, Science and Technology.
CR Altendorfer R, 2001, AUTON ROBOT, V11, P207, DOI 10.1023/A:1012426720699
Aoi S, 2006, IEEE T ROBOT, V22, P391, DOI 10.1109/TRO.2006.870671
Aoi S, 2006, INT J NONLIN MECH, V41, P438, DOI 10.1016/j.ijnonlinmec.2005.09.001
Aoi S, 2005, AUTON ROBOT, V19, P219, DOI 10.1007/s10514-005-4051-1
Aoi S, 2007, SIAM J APPL DYN SYST, V6, P348, DOI 10.1137/060664756
Aoi S, 2007, NONLINEAR DYNAM, V48, P1, DOI 10.1007/s11071-006-9030-3
Aoi S, 2007, AUTON ROBOT, V23, P37, DOI 10.1007/s10514-007-9029-8
Aoi S, 2010, BIOL CYBERN, V102, P373, DOI 10.1007/s00422-010-0373-y
Aoi S, 2008, ADV ROBOTICS, V22, P1697, DOI 10.1163/156855308X3689785
Asano F, 2001, IEEE T SYST MAN CY A, V31, P737, DOI 10.1109/3468.983431
Burke RE, 2001, J NEUROPHYSIOL, V86, P447
Cham JG, 2004, INT J ROBOT RES, V23, P141, DOI 10.1177/0278364904041323
Coleman MJ, 1997, DYNAM STABIL SYST, V12, P139, DOI 10.1080/02681119708806242
COLLINS JJ, 1993, J NONLINEAR SCI, V3, P349, DOI 10.1007/BF02429870
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Courtine G, 2003, EUR J NEUROSCI, V18, P191, DOI 10.1046/j.1460-
9568.2003.02737.x
de Pina AC, 2005, BIOL CYBERN, V92, P1, DOI 10.1007/s00422-004-0531-1
Dutra MS, 2003, BIOL CYBERN, V88, P286, DOI 10.1007/s00422-002-0380-8
Fukuoka Y, 2003, INT J ROBOT RES, V22, P187, DOI 10.1177/0278364903022003004
Garcia M, 1998, J BIOMECH ENG-T ASME, V120, P281, DOI 10.1115/1.2798313
Goswami A, 1997, AUTON ROBOT, V4, P273, DOI 10.1023/A:1008844026298
GRILLNER S, 1985, SCIENCE, V228, P143, DOI 10.1126/science.3975635
Grillner S., 1981, HDB PHYSL 1, VI, P1179
Grizzle JW, 2001, IEEE T AUTOMAT CONTR, V46, P51, DOI 10.1109/9.898695
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hosoda K, 2008, ROBOT AUTON SYST, V56, P46, DOI 10.1016/j.robot.2007.09.010
Ijspeert AJ, 2008, NEURAL NETWORKS, V21, P642, DOI 10.1016/j.neunet.2008.03.014
Ijspeert AJ, 2007, SCIENCE, V315, P1416, DOI 10.1126/science.1138353
Inagaki S, 2003, ROBOT AUTON SYST, V44, P171, DOI 10.1016/S0921-8890(03)00067-8
Inoue K, 2004, IEEE INT CONF ROBOT, P5064, DOI 10.1109/ROBOT.2004.1302520
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
KATOH R, 1984, AUTOMATICA, V20, P405, DOI 10.1016/0005-1098(84)90099-2
Kimura H, 2007, INT J ROBOT RES, V26, P475, DOI 10.1177/0278364907078089
Kuo AD, 2002, J BIOMECH ENG-T ASME, V124, P113, DOI 10.1115/1.1427703
Kuroki Y, 2003, IEEE INT CONF ROBOT, P471
Lewis MA, 2002, AUTON ROBOT, V12, P301, DOI 10.1023/A:1015221832567
Lewis MA, 2003, BIOL CYBERN, V88, P137, DOI 10.1007/s00422-002-0365-7
Loffler K, 2003, INT J ROBOT RES, V22, P229, DOI 10.1177/0278364903022003007
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nakanishi M, 2006, BIOL CYBERN, V95, P503, DOI 10.1007/s00422-006-0102-8
Nishiwaki K., 2000, IEEE RSJ INT C INT R, P1559
Orlovsky G, 1999, NEURONAL CONTROL LOC
PATLA AE, 1985, AM J PHYSIOL, V248, pR484
Poulakakis I, 2005, INT J ROBOT RES, V24, P239, DOI 10.1177/0278364904050917
Quinn RD, 2003, INT J ROBOT RES, V22, P169, DOI 10.1177/0278364903022003003
Righetti L, 2006, IEEE INT CONF ROBOT, P1585, DOI 10.1109/ROBOT.2006.1641933
Rybak IA, 2006, J PHYSIOL-LONDON, V577, P641, DOI 10.1113/jphysiol.2006.118711
Saranli U, 2001, INT J ROBOT RES, V20, P616, DOI 10.1177/02783640122067570
Schöner G., 1990, J THEOR BIOL, V142, P359
TAGA G, 1995, BIOL CYBERN, V73, P113, DOI 10.1007/BF00204049
TAGA G, 1995, BIOL CYBERN, V73, P97, DOI 10.1007/BF00204048
TAGA G, 1991, BIOL CYBERN, V65, P147, DOI 10.1007/BF00198086
Takuma T, 2006, INT J ROBOT RES, V25, P861, DOI 10.1177/0278364906069187
Tsujita K, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P2318, DOI
10.1109/IROS.2001.976416
Vukobratovic M, 1990, BIPED LOCOMOTION DYN
Westervelt ER, 2003, IEEE T AUTOMAT CONTR, V48, P42, DOI 10.1109/TAC.2002.806653
Winter D.A., 2004, BIOMECHANICS MOTOR C
Wisse M, 2005, IEEE T ROBOT, V21, P393, DOI 10.1109/TRO.2004.838030
Yamaguchi J, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P368, DOI 10.1109/ROBOT.1999.770006
Yamasaki T, 2003, BIOL CYBERN, V88, P468, DOI 10.1007/s00422-003-0402-1
Yano M., 2002, DISTRIBUTED AUTONOMO, V5, P444
YUASA H, 1990, BIOL CYBERN, V63, P177, DOI 10.1007/BF00195856
Zielinska T, 1996, BIOL CYBERN, V74, P263
NR 64
TC 17
Z9 17
U1 3
U2 27
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD FEB
PY 2011
VL 30
IS 2
BP 123
EP 141
DI 10.1007/s10514-010-9209-9
PG 19
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA 705AO
UT WOS:000286100300001
DA 2018-01-22
ER

PT J
AU Yamane, K
Yamaguchi, Y
Nakamura, Y
AF Yamane, Katsu
Yamaguchi, Yoshifumi
Nakamura, Yoshihiko
TI Human motion database with a binary tree and node transition graphs
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Motion database; Binary tree; Human motion recognition; Humanoid motion
planning
ID IMITATION; TASKS
AB Databases of human motion have been widely used for recognizing human motion and
synthesizing humanoid motions. In this paper, we propose a data structure for
storing and extracting human motion data and demonstrate that the database can be
applied to the recognition and motion synthesis problems in robotics. We develop an
efficient method for building a human motion database from a collection of
continuous, multi-dimensional motion clips. The database consists of a binary tree
representing the hierarchical clustering of the states observed in the motion
clips, as well as node transition graphs representing the possible transitions
among the nodes in the binary tree. Using databases constructed from real human
motion data, we demonstrate that the proposed data structure can be used for human
motion recognition, state estimation and prediction, and robot motion planning.
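A compact sketch of that data structure under simplifying assumptions: random frames stand
in for motion-capture data, and scipy's Ward clustering stands in for the paper's own
clustering procedure. A binary tree is built over the observed frames, and a node
transition graph is counted from consecutive frames.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(300, 12))            # assumed 300 frames of 12-D joint angles

    tree = linkage(frames, method="ward")          # binary tree over frames (Ward clustering)
    labels = fcluster(tree, t=8, criterion="maxclust") - 1   # cut the tree into 8 leaf nodes

    # node transition graph: counts of consecutive-frame transitions between nodes
    n = labels.max() + 1
    graph = np.zeros((n, n), dtype=int)
    for a, b in zip(labels[:-1], labels[1:]):
        graph[a, b] += 1

    # transition probabilities can then support recognition, prediction and motion planning
    probs = graph / np.maximum(graph.sum(axis=1, keepdims=True), 1)
    print(probs.round(2))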
C1 [Yamane, Katsu] Disney Res, Pittsburgh, PA 15213 USA.
[Yamaguchi, Yoshifumi; Nakamura, Yoshihiko] Univ Tokyo, Dept Mechanoinformat,
Tokyo, Japan.
RP Yamane, K (reprint author), Disney Res, Pittsburgh, PA 15213 USA.
EM kyamane@disneyresearch.com; yamaguti@ynl.t.u-tokyo.ac.jp;
nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU Ministry of Education, Culture, Sports, Science and Technology, Japan
through IRT Foundation to Support Man and Aging Society
FX Part of this work was conducted while the first author was at University
of Tokyo. The authors gratefully acknowledge the support by the Ministry
of Education, Culture, Sports, Science and Technology, Japan through the
Special Coordination Funds for Promoting Science and Technology, "IRT
Foundation to Support Man and Aging Society."
CR Arikan O, 2002, ACM T GRAPHIC, V21, P483
Artac M, 2002, INT C PATT RECOG, P781, DOI 10.1109/ICPR.2002.1048133
Bentivegna DC, 2004, ROBOT AUTON SYST, V47, P163, DOI
10.1016/j.robot.2004.03.010
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Brand M, 2000, COMP GRAPH, P183
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Kadone H., 2005, P 2005 IEEE RSJ INT, P2900
KITTLER J, 1986, PATTERN RECOGN, V19, P41, DOI 10.1016/0031-3203(86)90030-0
Kovar L, 2002, ACM T GRAPHIC, V21, P473
Kulic D, 2008, INT J ROBOT RES, V27, P761, DOI 10.1177/0278364908091153
Lee JH, 2002, ACM T GRAPHIC, V21, P491
NAKAMURA Y, 1986, J DYN SYST-T ASME, V108, P163, DOI 10.1115/1.3143764
Safonova A, 2007, ACM T GRAPHIC, V26, DOI 10.1145/1239451.1239557
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
Sidenbladh H, 2002, ECCV, P784
VAISHNAVI VK, 1989, IEEE T COMPUT, V38, P968, DOI 10.1109/12.30849
WARD JH, 1963, J AM STAT ASSOC, V58, P236, DOI 10.2307/2282967
NR 19
TC 12
Z9 12
U1 0
U2 12
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD JAN
PY 2011
VL 30
IS 1
SI SI
BP 87
EP 98
DI 10.1007/s10514-010-9206-z
PG 12
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA 702DB
UT WOS:000285873700007
DA 2018-01-22
ER

PT J
AU Hashimoto, K
Lim, HO
Takanishi, A
AF Hashimoto, Kenji
Lim, Hun-Ok
Takanishi, Atsuo
TI Disturbance Compensation Control for a Biped Vehicle
SO ADVANCED ROBOTICS
LA English
DT Article
DE Biped robot; biped vehicle; human-carrying robot; disturbance
ID HUMANOID ROBOT
AB This paper describes an avoidance behavior against unknown forces acting from
outside the system on a biped vehicle. External forces that act on a biped vehicle
can be divided into two categories: disturbances from the rider of the vehicle and
disturbances from outside the system. Disturbances from the rider are measured by a
force sensor placed between the rider's seat and the pelvis, while disturbances
from outside the system are estimated from zero moment point (ZMP) errors. To
guarantee walking stability, the waist position is adjusted to match the measured
ZMP to the reference ZMP and the position of the landing foot is adjusted such that
the waist trajectory does not diverge. On implementing the developed method on the
human-carrying biped robot in an experimental setup, the robot could realize a
stable walk even while subjected to unknown external forces in the environment. On
applying a pushing force to the walking robot, the robot moved in the pushed
direction and away from the source of the externally generated forces. On pushing
the robot that was walking forward, the robot stopped moving forward and avoided
getting closer to the source of the externally generated forces. We confirmed the
effectiveness of the proposed control through these experiments. (C) Koninklijke
Brill NV, Leiden and The Robotics Society of Japan, 2011
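A hedged one-dimensional sketch of the ZMP-error feedback described above; the mass,
height, gain and the quasi-static simplification are illustrative, not the biped vehicle's
controller. An unknown external force shows up as a ZMP error, and the waist is shifted
until the measured ZMP matches the reference again.

    m, g, z_c = 60.0, 9.81, 0.60          # assumed total mass [kg] and waist height [m]

    def measured_zmp(waist_x, f_ext):
        """Quasi-static 1-D ZMP with an unknown horizontal force f_ext acting at waist height."""
        return waist_x - (z_c / (m * g)) * f_ext

    def compensate(zmp_ref, f_ext, k=0.8, steps=30):
        waist = 0.0
        for _ in range(steps):
            err = zmp_ref - measured_zmp(waist, f_ext)   # ZMP error caused by the disturbance
            waist += k * err                             # shift the waist toward the error
        return waist

    print(compensate(zmp_ref=0.0, f_ext=40.0))           # waist settles near z_c*f_ext/(m*g) ~ 0.041 m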
C1 [Hashimoto, Kenji] Waseda Univ, Grad Sch Creat Sci & Engn, Shinjuku Ku, Tokyo
1620044, Japan.
[Lim, Hun-Ok] Kanagawa Univ, Dept Mech Engn, Kanagawa Ku, Kanagawa 2218686,
Japan.
[Lim, Hun-Ok; Takanishi, Atsuo] Waseda Univ, HRI, Shinjuku Ku, Tokyo 1620044,
Japan.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Shinjuku Ku, Tokyo
1628480, Japan.
RP Hashimoto, K (reprint author), Waseda Univ, Grad Sch Creat Sci & Engn, Shinjuku
Ku, 41-304,17 Kikui Cho, Tokyo 1620044, Japan.
EM k-hashimoto@takanishi.mech.waseda.ac.jp
RI Hashimoto, Kenji/M-6442-2017
OI Hashimoto, Kenji/0000-0003-2300-6766
FU Ministry of Education, Science, Sports and Culture [19760179]; TMSUK
Co., Ltd; HEPHAIST Seiko Co., Ltd; SolidWorks Japan KK
FX This study was conducted at the Advanced Research Institute for Science
and Engineering, Waseda University, and at the humanoid project at the
Humanoid Robotics Institute, Waseda University. It was supported in part
by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid
for Young Scientists (B) 19760179, 2007, and by TMSUK Co., Ltd, HEPHAIST
Seiko Co., Ltd, and SolidWorks Japan KK, all of whom we thank for their
financial and technical support.
CR HASHIMOTO K, 2008, P 6 INT C INT SOC GE, P75
Hashimoto K, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-12, P1755, DOI 10.1109/IROS.2006.282213
Hashimoto K, 2008, P IEEE RAS-EMBS INT, P457, DOI 10.1109/BIOROB.2008.4762844
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Kim JY, 2007, ADV ROBOTICS, V21, P461, DOI 10.1163/156855307780132063
KRYCZKA P, 2009, P 27 ANN C ROB SOC J
Michel P., 2007, P IEEE RSJ INT C INT, P463
Michel P, 2006, IEEE INT CONF ROBOT, P3089, DOI 10.1109/ROBOT.2006.1642171
Morisawa M., 2006, P IEEE RAS INT C HUM, P581
Ogura Y., 2006, P IEEE RSJ INT C INT, P3976
Ogura Y, 2006, IEEE INT CONF ROBOT, P76
PARK J, 1999, P IEEE C SYST MAN CY, P960
Rebula J, 2007, IEEE-RAS INT C HUMAN, P65, DOI 10.1109/ICHR.2007.4813850
Sabe K, 2004, IEEE INT CONF ROBOT, P592, DOI 10.1109/ROBOT.2004.1307213
Sugahara Y, 2003, IEEE INT CONF ROBOT, P4342
Sugihara T., 2002, Proceedings IEEE/RSJ International Conference on Intelligent
Robots and Systems (Cat. No.02CH37332C), P2575, DOI 10.1109/IRDS.2002.1041658
Takanishi A., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P795, DOI 10.1109/IROS.1990.262498
Takeda Y., 2001, P CLAWAR2001 KARLSR, P1037
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
NR 20
TC 5
Z9 5
U1 0
U2 8
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2011
VL 25
IS 3-4
SI SI
BP 407
EP 426
DI 10.1163/016918610X552178
PG 20
WC Robotics
SC Robotics
GA 731UF
UT WOS:000288134200008
DA 2018-01-22
ER

PT J
AU Suleiman, W
Kanehiro, F
Miura, K
Yoshida, E
AF Suleiman, Wael
Kanehiro, Fumio
Miura, Kanako
Yoshida, Eiichi
TI Enhancing Zero Moment Point-Based Control Model: System Identification
Approach
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robot; zero moment point control; system identification;
optimization; nonlinear system control
ID WALKING; ROBOT
AB The approximation of a humanoid robot by an inverted pendulum is one of the most
frequently used models to generate a stable walking pattern using a planned zero
moment point (ZMP) trajectory. However, on account of the difference between the
multibody model of the humanoid robot and the simple inverted pendulum model, the
ZMP error can move the ZMP outside the support polygon, and the robot falls down. To
overcome this limitation, we propose to improve the accuracy of the inverted
pendulum model using system identification techniques. The candidate model is
quadratic in the state-space representation. To identify this system, we propose an
identification method that is the result of the comprehensive application of system
identification to dynamic systems. Based on the quadratic system, we also propose
controlling algorithms for on-line and off-line walking pattern generation for
humanoid robots. The efficiency of the quadratic system and the walking pattern
generation methods has been successfully shown using dynamical simulation and
conducting real experiments on the cybernetic human HRP-4C. (C) Koninklijke Brill
NV, Leiden and The Robotics Society of Japan, 2011
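A generic least-squares sketch of the identification step (not the paper's algorithm; the
"true" plant, sample size and noise level below are made up): the regressors include
quadratic terms of the CoM state, so the fitted ZMP model is quadratic rather than the
plain linear inverted pendulum.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 500
    x = rng.normal(size=(N, 2))                       # assumed CoM position / velocity samples
    true_zmp = 1.0 * x[:, 0] - 0.06 * x[:, 1] + 0.3 * x[:, 0] ** 2   # assumed "true" plant
    zmp = true_zmp + 0.01 * rng.normal(size=N)        # noisy ZMP measurements

    # regressor matrix: linear terms plus quadratic terms of the state
    Phi = np.column_stack([x[:, 0], x[:, 1], x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2])
    theta, *_ = np.linalg.lstsq(Phi, zmp, rcond=None)
    print("identified parameters:", theta.round(3))   # recovers roughly [1.0, -0.06, 0.3, 0, 0]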
C1 [Suleiman, Wael; Kanehiro, Fumio; Miura, Kanako; Yoshida, Eiichi] Natl Inst Adv
Ind Sci & Technol, Intelligent Syst Res Inst, CNRS, AIST JRL,CRT UMI3218, Tsukuba,
Ibaraki 3058568, Japan.
RP Suleiman, W (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst
Res Inst, CNRS, AIST JRL,CRT UMI3218, Tsukuba Cent 2,1-1-1 Umezono, Tsukuba,
Ibaraki 3058568, Japan.
EM wael.suleiman@aist.go.jp
RI Yoshida, Eiichi/M-3756-2016; Kanehiro, Fumio/L-8660-2016; Miura,
Kanako/K-4171-2012
OI Yoshida, Eiichi/0000-0002-3077-6964; Kanehiro,
Fumio/0000-0002-0277-3467;
FU Japan Society for the Promotion of Science [20-08816]
FX This research was partially supported by a Grant-in-Aid for Scientific
Research from the Japan Society for the Promotion of Science (20-08816).
CR Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
KAGAMI S, 2000, P IEEE INT C HUM ROB, P306
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
KAJITA S, 1995, P IEEE INT C ROB AUT, V3, P2885
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kaneko K, 2009, P IEEE RAS INT C HUM, P7, DOI DOI 10.1109/ICHR.2009.5379537
Kaneko K, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2471, DOI 10.1109/IROS.2008.4650604
Nagasaka K, 2004, IEEE INT CONF ROBOT, P3189, DOI 10.1109/ROBOT.2004.1308745
Nagasaka K., 1999, P 17 ANN C ROB SOC J, P1193
NAKSUK N, 2004, P IEEE RAS RSJ INT C, V2, P576
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
SUGIHARA T, 2004, P 22 ANN C ROB SOC J
SULEIMAN W, 2006, P IEEE INT C CONTR A, P2565
SULEIMAN W, 2007, P C SYST CONTR MARR
Suleiman W, 2009, P IEEE RAS 9 INT C H, P74
Suleiman W, 2008, AUTOMATICA, V44, P488, DOI 10.1016/j.automatica.2007.06.007
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Yamaguchi J., 1993, P IEEE RSJ INT C INT, P561
1997, JAPANESE BODY DIMENS
NR 20
TC 8
Z9 8
U1 0
U2 3
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2011
VL 25
IS 3-4
SI SI
BP 427
EP 446
DI 10.1163/016918610X551773
PG 20
WC Robotics
SC Robotics
GA 731UF
UT WOS:000288134200009
DA 2018-01-22
ER

PT J
AU Kim, CH
Yonekura, K
Tsujino, H
Sugano, S
AF Kim, Chyon Hae
Yonekura, Kenta
Tsujino, Hiroshi
Sugano, Shigeki
TI Physical Control of the Rotation of a Flexible Object - Rope Turning
with a Humanoid Robot
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robot; rope; energy transmission; rotational axis; centrifugal
force
AB Rope turning tasks are useful for exploring rhythmic physical human-robot
interaction. However, in traditional studies, a robot was not able to turn a rope
by itself, because simultaneous control of three factors, i.e., energy
transmission, rotational axis and centrifugal force, is difficult when a robot
rotates a flexible object such as a rope. In this paper, we propose a method to
control these three factors simultaneously. We developed the method by adding a
compensator to an attractor that attracts the end-effector of a robot to a uniform
circular motion within a fixed radius. In a rope turning simulation, the end-
effector of a robot and the center of mass of a simplified rope converged to
uniform circular motions. In addition, we applied the proposed method to a rope
turning task performed by a humanoid robot. The robot was able to turn a rope with
one fixed end or in cooperation with a human. (C) Koninklijke Brill NV, Leiden and
The Robotics Society of Japan, 2011
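A minimal sketch of the underlying attractor to a uniform circular motion of fixed radius;
the paper's compensator for the rope's reaction is not included, and the radius, angular
velocity and radial gain are assumptions.

    import numpy as np

    R, omega, k_r = 0.25, 2 * np.pi, 4.0      # assumed radius [m], angular velocity [rad/s], radial gain
    dt = 0.01

    def step(p):
        r = np.linalg.norm(p)
        theta = np.arctan2(p[1], p[0])
        r += k_r * (R - r) * dt               # radial error decays toward the target radius
        theta += omega * dt                   # phase advances at constant angular velocity
        return np.array([r * np.cos(theta), r * np.sin(theta)])

    p = np.array([0.05, 0.0])                 # start well inside the target circle
    for _ in range(300):
        p = step(p)
    print(np.linalg.norm(p))                  # converges to ~R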
C1 [Kim, Chyon Hae; Tsujino, Hiroshi] Honda Res Inst Japan Co Ltd, Wako, Saitama
3510188, Japan.
[Yonekura, Kenta; Sugano, Shigeki] Waseda Univ, Dept Modern Mech Engn, Sugano
Lab 59 321, Shinjuku Ku, Tokyo 1698555, Japan.
RP Kim, CH (reprint author), Honda Res Inst Japan Co Ltd, 8-1 Honcho, Wako, Saitama
3510188, Japan.
EM Tenkai@jp.honda-ri.com
RI Tsujino, Hiroshi/A-1198-2009
OI Tsujino, Hiroshi/0000-0001-8042-2796; Kim, Chyon Hae/0000-0002-7360-4986
CR ANDERSSON RL, 1998, ROBOT PING PONG PLAY
Baba Y., 2006, P SICE ICASE INT JOI, P2381
Chen S. J., 2007, P IEEE RSJ INT C INT, P2919
Chestnutt J., 2005, P IEEE INT C ROB AUT, P631
Gienger M., 2005, P IEEE RAS INT C HUM, P238
Iida F, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2141, DOI 10.1109/IRDS.2002.1041584
Kim CH, 2009, P 9 IEEE RAS INT C H, P148
Kosuge K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P3459
Maeda Y, 2003, ADV ROBOTICS, V17, P67, DOI 10.1163/156855303321125631
Maeda Y., 2001, P IEEE ICRA, V4, P3477
Matsuno T, 2006, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-12, P2638, DOI 10.1109/IROS.2006.281944
Nakanishi J, 2000, IEEE T ROBOTIC AUTOM, V16, P109, DOI 10.1109/70.843166
Rizzi A. A., 1992, P IEEE INT C ROB AUT, V1, P775
Shibata T., 1994, Proceedings. From Perception to Action Conference, P78, DOI
10.1109/FPA.1994.636084
Suzuki T, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P127, DOI 10.1109/IROS.2000.894593
Williamson MM, 1998, NEURAL NETWORKS, V11, P1379, DOI 10.1016/S0893-
6080(98)00048-3
WILLIAMSON MM, 1998, P 14 EUR M CYB SYST
NR 17
TC 3
Z9 4
U1 0
U2 3
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2011
VL 25
IS 3-4
SI SI
BP 491
EP 506
DI 10.1163/016918610X551791
PG 16
WC Robotics
SC Robotics
GA 731UF
UT WOS:000288134200012
DA 2018-01-22
ER

PT J
AU Shiomi, M
Sakamoto, D
Kanda, T
Ishi, CT
Ishiguro, H
Hagita, N
AF Shiomi, Masahiro
Sakamoto, Daisuke
Kanda, Takayuki
Ishi, Carlos Toshinori
Ishiguro, Hiroshi
Hagita, Norihiro
TI Field Trial of a Networked Robot at a Train Station
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Networked robot; Semi-autonomous communication robot; Field trial
AB We developed a networked robot system in which ubiquitous sensors support robot
sensing and a human operator processes the robot's decisions during interaction. To
achieve semi-autonomous operation for a communication robot functioning in real
environments, we developed an operator-requesting mechanism that enables the robot
to detect situations that it cannot handle autonomously. In such situations, a human
operator helps by assuming control with minimum effort. The robot system consists
of a humanoid robot, floor sensors, cameras, and a sound-level meter. For helping
people in real environments, we implemented such basic communicative behaviors as
greetings and route guidance in the robot and conducted a field trial at a train
station to investigate the robot system's effectiveness. The results attest to the
high acceptability of the robot system in a public space and also show that the
operator-requesting mechanism correctly requested help in 84.7% of the necessary
situations; the operator only had to control 25% of the experiment time in the
semi-autonomous mode with a robot system that successfully guided 68% of the
visitors.
C1 [Shiomi, Masahiro; Kanda, Takayuki; Ishi, Carlos Toshinori; Ishiguro, Hiroshi;
Hagita, Norihiro] ATR IRC, Kyoto 6190288, Japan.
[Sakamoto, Daisuke] Japan Sci & Technol Agcy, ERATO Igarashi Design UI Project,
Bunkyo Ku, Tokyo 1120002, Japan.
RP Shiomi, M (reprint author), ATR IRC, Kyoto 6190288, Japan.
EM m-shiomi@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825; SHIOMI,
Masahiro/0000-0003-4338-801X
FU Ministry of Internal Affairs and Communications of Japan
FX We wish to thank the staff of the Kinki Nippon Railway Co., Ltd. for
their cooperation. We also thank the following ATR members for their
helpful suggestions and cooperation: Satoshi Koizumi, Toshihiko Shibata,
Osamu Sugiyama, Kenta Nohara, and Anton Zolotkov. This research was
supported by the Ministry of Internal Affairs and Communications of
Japan.
CR Bauer A, 2009, INT J SOC ROBOT, V1, P127, DOI 10.1007/s12369-009-0011-9
Breazeal C, 2000, ADAPT BEHAV, V8, P49, DOI 10.1177/105971230000800104
Burgard W., 1998, NAT C ART INT, P11
Chen TL, 2010, 5 ACM IEEE INT C HUM, P243
Correa A, 2010, 5 ACM IEEE INT C HUM, P243
Dahlback D., 1993, KNOWL-BASED SYST, V6, p258~266
Dow S, 2005, IEEE PERVAS COMPUT, V4, P18, DOI 10.1109/MPRV.2005.93
Glas D. F., 2009, ACM IEEE 4 ANN C HUM, P149, DOI DOI 10.1145/1514095.1514123
Glas D. F., 2007, P IEEE RSJ INT C INT, P602, DOI DOI 10.1109/IROS.2007.4399383
Glas DF, 2009, 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, P846, DOI 10.1109/IROS.2009.5354198
Green A., 2004, P IEEE INT WORKSH RO
Gross H.M., 2008, INT C SYST MAN CYB
Ikemoto S., 2009, 18 IEEE INT S ROB HU
Ishi CT, 2006, INT C HUM ROB
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kawai H., 2004, P 5 ISCA SPEECH SYNT, P179
Kuzuoka H, 2010, 5 ACM IEEE INT C HUM, P285
Lee JK, 2009, ADV ROBOTICS, V23, P1925, DOI 10.1163/016918609X12518783330324
Lee M. K., 2009, P HRI 09, P7, DOI DOI 10.1145/1514095.1514100
Murakita T, 2004, INT C PATT RECOG, P917, DOI 10.1109/ICPR.2004.1333922
Mutlu B., 2009, HUM ROB INT HRI 2009, V2, P69
Norman D., 2003, EMOTIONAL DESIGN
Ratwan RM, 2010, 5 ACM IEEE INT C HUM, P243
Sanfeliu A, 2008, NETWORK ROBOT SYSTEM
Satake S., 2009, HUM ROB INT HRI 2009, P109, DOI DOI 10.1145/1514095.1514117
SELLNER BP, 2006, HRI2006, P80
Shiomi M, 2008, INT S DISTR AUT ROB
Shiomi M, 2010, 5 ACM IEEE INT C HUM
Shiomi M, 2007, IEEE INT C HUM DEC
Shiomi M, 2008, INT J HUMAN ROBOT IJ
Shiwa T, 2009, INT J SOC ROBOT, V1, P141, DOI 10.1007/s12369-009-0012-8
Sian NE, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1651
Siegwart R, 2003, ROBOT AUTON SYST, V42, P203, DOI 10.1016/S0921-8890(02)00376-7
Takahashi T, 2009, P IEEE RSJ INT C INT
Woods S, 2006, NOVEL METHODOLOGICAL
Yamaoka F, 2007, IEEE RSJ INT C INT R, P2685
NR 36
TC 12
Z9 12
U1 0
U2 1
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD JAN
PY 2011
VL 3
IS 1
BP 27
EP 40
DI 10.1007/s12369-010-0077-4
PG 14
WC Robotics
SC Robotics
GA V31OU
UT WOS:000208893700003
DA 2018-01-22
ER

PT J
AU Sugimoto, N
Morimoto, J
AF Sugimoto, Norikazu
Morimoto, Jun
TI Switching multiple LQG controllers based on Bellman's optimality: Using
full-state feedback to control a humanoid robot
SO NEUROSCIENCE RESEARCH
LA English
DT Meeting Abstract
C1 [Sugimoto, Norikazu] NICT Brain ICT Lab, Kyoto, Japan.
[Morimoto, Jun] ATR, BICR, Kyoto, Japan.
NR 0
TC 0
Z9 0
U1 0
U2 0
PU ELSEVIER IRELAND LTD
PI CLARE
PA ELSEVIER HOUSE, BROOKVALE PLAZA, EAST PARK SHANNON, CO, CLARE, 00000,
IRELAND
SN 0168-0102
J9 NEUROSCI RES
JI Neurosci. Res.
PY 2011
VL 71
SU S
BP E409
EP E409
DI 10.1016/j.neures.2011.07.1794
PG 1
WC Neurosciences
SC Neurosciences & Neurology
GA 998DT
UT WOS:000308218102265
DA 2018-01-22
ER

PT J
AU Yoshikawa, T
AF Yoshikawa, Tsuneo
TI Multifingered robot hands: Control for grasping and manipulation
SO ANNUAL REVIEWS IN CONTROL
LA English
DT Review
DE Multifingered robot hand; Grasping; Manipulation; Impedance control;
Hybrid control; Soft-fingered hand
ID POSITION-FORCE CONTROL; IMPEDANCE CONTROL; HUMANOID ROBOT; SOFT FINGERS;
OBJECT; OPTIMIZATION; MECHANISMS; DYNAMICS; MODEL; TIPS
AB Robot hands have been one of the major research topics since the beginning of
robotics because grasping and manipulation of a variety of objects by robot hands
are fundamental functionalities of various robotic systems. This paper presents a
survey on the current state of research on control of grasping and manipulation by
multifingered robot hands. After a brief history of the hardware development of
multifingered robot hands, representative theoretical research results are
presented in the area of grasping and manipulation. Regarding grasping, basic
analytical concepts including force/form closures and active/passive closures are
explained and various grasp quality measures for grasping position optimization are
introduced. Regarding manipulation, the hybrid position/force control method and
impedance control method are presented. Some of our recent results on grasping and
manipulation by a soft-fingered hand are also presented. Finally, some future
research directions are discussed. (C) 2010 Elsevier Ltd. All rights reserved.
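To make the impedance control idea surveyed above concrete, a one-degree-of-freedom sketch
follows; the fingertip mass, target impedance, object stiffness and commanded position are
illustrative assumptions rather than parameters of any particular hand.

    m = 0.05                          # assumed fingertip effective mass [kg]
    Md, Dd, Kd = 0.05, 4.0, 200.0     # desired (target) impedance parameters
    k_obj, x_obj = 1500.0, 0.008      # assumed object stiffness [N/m] and surface location [m]
    x_d, dt = 0.010, 0.001            # commanded position pushes 2 mm past the surface

    x, v = 0.0, 0.0
    for _ in range(3000):
        f_ext = -k_obj * max(x - x_obj, 0.0)         # reaction force once the object is touched
        e, de = x - x_d, v
        acc_des = (f_ext - Dd * de - Kd * e) / Md    # target impedance: Md e'' + Dd e' + Kd e = f_ext
        f_cmd = m * acc_des - f_ext                  # actuator force that realizes this acceleration
        v += (f_cmd + f_ext) / m * dt
        x += v * dt
    print("fingertip position [m]:", round(x, 5), " contact force [N]:", round(k_obj * (x - x_obj), 3))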
C1 Ritsumeikan Univ, Dept Human & Comp Intelligence, Shiga 5258577, Japan.
RP Yoshikawa, T (reprint author), Ritsumeikan Univ, Dept Human & Comp Intelligence,
Shiga 5258577, Japan.
EM yoshikawa@ci.ritsumei.ac.jp
CR AKELLA P, 1989, PROCEEDINGS - 1989 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOL 1-3, P764, DOI 10.1109/ROBOT.1989.100076
Arimoto S, 2000, ROBOTICA, V18, P71, DOI 10.1017/S0263574799002441
Biagiotti L, 2005, SPR TRA ADV ROBOT, V18, P55
Bicchi A, 2000, IEEE T ROBOTIC AUTOM, V16, P652, DOI 10.1109/70.897777
Buss M, 1996, IEEE T ROBOTIC AUTOM, V12, P406, DOI 10.1109/70.499823
FERRARI C, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P2290, DOI 10.1109/ROBOT.1992.219918
Gouaillier D., 2009, P IEEE INT C ROB AUT, P769, DOI DOI
10.1109/ROBOT.2009.5152516
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirzinger G., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P46, DOI 10.1109/ROBOT.2000.844038
HOGAN N, 1985, J DYN SYST-T ASME, V107, P1, DOI 10.1115/1.3140702
HOSHINO K, 2005, J ROBOTICS MECHATRON, V17, P655
Ishida T, 2004, PROCEEDINGS OF THE 2004 INTERNATIONAL SYMPOSIUM ON MICRO-
NANOMECHATRONICS AND HUMAN SCIENCE, P23
Iwata H., 2009, P IEEE INT C ROB AUT, P580
Jacobsen S. C., 1986, Proceedings 1986 IEEE International Conference on Robotics
and Automation (Cat. No.86CH2282-2), P1520
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Kaneko K, 2007, IEEE INT CONF ROBOT, P913, DOI 10.1109/ROBOT.2007.363102
Kawasaki H, 2002, IEEE-ASME T MECH, V7, P296, DOI 10.1109/TMECH.2002.802720
KIRKPATRICK DG, 1990, PROCEEDINGS OF THE TWENTY SECOND ANNUAL ACM SYMPOSIUM ON
THEORY OF COMPUTING, P341, DOI 10.1145/100216.100261
LAURENTINI A, 1994, IEEE T PATTERN ANAL, V16, P150, DOI 10.1109/34.273735
LI ZI, 1988, IEEE T ROBOTIC AUTOM, V4, P32, DOI 10.1109/56.769
LI ZX, 1989, INT J ROBOT RES, V8, P33, DOI 10.1177/027836498900800402
Liu H, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P3692, DOI 10.1109/IROS.2008.4650624
Liu H., 1999, Proceedings 1999 IEEE/RSJ International Conference on Intelligent
Robots and Systems. Human and Environment Friendly Robots with High Intelligence
and Emotional Quotients (Cat. No.99CH36289), P106, DOI 10.1109/IROS.1999.812989
Liu T., 2009, P 2009 IEEE RSJ INT, P1691
Lovchik CS, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P907, DOI 10.1109/ROBOT.1999.772420
Mason MT, 1985, ROBOT HANDS MECH MAN
MILLER AT, 2000, P ASME INT MECH ENG, P1251
Murakami K, 2003, IEEE INT CONF ROBOT, P708
Murray RM, 1994, MATH INTRO ROBOTIC M
NAKAMURA Y, 1989, INT J ROBOT RES, V8, P44, DOI 10.1177/027836498900800204
OKADA T, 1979, IEEE T SYST MAN CYB, V9, P79, DOI 10.1109/TSMC.1979.4310152
PARK YC, 1992, INT J ROBOT RES, V11, P163, DOI 10.1177/027836499201100301
POLLARD NS, 1994, THESIS MIT CAMBRIDGE
RAIBERT MH, 1981, J DYN SYST-T ASME, V103, P126, DOI 10.1115/1.3139652
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
SCHNEIDER SA, 1992, IEEE T ROBOTIC AUTOM, V8, P383, DOI 10.1109/70.143355
*SHADOW, 2010, SHAD DEXT HAND
Shimoga KB, 1996, INT J ROBOT RES, V15, P230, DOI 10.1177/027836499601500302
Shimoga KB, 1996, INT J ROBOT RES, V15, P320, DOI 10.1177/027836499601500402
SINHA PR, 1992, IEEE T ROBOTIC AUTOM, V8, P7, DOI 10.1109/70.127235
Strandberg M, 2006, IEEE T ROBOT, V22, P461, DOI 10.1109/TRO.2006.870665
SUGIYAMA S, 2009, P IEEE INT C ROB AUT, P2487
Ueki S., 2009, J ROBOTICS MECHATRON, V21, P36
Watanabe T, 2007, IEEE T AUTOM SCI ENG, V4, P52, DOI 10.1109/TASE.2006.873005
WILLIAMS D, 1993, PROCEEDINGS : IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-3, P1025, DOI 10.1109/ROBOT.1993.292110
Wimbock T, 2008, IEEE INT CONF ROBOT, P278, DOI 10.1109/ROBOT.2008.4543221
Xu J, 2007, IEEE INT CONF ROBOT, P211, DOI 10.1109/ROBOT.2007.363789
Xydas N, 1999, INT J ROBOT RES, V18, P941, DOI 10.1177/02783649922066673
YAMAZAKI Y, 2009, P IEEE INT C ROB BIO, P931
Yoshida M, 2008, IEEE INT CONF ROBOT, P1615
YOSHIKAWA T, 1993, INT J ROBOT RES, V12, P219, DOI 10.1177/027836499301200302
YOSHIKAWA T, 1987, IEEE J ROBOT AUTOM, V3, P386, DOI 10.1109/JRA.1987.1087120
YOSHIKAWA T, 1991, IEEE T ROBOTIC AUTOM, V7, P67, DOI 10.1109/70.68071
Yoshikawa T, 1999, J DYN SYST-T ASME, V121, P418, DOI 10.1115/1.2802490
Yoshikawa T., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P369, DOI 10.1109/ROBOT.2000.844084
Yoshikawa T, 1999, IEEE T ROBOTIC AUTOM, V15, P941, DOI 10.1109/70.795797
YOSHIKAWA T, 1988, IEEE T ROBOTIC AUTOM, V4, P699, DOI 10.1109/56.9307
Yoshikawa T., 2007, P IEEE INT C ADV ROB, P1165
Yoshikawa T., 1990, FDN ROBOTICS
Yoshikawa T, 2008, IEEE INT CONF ROBOT, P299
Zhao Y, 2009, IEEE T NEURAL NETWOR, V20, P758, DOI 10.1109/TNN.2008.2012127
NR 61
TC 57
Z9 57
U1 3
U2 56
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 1367-5788
J9 ANNU REV CONTROL
JI Annu. Rev. Control
PD DEC
PY 2010
VL 34
IS 2
BP 199
EP 208
DI 10.1016/j.arcontrol.2010.09.001
PG 10
WC Automation & Control Systems
SC Automation & Control Systems
GA 697CZ
UT WOS:000285491500002
DA 2018-01-22
ER

PT J
AU Lee, D
Ott, C
Nakamura, Y
AF Lee, Dongheui
Ott, Christian
Nakamura, Yoshihiko
TI Mimetic Communication Model with Compliant Physical Contact in
Human-Humanoid Interaction
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE Mimetic communication; physical human-robot interaction; imitation
learning; marker control; impedance control
ID HIDDEN MARKOV-MODELS; AUTONOMOUS ROBOTS; IMPEDANCE CONTROL; PREMOTOR
CORTEX; MIMESIS MODEL; RECOGNITION; IMITATION; PRIMITIVES; DYNAMICS;
TASKS
AB In this paper we aim at extending imitation learning to physical human-robot
interaction (pHRI) including contact transitions (from non-contact to contact and
vice versa). For interactive learning of pHRI, the paper raises four key issues:
(1) motion imitation, (2) understanding motion primitives, (3) understanding
interaction primitives, and (4) physical contact establishment. These issues are
solved by (1) marker control, (2) mimesis model, (3) mimetic communication model,
and (4) real-time motion reshaping and impedance control, respectively. The simple
human motion imitation is realized by a direct marker control method in which the
robot is virtually connected to the markers attached to the human via virtual
springs. Learning procedures are based on "imitation of a human" and "active
involvement" of the robot during the learning. The "imitation of a human" scheme
provides efficient learning. The "active involvement" scheme supports incremental
learning, and it also enables the robot to acquire sensory information about physical contacts.
By modifying the mimetic communication model proposed by Nakamura et al., we
achieve communication in physical domain as well as the symbolic domain. The
communication in the symbolic domain is realized through the concept of motion
primitives and interaction primitives. In the physical domain, the trajectory of
the motion primitive is reshaped in accordance with the human's motions in real-
time. Moreover, for performing compliant contact motion, an appropriate impedance
controller is integrated into the setting. All of the presented concepts are
applied to "high five"-like interaction tasks and evaluated in experiments with a
human-size humanoid robot.
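A planar two-link sketch of the direct marker control scheme described above (link lengths,
spring stiffness, damping and the static human pose are assumptions): each robot marker is
pulled toward the corresponding human marker by a virtual spring, and the spring forces are
mapped to joint torques through the marker Jacobian.

    import numpy as np

    L1, L2 = 0.30, 0.30               # assumed link lengths [m]
    K, D, dt = 200.0, 2.0, 0.002      # virtual spring stiffness, joint damping, time step

    def markers(q):
        """Elbow and hand marker positions of a planar two-link arm, stacked as a 4-vector."""
        p1 = np.array([L1 * np.cos(q[0]), L1 * np.sin(q[0])])
        p2 = p1 + np.array([L2 * np.cos(q[0] + q[1]), L2 * np.sin(q[0] + q[1])])
        return np.concatenate([p1, p2])

    def marker_jacobian(q, eps=1e-6):
        J = np.zeros((4, 2))
        for i in range(2):
            dq = np.zeros(2)
            dq[i] = eps
            J[:, i] = (markers(q + dq) - markers(q - dq)) / (2 * eps)
        return J

    human = markers(np.array([0.8, -0.5]))       # captured human markers (assumed static pose)
    q, dq = np.zeros(2), np.zeros(2)
    for _ in range(3000):
        f = K * (human - markers(q))             # virtual spring force on each marker
        tau = marker_jacobian(q).T @ f - D * dq  # map to joint torques and add damping
        dq += tau * dt                           # unit joint inertia assumed
        q += dq * dt
    print(q)                                     # settles near the human joint configuration [0.8, -0.5]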
C1 [Ott, Christian] German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234
Wessling, Germany.
[Lee, Dongheui] Tech Univ Munich, Dept Elect Engn & Informat Technol, Munich,
Germany.
[Nakamura, Yoshihiko] Univ Tokyo, Dept Mechanoinformat, Tokyo, Japan.
RP Ott, C (reprint author), German Aerosp Ctr DLR, Inst Robot & Mechatron, POB
1116, D-82234 Wessling, Germany.
EM christian.ott@dlr.de
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
FU IRT Foundation to Support Man and Aging Society
FX The authors would like to thank Wataru Takano for his valuable
discussion and Akihiko Murai for his supports in the operation of the
motion capture system. This research is partly supported by Special
Coordination Funds for Promoting Science and Technology, "IRT Foundation
to Support Man and Aging Society".
CR Albu-Schaffer A, 2008, IEEE ROBOT AUTOM MAG, V15, P20, DOI
10.1109/MRA.2008.927979
Albu-Schaffer A, 2003, IEEE INT CONF ROBOT, P3704
Arnold V. I., 1989, MATH METHODS CLASSIC
ASFOUR T, 2006, IEEE RAS INT C HUM R, P40
Bicchi A, 2004, IEEE ROBOT AUTOM MAG, V11, P22, DOI 10.1109/MRA.2004.1310939
Billard A, 1999, ADAPT BEHAV, V7, P35, DOI 10.1177/105971239900700103
Billard A, 1998, ROBOT AUTON SYST, V24, P71, DOI 10.1016/S0921-8890(98)00023-2
Billard A., 2008, HDB ROBOTICS
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
CALINON S, 2009, IEEE INT C ADV ROB
Calinon S., 2007, IEEE INT C ROB HUM I, P702
Calinon S, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P367, DOI
10.1109/IROS.2008.4650593
Cheng G., 2006, IEEE RAS INT C HUM R
COX MF, 2001, MULTIDIMENSIONAL SCA
Dariush B, 2008, IEEE INT CONF ROBOT, P2677, DOI 10.1109/ROBOT.2008.4543616
DEMIRCAN E, 2008, ADV ROBOT KINEMATICS
DEMPSTER AP, 1977, J ROY STAT SOC B MET, V39, P1
Dillmann R, 2004, ROBOT AUTON SYST, V47, P109, DOI 10.1016/j.robot.2004.03.005
Dixon KR, 2004, INT J ROBOT RES, V23, P955, DOI 10.1177/0278364904044401
Donald M., 1991, ORIGINS MODERN MIND
Eggert DW, 1997, MACH VISION APPL, V9, P272, DOI 10.1007/s001380050048
Fod A, 2002, AUTON ROBOT, V12, P39, DOI 10.1023/A:1013254724861
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
Harville D. A., 1997, MATRIX ALGEBRA STAT
Hirzinger G., 2002, IEEE INT C ROB AUT, P1710
HOGAN N, 1985, J DYN SYST-T ASME, V107, P1, DOI 10.1115/1.3140702
HORN BKP, 1987, J OPT SOC AM A, V4, P629, DOI 10.1364/JOSAA.4.000629
HUNT KH, 1975, J APPL MECH-T ASME, V42, P440, DOI 10.1115/1.3423596
Ijspeert A. J., 2002, IEEE INT C ROB AUT, P1398
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
INAMURA T, 2003, IEEE RSJ INT C HUM R
Janus B., 2005, 12 IEEE INT C ADV RO, P411
Kanehiro F, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P660, DOI
10.1109/IROS.2008.4650950
Khatib O., 1987, IEEE J ROBOTIC AUTOM, V3, P1115
Kosuge K, 2004, IEEE ROBIO 2004: PROCEEDINGS OF THE IEEE INTERNATIONAL
CONFERENCE ON ROBOTICS AND BIOMIMETICS, P8
Kuffner J, 2001, IEEE INT CONF ROBOT, P692, DOI 10.1109/ROBOT.2001.932631
KULIC D, 2007, IEEE INT S ROB HUM I
KULIC D, 2007, IEEE RSJ INT C INT R
Kulic D, 2008, IEEE INT CONF ROBOT, P2591, DOI 10.1109/ROBOT.2008.4543603
KUNORI H, 2009, IEEE RSJ INT C INT R, P5240
Lee D., 2009, IEEE INT C ROB AUT, P1535
LEE D, 2005, IEEE RSJ INT C INT R, P1911
Lee D, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P2867, DOI 10.1109/IROS.2008.4650611
Lee D, 2008, IEEE INT CONF ROBOT, P1722, DOI 10.1109/ROBOT.2008.4543449
Lee D, 2010, INT J ROBOT RES, V29, P60, DOI 10.1177/0278364909342282
MARHEFKA DW, 1999, IEEE T SYST MAN CY A, V29, P565
Murray RM, 1994, MATH INTRO ROBOTIC M
Nagai Y, 2003, CONNECT SCI, V15, P211, DOI 10.1080/09540090310001655101
Nakamura Y, 2000, IEEE T ROBOTIC AUTOM, V16, P124, DOI 10.1109/70.843167
NAKAMURA Y, 2005, INT S ROB RES, P12
Nakaoka S., 2005, IEEE RSJ INT C INT R, P2769
OKADA M, 2002, IEEE INT C ROB AUT, P1410
Ott Ch, 2006, IEEE RAS INT C HUM R, P276
Ott Ch., 2008, IEEE RAS INT C HUM R, P399
PETERS J, 2003, REINFORCEMENT LEARNI
POLLARD N, 2002, IEEE INT C ROB AUT, P1390
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
RIZZOLATTI G, 1998, TREND NEUROSCIENCES, V21, P151
SANTIS AD, 2008, MECH MACH THEORY, V43
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
Shibata T, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P2868, DOI 10.1109/ROBOT.1999.774032
Sumioka H., 2008, J ROBOT MECHATRON, V20, P378
Takano W., 2006, IEEE RAS INT C HUM R, P425
Takano W, 2006, IEEE INT CONF ROBOT, P3602, DOI 10.1109/ROBOT.2006.1642252
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Zinn M, 2004, IEEE ROBOT AUTOM MAG, V11, P12, DOI 10.1109/MRA.2004.1310938
NR 70
TC 23
Z9 23
U1 1
U2 7
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD NOV
PY 2010
VL 29
IS 13
BP 1684
EP 1704
DI 10.1177/0278364910364164
PG 21
WC Robotics
SC Robotics
GA 675RB
UT WOS:000283847100006
DA 2018-01-22
ER

PT J
AU Harada, K
Hattori, S
Hirukawa, H
Morisawa, M
Kajita, S
Yoshida, E
AF Harada, Kensuke
Hattori, Shizuko
Hirukawa, Hirohisa
Morisawa, Mitsuharu
Kajita, Shuuji
Yoshida, Eiichi
TI Two-Stage Time-Parametrized Gait Planning for Humanoid Robots
SO IEEE-ASME TRANSACTIONS ON MECHATRONICS
LA English
DT Article
DE Biped gait; humanoid robot; motion planning
ID WALKING; AVOIDANCE; OBSTACLE
AB In this paper, we propose a framework for planning collision-free walking motion
for biped humanoid robots. Our proposed planner is composed of two phases. In the
first phase, the constraint condition is generated as a function of time by using
the walking pattern generator. In the second phase, the collision-free walking
motion is planned. To generate the collision-free motion, we add a time parameter
to each milestone of the single-query, bidirectional, lazy collision checking
planner in order to explicitly consider the time-parametrized constraint
conditions. Further, we smooth the generated path by using B-spline
interpolation. Through experimental results, we show that our planner is effective
in realizing collision-free walking motion by real humanoid robots.
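The smoothing step can be sketched in a few lines; the milestone times and joint values
below are made up, and the sampling-based planner itself is not reproduced. Time-
parametrized milestones are interpolated with a cubic B-spline to obtain a smooth joint
trajectory.

    import numpy as np
    from scipy.interpolate import make_interp_spline

    t_milestones = np.array([0.0, 0.4, 0.9, 1.5, 2.0])            # times attached to milestones [s]
    q_milestones = np.array([[0.0, 0.1, 0.35, 0.5, 0.6],           # joint 1 values at milestones
                             [0.0, -0.2, -0.25, -0.1, 0.0]]).T     # joint 2 values at milestones

    spline = make_interp_spline(t_milestones, q_milestones, k=3)   # cubic B-spline through milestones
    t_fine = np.linspace(0.0, 2.0, 11)
    print(spline(t_fine).round(3))                                  # smooth joint trajectory samples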
C1 [Harada, Kensuke; Hattori, Shizuko; Hirukawa, Hirohisa; Morisawa, Mitsuharu;
Kajita, Shuuji; Yoshida, Eiichi] Natl Inst Adv Ind Sci & Technol, Intelligent Syst
Res Inst, Humanoid Res Grp, Tsukuba, Ibaraki 3058568, Japan.
RP Harada, K (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst
Res Inst, Humanoid Res Grp, Tsukuba, Ibaraki 3058568, Japan.
EM kensuke.harada@aist.go.jp
RI Yoshida, Eiichi/M-3756-2016; Morisawa, Mitsuharu/M-3327-2016; Kajita,
Shuuji/M-5010-2016
OI Yoshida, Eiichi/0000-0002-3077-6964; Morisawa,
Mitsuharu/0000-0003-0056-4335; Kajita, Shuuji/0000-0001-8188-2209
CR CHESTNUTT J, 2005, P IEEE INT C ROB AUT, P3618
Cortes J, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P2141, DOI 10.1109/ROBOT.2002.1014856
Franchi A, 2009, IEEE-ASME T MECH, V14, P163, DOI 10.1109/TMECH.2009.2013617
Harada K., 2007, P IEEE RSJ INT C INT, P4227
Harada K., 2004, P IEEE RAS INT C HUM, P640
Harada K, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT
SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P1544, DOI 10.1109/IROS.2008.4650862
Harada K, 2007, IEEE-ASME T MECH, V12, P53, DOI 10.1109/TMECH.2006.886254
HAUSER K, 2006, P WORKSH ALG FDN ROB, P507
Hauser K., 2005, P IEEE RAS INT C HUM, P7
Hirukawa H, 2006, IEEE INT CONF ROBOT, P1976, DOI 10.1109/ROBOT.2006.1641995
Hirukawa H, 2007, IEEE INT CONF ROBOT, P2181, DOI 10.1109/ROBOT.2007.363644
KAGAMI S, 2000, P IEEE INT C HUM ROB, P190
KAGAMI S, 2010, INT WORKSH ALG FDN R
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kanehiro F, 2008, 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND
INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, P660, DOI
10.1109/IROS.2008.4650950
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
KHATIB O, 1986, INT J ROBOT RES, V5, P90, DOI 10.1177/027836498600500106
Kindel R., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P537, DOI 10.1109/ROBOT.2000.844109
Kuffner J, 2001, IEEE INT CONF ROBOT, P692, DOI 10.1109/ROBOT.2001.932631
KUFFNER J, 2003, P INT S ROB RES, P4420
Kuffner J. J. Jr., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P995, DOI 10.1109/ROBOT.2000.844730
LaValle SM, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P1671, DOI 10.1109/ROBOT.1999.770349
LaValle S. M., 2006, PLANNING ALGORITHMS
LIU G, 2006, P 10 INT S ADV ROB K, P201
Nagasaka K, 2004, IEEE INT CONF ROBOT, P3189, DOI 10.1109/ROBOT.2004.1308745
SANADA H, 2007, P IEEE RSJ INT C INT, P4028
SANCHEZ G, P INT S ROB RES ISRR, P403
Sentis L, 2006, IEEE INT CONF ROBOT, P2641, DOI 10.1109/ROBOT.2006.1642100
Stasse O, 2008, IEEE INT CONF ROBOT, P3200, DOI 10.1109/ROBOT.2008.4543698
Stilman M, 2007, ADV ROBOTICS, V21, P1617, DOI 10.1163/156855307782227408
SUGIHARA T, P IEEE INT C ROB AUT, P306
TAKANISHI A, P IEEE INT WORKSH IN, P323
Yoshida E., 2005, P 2005 IEEE RAS INT, P1
Yoshida E., 2005, P 2005 IEEE RSJ INT, P25
MOTION PLANNING KIT
NR 37
TC 17
Z9 19
U1 0
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1083-4435
EI 1941-014X
J9 IEEE-ASME T MECH
JI IEEE-ASME Trans. Mechatron.
PD OCT
PY 2010
VL 15
IS 5
BP 694
EP 703
DI 10.1109/TMECH.2009.2032180
PG 10
WC Automation & Control Systems; Engineering, Manufacturing; Engineering,
Electrical & Electronic; Engineering, Mechanical
SC Automation & Control Systems; Engineering
GA 692HA
UT WOS:000285142700004
DA 2018-01-22
ER

PT J
AU Takumi, T
AF Takumi, Toru
TI A humanoid mouse model of autism
SO BRAIN & DEVELOPMENT
LA English
DT Article; Proceedings Paper
CT 12th Annual Meeting of the Infantile-Seizure-Society
CY MAY 09-10, 2009
CL Kurume, JAPAN
SP Infantile Seizure Soc
DE Autism; Chromosome abnormality; Mouse model; Social behavior; Serotonin
ID SPECTRUM DISORDERS; DEVELOPMENTAL NEUROBIOLOGY; SOCIAL-INTERACTION;
ABNORMALITIES; GENETICS; RECEPTOR; BRAIN; MICE; DUPLICATION; SEROTONIN
AB Even now that the fruit of the Human Genome Project is available, we have
difficulty approaching neuropsychiatric disorders at the molecular level. Autism is
a complex psychiatric illness, but it has received considerable attention as a
developmental brain disorder, not only from basic researchers but also from society.
Substantial evidence suggests that chromosomal abnormalities contribute to autism
risk. The duplication of human chromosome 15q11-13 is known to be the most frequent
cytogenetic abnormality in autism. We succeeded in generating mice with a 6.3-Mb-wide
interstitial duplication in mouse chromosome 7c, which is highly syntenic to human
15q11-13, by using a Cre-loxP-based chromosome-engineering technique. Only the
paternally duplicated mice display autistic behavioral features, such as poor social
interaction and stereotypical behavior, and they exhibit a developmental abnormality
in ultrasonic vocalizations as well as anxiety. A detailed analysis focusing on a
non-coding small nucleolar RNA, MBII52, within the duplicated region revealed that
the paternally duplicated mice have an altered editing ratio of serotonin (5-HT) 2c
receptor pre-mRNA and that intracellular calcium responses to a 5-HT2c
receptor-specific agonist are changed in their neurons. This result may explain one
of the molecular mechanisms underlying the abnormal behaviors of the paternally
duplicated mice. This first chromosome-engineered mouse model of the human chromosome
15q11-13 duplication fulfills not only face validity with respect to human autistic
phenotypes but also construct validity based on a human chromosome abnormality. The
model will be a founder mouse for forward genetics of autistic disease and an
invaluable tool for its therapeutic development. (C) 2010 Elsevier B.V. All rights
reserved.
C1 [Takumi, Toru] Hiroshima Univ, Grad Sch Biomed Sci, Lab Integrat Biosci,
Hiroshima 7348553, Japan.
[Takumi, Toru] Japan Sci & Technol Agcy, Core Res Evolut Sci & Technol, Tokyo,
Japan.
RP Takumi, T (reprint author), Hiroshima Univ, Grad Sch Biomed Sci, Lab Integrat
Biosci, Hiroshima 7348553, Japan.
EM takumi@hiroshima-u.ac.jp
CR Belmonte MK, 2004, MOL PSYCHIATR, V9, P646, DOI 10.1038/sj.mp.4001499
Bolton PF, 2004, PSYCHIAT GENET, V14, P131, DOI 10.1097/00041444-200409000-00002
Crawley JN, 2004, MENT RETARD DEV D R, V10, P248, DOI 10.1002/mrdd.20039
Cryan JF, 2000, J PHARMACOL EXP THER, V295, P1120
Di Giovanni G, 2006, CURR TOP MED CHEM, V6, P1909
DiCicco-Bloom E, 2006, J NEUROSCI, V26, P6897, DOI 10.1523/JNEUROSCI.1712-
06.2006
Dykens EM, 2004, MENT RETARD DEV D R, V10, P284, DOI 10.1002/mrdd.20042
Folstein SE, 2001, NAT REV GENET, V2, P943, DOI 10.1038/35103559
Geschwind DH, 2007, CURR OPIN NEUROBIOL, V17, P103, DOI
10.1016/j.conb.2007.01.009
Kayashima T, 2003, J HUM GENET, V48, P492, DOI 10.1007/s10038-003-0061-z
Kishore S, 2006, SCIENCE, V311, P230, DOI 10.1126/science.1118265
Kwon CH, 2006, NEURON, V50, P377, DOI 10.1016/j.neuron.2006.03.023
Lee JA, 2006, NEURON, V52, P103, DOI 10.1016/j.neuron.2006.09.027
Levitt P, 2004, TRENDS NEUROSCI, V27, P400, DOI 10.1016/j.tins.2004.05.008
Lijam N, 1997, CELL, V90, P895, DOI 10.1016/S0092-8674(00)80354-2
Lord C, 2000, NEURON, V28, P355, DOI 10.1016/S0896-6273(00)00115-X
Maestrini E, 2000, NEURON, V28, P19, DOI 10.1016/S0896-6273(00)00081-7
Mao R, 2000, GENET MED, V2, P131, DOI 10.1097/00125817-200003000-00003
McDougle CJ, 2005, J CLIN PSYCHIAT, V66, P9
Mohandas TK, 1999, AM J MED GENET, V82, P294, DOI 10.1002/(SICI)1096-
8628(19990212)82:4<294::AID-AJMG4>3.0.CO;2-U
Moy SS, 2006, AM J MED GENET C, V142C, P40, DOI 10.1002/ajmg.c.30081
Murcia CL, 2005, INT J DEV NEUROSCI, V23, P265, DOI
10.1016/j.ijdevneu.2004.07.001
Nicholls RD, 2001, ANNU REV GENOM HUM G, V2, P153, DOI
10.1146/annurev.genom.2.1.153
Persico AM, 2006, TRENDS NEUROSCI, V29, P349, DOI 10.1016/j.tins.2006.05.010
Polleux F, 2004, MENT RETARD DEV D R, V10, P303, DOI 10.1002/mrdd.20044
Roberts SE, 2002, HUM GENET, V110, P227, DOI 10.1007/s00439-001-0678-6
Rubenstein JLR, 2003, GENES BRAIN BEHAV, V2, P255, DOI 10.1046/j.1601-
183X.2003.00037.x
Sadock BJ, 2003, KAPLAN SADOCKS SYNOP
Santarelli L, 2003, SCIENCE, V301, P805, DOI 10.1126/science.1083328
Schmauss C, 2005, INT REV NEUROBIOL, V63, P83, DOI 10.1016/S0074-7742(05)63004-8
Sebat J, 2007, SCIENCE, V316, P445, DOI 10.1126/science.1138659
Seeburg PH, 2002, NEURON, V35, P17, DOI 10.1016/S0896-6273(02)00760-2
Tabuchi K, 2007, SCIENCE, V318, P71, DOI 10.1126/science.1146221
Veenstra-VanderWeele J, 2004, ANNU REV GENOM HUM G, V5, P379, DOI
10.1146/annurev.genom5.061903.180050
Veenstra-VanderWeele J, 2004, MOL PSYCHIATR, V9, P819, DOI 10.1038/sj.mp.4001505
Veltman MWM, 2005, J AUTISM DEV DISORD, V35, P117, DOI 10.1007/s10803-004-1039-1
Vorstman JAS, 2006, MOL PSYCHIATR, V11, P18, DOI 10.1038/sj.mp.4001757
Whitaker-Azmitia PM, 2005, INT J DEV NEUROSCI, V23, P75, DOI
10.1016/j.ijdevneu.2004.07.022
WINSLOW JT, 1990, J PHARMACOL EXP THER, V254, P212
Winslow JT, 2002, NEUROPEPTIDES, V36, P221, DOI 10.1054/npep.2002.0909
Young LJ, 2007, NEURON, V54, P353, DOI 10.1016/j.neuron.2007.04.011
NR 41
TC 10
Z9 13
U1 0
U2 6
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0387-7604
J9 BRAIN DEV-JPN
JI Brain Dev.
PD OCT
PY 2010
VL 32
IS 9
BP 753
EP 758
DI 10.1016/j.braindev.2010.05.001
PG 6
WC Clinical Neurology
SC Neurosciences & Neurology
GA 651ID
UT WOS:000281920500008
PM 20542394
DA 2018-01-22
ER

PT J
AU Ude, A
Gams, A
Asfour, T
Morimoto, J
AF Ude, Ales
Gams, Andrej
Asfour, Tamim
Morimoto, Jun
TI Task-Specific Generalization of Discrete and Periodic Dynamic Movement
Primitives
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Active vision on humanoid robots; humanoid robots; imitation learning;
learning and adaptive systems
ID HUMANOID ROBOTS; HUMAN-PERFORMANCE; MODEL; IMITATION; MOTION; VISION
AB Acquisition of new sensorimotor knowledge by imitation is a promising paradigm
for robot learning. To be effective, action learning should not be limited to
direct replication of movements obtained during training but must also enable the
generation of actions in situations a robot has never encountered before. This
paper describes a methodology that enables the generalization of the available
sensorimotor knowledge. New actions are synthesized by the application of
statistical methods, where the goal and other characteristics of an action are
utilized as queries to create a suitable control policy, taking into account the
current state of the world. Nonlinear dynamic systems are employed as a motor
representation. The proposed approach enables the generation of a wide range of
policies without requiring an expert to modify the underlying representations to
account for different task-specific features and perceptual feedback. The paper
also demonstrates that the proposed methodology can be integrated with an active
vision system of a humanoid robot. 3-D vision data are used to provide query points
for statistical generalization. While 3-D vision on humanoid robots with complex
oculomotor systems is often difficult due to the modeling uncertainties, we show
that these uncertainties can be accounted for by the proposed approach.
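As a minimal, hypothetical sketch of the dynamic-movement-primitive representation
referred to in the abstract above (a one-dimensional discrete DMP rollout; the gains,
basis functions, and weights are placeholders, and this is not the authors'
task-specific generalization scheme):

import numpy as np

def dmp_rollout(y0, g, weights, centers, widths, tau=1.0, dt=0.005,
                alpha_z=25.0, beta_z=6.25, alpha_s=4.0, T=1.5):
    """Integrate a 1-D discrete DMP: spring-damper toward the goal g plus a
    learned forcing term driven by the phase variable s."""
    y, z, s = y0, 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)                 # RBF activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * s * (g - y0)   # forcing term
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        s += dt / tau * (-alpha_s * s)                             # canonical system
        traj.append(y)
    return np.array(traj)

# Placeholder parameters: 10 basis functions with arbitrary weights.
rng = np.random.default_rng(0)
centers = np.exp(-4.0 * np.linspace(0.0, 1.0, 10))
widths = 1.0 / (np.diff(centers, append=centers[-1] / 2.0) ** 2 + 1e-6)
path = dmp_rollout(y0=0.0, g=1.0, weights=rng.normal(0.0, 5.0, 10),
                   centers=centers, widths=widths)
print(path[-1])  # should end near the goal g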
C1 [Ude, Ales] Jozef Stefan Inst, Dept Automat Biocybernet & Robot, Ljubljana 1000,
Slovenia.
[Ude, Ales; Morimoto, Jun] Adv Telecommun Res Inst Int, Computat Neurosci Labs,
Kyoto 6190288, Japan.
[Asfour, Tamim] Karlsruhe Inst Technol, Inst Anthropomat, D-76131 Karlsruhe,
Germany.
RP Ude, A (reprint author), Jozef Stefan Inst, Dept Automat Biocybernet & Robot,
Ljubljana 1000, Slovenia.
EM ales.ude@ijs.si; andrej.gams@ijs.si; asfour@kit.edu; xmorimo@atr.jp
OI Gams, Andrej/0000-0002-9803-3593; Ude, Ales/0000-0003-3677-3972
FU European Union [FP6-2004-IST-4-027657]; National Institute of
Information and Communications Technology within the JAPAN TRUST
FX This work was supported in part by the European Union Cognitive Systems
Project PACO-PLUS under Grant FP6-2004-IST-4-027657. The first author
(A. Ude) was also supported by the National Institute of Information and
Communications Technology within the JAPAN TRUST International Research
Cooperation Program.
CR Aboaf E. W., 1988, Proceedings of the 1988 IEEE International Conference on
Robotics and Automation (Cat. No.88CH2555-1), P1309, DOI 10.1109/ROBOT.1988.12245
Albu-Schaffer A, 2007, INT J ROBOT RES, V26, P23, DOI 10.1177/0278364907073776
Asfour T, 2008, INT J HUM ROBOT, V5, P183, DOI 10.1142/S0219843608001431
Atkeson CG, 1997, ARTIF INTELL REV, V11, P11, DOI 10.1023/A:1006559212014
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
D'Souza A, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P298, DOI
10.1109/IROS.2001.973374
Dillmann R, 2004, ROBOT AUTON SYST, V47, P109, DOI 10.1016/j.robot.2004.03.005
FLASH T, 1985, J NEUROSCI, V5, P1688
Gams A, 2009, AUTON ROBOT, V27, P3, DOI 10.1007/s10514-009-9118-y
Guenter F, 2007, ADV ROBOTICS, V21, P1521
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Hersch M, 2008, IEEE T ROBOT, V24, P1463, DOI 10.1109/TRO.2008.2006703
Hoffman J. D., 2001, NUMERICAL METHODS EN
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Ijspeert AJ, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P958, DOI 10.1109/IRDS.2002.1041514
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Kaiser M, 1996, IEEE INT CONF ROBOT, P2700, DOI 10.1109/ROBOT.1996.506570
KANG SB, 1995, IEEE T ROBOTIC AUTOM, V11, P670, DOI 10.1109/70.466599
KOBER J, 2008, P IEEE RSJ INT C INT, P834
Kulic D, 2008, INT J ROBOT RES, V27, P761, DOI 10.1177/0278364908091153
Kulic D, 2009, IEEE T ROBOT, V25, P1158, DOI 10.1109/TRO.2009.2026508
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
MISTRY M, 2005, P IEEE RSJ INT C INT, P4071
Miyamoto H, 1996, NEURAL NETWORKS, V9, P1281, DOI 10.1016/S0893-6080(96)00043-3
Moeslund TB, 2006, COMPUT VIS IMAGE UND, V104, P90, DOI
10.1016/j.cviu.2006.08.002
Mussa-Ivaldi FA, 2000, PHILOS T R SOC B, V355, P1755, DOI 10.1098/rstb.2000.0733
Nguyen-Tuong D, 2009, ADV ROBOTICS, V23, P2015, DOI
10.1163/016918609X12529286896877
OZTOP E, 2008, LECT NOTES COMPUTER, P214
PARK DH, 2008, P IEEE INT C HUM ROB, P91
Pastor P, 2009, P IEEE INT C ROB AUT, P763
PETERS J, 2009, INT S ROB RES LUC SW
Peters J, 2008, NEURAL NETWORKS, V21, P682, DOI 10.1016/j.neunet.2008.02.003
Peters J, 2008, INT J ROBOT RES, V27, P197, DOI 10.1177/0278364907087548
Poggio T, 2004, NATURE, V431, P768, DOI 10.1038/nature03014
Pollard NS, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1390, DOI 10.1109/ROBOT.2002.1014737
RAIBERT MH, 1978, BIOL CYBERN, V29, P29, DOI 10.1007/BF00365233
Rasmussen CE, 2005, ADAPT COMPUT MACH LE, P1
Righetti L, 2006, PHYSICA D, V216, P269, DOI [10.1016/j.physd.2006.02.009,
10.1016/j.phvsd.2006.02.009]
RILEY M, 2000, P 2000 WORKSH INT RO, P35
RILEY M, 2006, P IEEE RAS INT C HUM, P567
Ruchanurucks M, 2006, IEEE INT CONF ROBOT, P2649, DOI 10.1109/ROBOT.2006.1642101
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 1998, NEURAL COMPUT, V10, P2047, DOI 10.1162/089976698300016963
Schaal S, 2007, PROG BRAIN RES, V165, P425, DOI 10.1016/S0079-6123(06)65027-9
Siciliano B, 2000, MODELING CONTROL ROB
Scott D., 1992, MULTIVARIATE DENSITY
Ude A, 2004, ROBOT AUTON SYST, V47, P93, DOI 10.1016/j.robot.2004.03.004
UDE A, 2009, 14 INT C ADV ROB MUN
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
YAMANE K, 2009, ROB SCI SYST C SEATT
NR 54
TC 113
Z9 118
U1 2
U2 17
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD OCT
PY 2010
VL 26
IS 5
BP 800
EP 815
DI 10.1109/TRO.2010.2065430
PG 16
WC Robotics
SC Robotics
GA 669TP
UT WOS:000283373400003
DA 2018-01-22
ER

PT J
AU Cangelosi, A
Metta, G
Sagerer, G
Nolfi, S
Nehaniv, C
Fischer, K
Tani, J
Belpaeme, T
Sandini, G
Nori, F
Fadiga, L
Wrede, B
Rohlfing, K
Tuci, E
Dautenhahn, K
Saunders, J
Zeschel, A
AF Cangelosi, Angelo
Metta, Giorgio
Sagerer, Gerhard
Nolfi, Stefano
Nehaniv, Chrystopher
Fischer, Kerstin
Tani, Jun
Belpaeme, Tony
Sandini, Giulio
Nori, Francesco
Fadiga, Luciano
Wrede, Britta
Rohlfing, Katharina
Tuci, Elio
Dautenhahn, Kerstin
Saunders, Joe
Zeschel, Arne
TI Integration of Action and Language Knowledge: A Roadmap for
Developmental Robotics
SO IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT
LA English
DT Article
DE Action learning; humanoid robot; language development; roadmap; social
learning
ID AUTONOMOUS MENTAL-DEVELOPMENT; VENTRAL PREMOTOR CORTEX; CHILD-DIRECTED
SPEECH; NEURAL-NETWORKS; MIRROR NEURONS; INTERSENSORY REDUNDANCY;
COGNITIVE-DEVELOPMENT; SYNTACTIC DEVELOPMENT; SEMANTIC SIMILARITY;
GROUNDING LANGUAGE
AB This position paper proposes that the study of embodied cognitive agents, such
as humanoid robots, can advance our understanding of the cognitive development of
complex sensorimotor, linguistic, and social learning skills. This in turn will
benefit the design of cognitive robots capable of learning to handle and manipulate
objects and tools autonomously, to cooperate and communicate with other robots and
humans, and to adapt their abilities to changing internal, environmental, and
social conditions. Four key areas of research challenges are discussed,
specifically for the issues related to the understanding of: 1) how agents learn
and represent compositional actions; 2) how agents learn and represent
compositional lexica; 3) the dynamics of social interaction and learning; and 4)
how compositional action and language representations are integrated to bootstrap
the cognitive system. The review of specific issues and progress in these areas is
then translated into a practical roadmap based on a series of milestones. These
milestones provide a possible set of cognitive robotics goals and test scenarios,
thus acting as a research roadmap for future work on cognitive developmental
robotics.
C1 [Cangelosi, Angelo; Belpaeme, Tony] Univ Plymouth, Ctr Robot & Neural Syst,
Plymouth PL4 8AA, Devon, England.
[Metta, Giorgio; Sandini, Giulio; Nori, Francesco; Fadiga, Luciano] Italian Inst
Technol Robot Brain & Cognit Sci, I-16163 Genoa, Italy.
[Sagerer, Gerhard; Wrede, Britta; Rohlfing, Katharina] Univ Bielefeld, Appl Comp
Sci Grp, D-33611 Bielefeld, Germany.
[Nolfi, Stefano; Tuci, Elio] CNR, Inst Cognit Sci & Technol, I-00185 Rome,
Italy.
[Nehaniv, Chrystopher; Dautenhahn, Kerstin; Saunders, Joe] Univ Hertfordshire,
Adapt Syst Res Grp, Hatfield AL10 9AB, Herts, England.
[Fischer, Kerstin; Zeschel, Arne] Univ So Denmark, Dept Business Commun &
Informat Sci, DK-5230 Odense, Denmark.
[Tani, Jun] RIKEN, Brains Sci Inst, Wako, Saitama 3510198, Japan.
[Fadiga, Luciano] Univ Ferrara, Dept Human Physiol, I-44100 Ferrara, Italy.
RP Cangelosi, A (reprint author), Univ Plymouth, Ctr Robot & Neural Syst, Plymouth
PL4 8AA, Devon, England.
EM acangelosi@plymouth.ac.uk; giorgio.metta@iit.it;
sagerer@techfak.uni-bielefeld.de; stefano.nolfi@istc.cnr.it;
c.l.nehaniv@herts.ac.uk; kerstin@sitkom.sdu.dk; tani@brain.riken.jp;
tbelpaeme@plymouth.ac.uk; giulio.sandini@iit.it; francesco.nori@iit.it;
fdl@unife.it; bwrede@techfak.uni-bielefeld.de;
rohlfing@techfak.uni-bielefeld.de; elio.tuci@istc.cnr.it;
k.dautenhahn@herts.ac.uk; j.2.saunders@herts.ac.uk;
zeschel@sitkom.sdu.dk
RI Nori, Francesco/J-3416-2012; Belpaeme, Tony/A-3582-2014
OI Nori, Francesco/0000-0003-3763-6873; fadiga,
luciano/0000-0001-5691-5080; Dautenhahn, Kerstin/0000-0002-9263-3897;
Cangelosi, Angelo/0000-0002-4709-2243
FU EU [214886]
FX This work was supported by the EU Integrating Project "ITALK" (214886),
within the FP7 ICT program "Cognitive Systems, Interaction, and
Robotics."
CR Abbot-Smith K, 2008, COGNITIVE DEV, V23, P48, DOI 10.1016/j.cogdev.2007.11.002
Abbot-Smith K, 2006, LINGUIST REV, V23, P275, DOI 10.1515/TLR.2006.011
ADRIAANS P, 2001, ALGEBRAS DIAGRAMS DE
Akhtar N, 1999, J CHILD LANG, V26, P339, DOI 10.1017/S030500099900375X
Alishahi A, 2008, COGNITIVE SCI, V32, P789, DOI 10.1080/03640210801929287
Andrews M, 2009, PSYCHOL REV, V116, P463, DOI 10.1037/a0016261
Arbib MA, 2000, NEURAL NETWORKS, V13, P975, DOI 10.1016/S0893-6080(00)00070-8
AUVRAY M, 2006, P 10 M ASS SCI STUD
Bahrick LE, 2004, CURR DIR PSYCHOL SCI, V13, P99, DOI 10.1111/j.0963-
7214.2004.00283.x
Balaban MT, 1997, J EXP CHILD PSYCHOL, V64, P3, DOI 10.1006/jecp.1996.2332
Barsalou LW, 1999, BEHAV BRAIN SCI, V22, P577
BATALI J, 2002, LINGUISTIC EVOLUTION
Beer RD, 2003, ADAPT BEHAV, V11, P209, DOI 10.1177/1059712303114001
Behne T, 2005, DEVELOPMENTAL SCI, V8, P492, DOI 10.1111/j.1467-7687.2005.00440.x
Behne T, 2005, DEV PSYCHOL, V41, P328, DOI 10.1037/0012-1649.41.2.328
Belpaeme T, 2005, ADAPT BEHAV, V13, P293, DOI 10.1177/105971230501300404
Belpaeme T, 2007, INTERACT STUD, V8, P1, DOI 10.1075/is.8.1.02bel
Bengio Y., 2007, LARGE SCALE KERNEL M
BERGEN B, 2005, CONSTRUCTION GRAMMAR
Bigelow AE, 1999, INFANT BEHAV DEV, V22, P367, DOI 10.1016/S0163-6383(99)00016-8
Billard A, 2004, ROBOT AUTON SYST, V47, P69, DOI 10.1016/j.robot.2004.03.002
BORENSZTAJN G, 2008, P ANN C COGN SCI SOC
Boroditsky L, 2001, COGNITIVE PSYCHOL, V43, P1, DOI 10.1006/cogp.2001.0748
Borroni P, 2005, BRAIN RES, V1065, P115, DOI 10.1016/j.brainres.2005.10.034
BOWERMAN M, 2001, LANGUAGE ACQUISTION
BRAND RJ, 2005, 10 INT C STUD CHILD
BRAND RJ, 2007, INFANCY, V12, P321
BROZ F, 2009, P SOC LEARN INT SCEN
Bybee J, 2006, LANGUAGE, V82, P711, DOI 10.1353/lan.2006.0186
CAMAIONI L, 1991, FIRST LANG, V11, P345
Cameron-Faulkner T, 2003, COGNITIVE SCI, V27, P843, DOI
10.1016/j.cogsci.2003.06.001
Cangelosi A, 2004, BRAIN LANG, V89, P401, DOI 10.1016/S0093-934X(03)00353-5
Cangelosi A, 1998, CONNECT SCI, V10, P83, DOI 10.1080/095400998116512
CANGELOSI A, 2005, MODELING LANGUAGE CO
Cangelosi A., 2008, P 3 ACM IEEE INT C H
Cangelosi A., 2002, SIMULATING EVOLUTION
Cangelosi A, 2001, EVOLUTION COMMUNICAT, V4, P117
Cangelosi A, 2006, COGNITIVE SCI, V30, P673, DOI 10.1207/s15516709cog0000_72
Cappa SF, 2003, J NEUROLINGUIST, V16, P183, DOI 10.1016/S0911-6044(02)00013-1
Carpenter M, 1998, INFANT BEHAV DEV, V21, P315, DOI 10.1016/S0163-6383(98)90009-
1
Carpenter M., 1998, MONOGRAPHS SOC RES C, V63, P4
Cartwright TA, 1997, COGNITION, V63, P121, DOI 10.1016/S0010-0277(96)00793-7
Chakravarthy VS, 2003, PATTERN RECOGN LETT, V24, P1901, DOI 10.1016/S0167-
8655(03)00017-5
Chiel HJ, 1997, TRENDS NEUROSCI, V20, P553, DOI 10.1016/S0166-2236(97)01149-1
Choi S, 1999, COGNITIVE DEV, V14, P241, DOI 10.1016/S0885-2014(99)00004-0
CHOI S, 1988, J CHILD LANG, V15, P517, DOI 10.1017/S030500090001254X
Clark A, 2001, MIND LANG, V16, P121, DOI 10.1111/1468-0017.00162
CLARK A, 2004, P WORKSH PSYCH MOD H
Clark H. H., 1992, ARENAS LANGUAGE USE
Corballis M. C., 2002, HAND MOUTH ORIGINS L
COWLEY SJ, 2008, CONNECT SCI, V20, P349
Craighero L, 1999, J EXP PSYCHOL HUMAN, V25, P1673, DOI 10.1037/0096-
1523.25.6.1673
Csibra Gergely, 2006, ATTENTION PERFORM, VXXI, P249
Dabrowska E., 2004, LANGUAGE MIND BRAIN
DAMASIO AR, 1993, P NATL ACAD SCI USA, V90, P4957, DOI 10.1073/pnas.90.11.4957
Dausendschon-Gay Uli, 2003, EUROSLA YB, V3, P207
Deacon T. W., 1997, SYMBOLIC SPECIES COE
desLeon Lourdes, 1998, J LINGUIST ANTHROPOL, V3, P131, DOI 10.1525/JLIN.1998.8.2.131
Demiris Y, 2006, ROBOT AUTON SYST, V54, P361, DOI 10.1016/j.robot.2006.02.003
DEVRIES JIP, 1982, EARLY HUM DEV, V23, P159
Di Paolo EA, 2008, NEW IDEAS PSYCHOL, V26, P278, DOI
10.1016/j.newideapsych.2007.07.006
DIAMOND A, 1981, SOC RES CHILD DEV AB, V78
Dominey PF, 2009, INT J HUM ROBOT, V6, P147, DOI 10.1142/S0219843609001711
Dominey PF, 2004, J NEUROLINGUIST, V17, P121, DOI 10.1016/S0911-6044(03)00056-3
DOMINEY PF, 2009, NEW IDEAS PSYCHOL
Elman Jeff, 2006, ENCY LANGUAGE LINGUI, V2, P726
Fadiga L, 2002, EUR J NEUROSCI, V15, P399, DOI 10.1046/j.0953-816x.2001.01874.x
Fadiga L, 2000, INT J PSYCHOPHYSIOL, V35, P165, DOI 10.1016/S0167-8760(99)00051-
3
Farroni T, 2004, J COGNITIVE NEUROSCI, V16, P1320, DOI 10.1162/0898929042304787
Fogassi L, 1996, J NEUROPHYSIOL, V76, P141
Fogassi L, 2005, SCIENCE, V308, P662, DOI 10.1126/science.1106138
FOGASSI L, 1998, SOC NEUR ABSTR, V24, P257
Fogel A, 2007, INFANT BEHAV DEV, V30, P251, DOI 10.1016/j.infbeh.2007.02.007
FORSTER F, 2009, P EUR C ART LIF BUD
Fulop S. A., 2005, Journal of Logic, Language and Information, V14, P49, DOI
10.1007/s10849-005-4509-8
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
GARDENFORS P, 2002, CONCEPTUAL SPACES GE
GEORGOPOULOS AP, 1982, J NEUROSCI, V2, P1527
Gertner Y., 2009, BIENN M SOC RES CHIL
Gibson J. J., 1977, PERCEIVING ACTING KN, P67
Giese MA, 2003, NAT REV NEUROSCI, V4, P179, DOI 10.1038/nrn1057
Gigliotta O, 2008, ADAPT BEHAV, V16, P148, DOI 10.1177/1059712308089184
Gilbert AL, 2006, P NATL ACAD SCI USA, V103, P489, DOI 10.1073/pnas.0509868103
Glenberg AM, 2000, J MEM LANG, V43, P379, DOI 10.1006/jmla.2000.2714
Glenberg AM, 2002, PSYCHON B REV, V9, P558, DOI 10.3758/BF03196313
Glenberg AM, 2010, WIRES COGN SCI, V1, P586, DOI 10.1002/wcs.55
Gogate LJ, 2001, INFANCY, V2, P219, DOI 10.1207/S15327078IN0202_7
GOLD EM, 1967, INFORM CONTROL, V10, P447, DOI 10.1016/S0019-9958(67)91165-5
Goldberg A., 2006, CONSTRUCTIONS WORK N
Goldberg AE, 2004, COGN LINGUIST, V15, P289, DOI 10.1515/cogl.2004.011
Goldin-Meadow S, 2003, POINTING: WHERE LANGUAGE, CULTURE, AND COGNITION MEET,
P85
Gomez R. L., 2007, OXFORD HDB PSYCHOLIN, P601
Goodwin C, 2000, J PRAGMATICS, V32, P1489, DOI 10.1016/S0378-2166(99)00096-X
Graziano MSA, 1997, J NEUROPHYSIOL, V77, P2268
GREENFIELD PM, 1991, BEHAV BRAIN SCI, V14, P531, DOI 10.1017/S0140525X00071235
Gumperz John J., 1997, CURR ANTHROPOL, V32, P613
HARNAD S, 1990, PHYSICA D, V42, P335, DOI 10.1016/0167-2789(90)90087-6
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Hauk O, 2004, NEURON, V41, P301, DOI 10.1016/S0896-6273(03)00838-9
Hayashi M, 2003, ANIM COGN, V6, P225, DOI 10.1007/s10071-003-0185-8
HINTON GE, 2006, ADV NEURAL INFORM PR, V18
HINTON GE, NEURAL COMPUT, V18, P1527
Hirsh-Pasek Kathy, 1996, ORIGINS GRAMMAR EVID
HIRSHPASEK K, 2006, ORIGINS GRAMMAR EVID
Horwitz B, 1999, TRENDS COGN SCI, V3, P91, DOI 10.1016/S1364-6613(99)01282-6
HUBEL DH, 1962, J PHYSIOL-LONDON, V160, P106, DOI 10.1113/jphysiol.1962.sp006837
Ito M, 2006, NEURAL NETWORKS, V19, P323, DOI 10.1016/j.neunet.2006.02.007
Jeannerod M, 1997, COGNITIVE NEUROSCIEN
JHUANG H, 2007, P 11 IEEE INT C COMP
JOHNSON MH, 1997, DEV COGNITIVE NEUROS
Just MA, 1999, HUM BRAIN MAPP, V8, P128, DOI 10.1002/(SICI)1097-
0193(1999)8:2/3<128::AID-HBM10>3.0.CO;2-G
Kaplan F, 2008, INFANT CHILD DEV, V17, P55, DOI 10.1002/icd.544
KARMILOFF K, 2001, PATHWAYS LANGUAGE FO
Keijzer F., 2001, REPRESENTATION BEHAV
Kempson R., 2001, DYNAMIC SYNTAX FLOW
KINDERMANN TA, 1993, INT J BEHAV DEV, V16, P513, DOI 10.1177/016502549301600401
KOSEBAGCI H, 2010, P AAAI SPRING S 2010
KOSEBAGCI H, CONNECT SCI IN PRESS
Kousta S., 2009, P 31 M COGN SCI SOC
KRUNIC V, 2009, P INT C ROB AUT KOBC
Kuntay Aylin, 1996, SOCIAL INTERACTION S, P265
Lackner JR, 1998, BRAIN RES REV, V28, P194, DOI 10.1016/S0165-0173(98)00039-3
Lakoff G., 1987, WOMEN FIRE DANGEROUS
LANGACKER R. W, 1987, FDN COGNITIVE GRAMMA
LeCun Y., 1990, P INT C PATT REC, V2, P35
LIBERMAN AM, 1985, COGNITION, V21, P1, DOI 10.1016/0010-0277(85)90021-6
Liszkowski U., 2005, GESTURE, V5, P135, DOI [10.1075/gest.5.1-2.11lis, DOI
10.1075/GEST.5.1-2.11LIS]
Liszkowski U., 2006, ROOTS HUMAN SOCIALIT, P153
Locke JL, 2007, INTERACT STUD, V8, P159, DOI 10.1075/is.8.1.11loc
Lopes LS, 2007, INTERACT STUD, V8, P53, DOI 10.1075/is.8.1.05lop
Lungarella M, 2003, CONNECT SCI, V15, P151, DOI 10.1080/09540090310001655110
Lupyan G, 2007, PSYCHOL SCI, V18, P1077, DOI 10.1111/j.1467-9280.2007.02028.x
Macwhinney B, 2005, CONNECT SCI, V17, P191, DOI 10.1080/09540090500177687
Majid A, 2004, TRENDS COGN SCI, V8, P108, DOI 10.1016/j.tics.2004.01.003
Mareschal D., 2007, NEUROCONSTRUCTIVISM, VII
Mareschal D., 2007, NEUROCONSTRUCTIVISM, VI-II
Markman E., 1989, CATEGORIZATION NAMIN
Markova G, 2006, DEV PSYCHOL, V42, P132, DOI 10.1037/0012-1649.42.1.132
MARTIN A, 1995, SCIENCE, V270, P102, DOI 10.1126/science.270.5233.102
MCCLURE C, 2006, J CHILD LANGUAGE, V0033
MERVIS CB, 1994, CHILD DEV, V65, P1646, DOI 10.2307/1131285
Metta G, 2003, ADAPT BEHAV, V11, P109, DOI 10.1177/10597123030112004
Metta G, 2006, INTERACT STUD, V7, P197, DOI 10.1075/is.7.2.06met
Meyers E, 2008, INT J COMPUT VISION, V76, P93, DOI 10.1007/s11263-007-0058-8
Millikan RG, 2004, JEAN NICOD LECT, P1
MIROLLI M, NEW IDEAS P IN PRESS
Mnih A., 2009, ADV NEURAL INFORM PR, V21
Moore C, 1997, INFANT BEHAV DEV, V20, P83, DOI 10.1016/S0163-6383(97)90063-1
Morris WC, 2000, LECT NOTES ARTIF INT, V1778, P175
MOVELLAN JR, 2005, P 4 IEEE INT C DEV L, P19
Muir D, 2003, INFANCY, V4, P483, DOI 10.1207/S15327078IN0404_03
Murata A, 1997, J NEUROPHYSIOL, V78, P2226
Mussa-Ivaldi FA, 2000, PHILOS T R SOC B, V355, P1755, DOI 10.1098/rstb.2000.0733
MUSSAIVALDI FA, 1992, BIOL CYBERN, V67, P491, DOI 10.1007/BF00198756
Nakayama Y, 2008, J NEUROSCI, V28, P10287, DOI 10.1523/JNEUROSCI.2372-08.2008
NEHANIV CL, 2007, IMITATION SOCIAL LEA
Nehaniv C.L., 2000, INTERDISCIPLINARY AP
Nehaniv C. L., 2007, EMERGENCE COMMUNICAT, P1
Ninio A, 2005, COGN LINGUIST, V16, P531, DOI 10.1515/cogl.2005.16.3.531
Ninio A, 2005, J CHILD LANG, V32, P35, DOI 10.1017/S0305000904006713
NOLFI S, 1999, CONNECT SCI, V11, P129
Nolfi S, 2000, EVOLUTIONARY ROBOTIC
Nolfi S., 2005, HDB CATEGORIZATION C, P869
Nolfi S., 2002, ANIMALS ANIMATS, V7, P266
NORI F, 2009, P 9 INT C EP ROB MOD
Okanda M, 2006, P 15 BIENN INT C INF
Oudeyer PY, 2007, IEEE T EVOLUT COMPUT, V11, P265, DOI 10.1109/TEVC.2006.890271
Oudeyer PY, 2006, CONNECT SCI, V18, P189, DOI 10.1080/09540090600768567
Oudeyer Pierre-Yves, 2006, SELF ORG EVOLUTION S
Oztop E, 2006, NEURAL NETWORKS, V19, P254, DOI 10.1016/j.neunet.2006.02.002
Pecher Diane, 2005, GROUNDING COGNITION
Perani D, 1999, BRAIN, V122, P2337, DOI 10.1093/brain/122.12.2337
Pfeifer R., 1999, UNDERSTANDING INTELL
Piaget J, 1954, CONSTRUCTION REALITY
PINE JM, 1994, INPUT INTERACTION LA, P15
Pinker S., 2007, STUFF THOUGHT LANGUA
Pinker S, 1984, LANGUAGE LEARNABILIT
Plunkett K, 2008, COGNITION, V106, P665, DOI 10.1016/j.cognition.2007.04.003
Ponce J., 2006, CATEGORY LEVEL OBJEC
PREISSL H, 1995, NEUROSCI LETT, V197, P81, DOI 10.1016/0304-3940(95)11892-Z
PRINCE CG, 2004, P 4 INT WORKSH EP RO, P89
Pulvermuller F, 2000, BIOL PSYCHOL, V53, P177, DOI 10.1016/S0301-0511(00)00046-6
PULVERMULLER F, 2003, PSYCHOPHYSIOLOGY, V40, P70
PULVERMULLER F., 2003, NEUROSCIENCE LANGUAG
Quartz SR, 1997, BEHAV BRAIN SCI, V20, P537
Quine W.V, 1960, WORD OBJECT
Rakison D. H., 2003, EARLY CATEGORY CONCE
Rakoczy H, 2008, DEV PSYCHOL, V44, P875, DOI 10.1037/0012-1649.44.3.875
Rakoczy H, 2007, NEW DIR CHILD ADOLES, V115, P53, DOI 10.1002/cad.182
REGAN OJK, 2001, BEHAV BRAIN SCI, V5
Rizzolatti G, 1997, SCIENCE, V277, P190, DOI 10.1126/science.277.5323.190
Rizzolatti G, 2001, NEURON, V31, P889, DOI 10.1016/S0896-6273(01)00423-8
RIZZOLATTI G, 1988, EXP BRAIN RES, V71, P491, DOI 10.1007/BF00248742
Rizzolatti G, 1998, TRENDS NEUROSCI, V21, P188, DOI 10.1016/S0166-2236(98)01260-
0
Rizzolatti G, 1997, CURR OPIN NEUROBIOL, V7, P562, DOI 10.1016/S0959-
4388(97)80037-2
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Roberson D, 2005, COGNITIVE PSYCHOL, V50, P378, DOI
10.1016/j.cogpsych.2004.10.001
Rohlfing KJ, 2006, ADV ROBOTICS, V20, P1183, DOI 10.1163/156855306778522532
Rolf M, 2009, IEEE T AUTON MENT DE, V1, P55, DOI 10.1109/TAMD.2009.2021091
Roy AC, 2008, J PHYSIOLOGY-PARIS, V102, P101, DOI
10.1016/j.jphysparis.2008.03.006
Roy D, 2005, ARTIF INTELL, V167, P170, DOI 10.1016/j.artint.2005.04.007
Roy D, 2005, TRENDS COGN SCI, V9, P389, DOI 10.1016/j.tics.2005.06.013
Roy D, 2004, IEEE T SYST MAN CY B, V34, P1374, DOI 10.1109/TSMCB.2004.823327
SAKATA H, 1995, CEREB CORTEX, V5, P429, DOI 10.1093/cercor/5.5.429
SATO Y, 2010, EVOLANG UTR HOLL APR
Saunders J., 2007, INT J ADV ROBOT SYST, V4, P109
Saunders J., 2009, P 2 INT IEEE S ART L
Saunders J., 2010, P 2 INT S NEW FRONT
Saunders J, 2007, INTERACT STUD, V8, P307, DOI 10.1075/is.8.2.07sau
Saur D, 2008, P NATL ACAD SCI USA, V105, P18035, DOI 10.1073/pnas.0805234105
SCHAAL S, 2000, P INT C ROB AUT SAN
Schegloff EA, 2007, SEQUENCE ORGANIZATION IN INTERACTION: A PRIMER IN
CONVERSATION ANALYSIS I, P1, DOI 10.2277/0521532795
Scheier C, 1998, NEURAL NETWORKS, V11, P1551, DOI 10.1016/S0893-6080(98)00084-7
Serre T, 2007, IEEE T PATTERN ANAL, V29, P411, DOI 10.1109/TPAMI.2007.56
Simmons G, 2006, BIOL CYBERN, V94, P393, DOI 10.1007/s00422-006-0053-0
Sirois S, 2008, BEHAV BRAIN SCI, V31, P321, DOI 10.1017/S0140525X0800407X
Smith K, 2003, ADV COMPLEX SYST, V6, P537, DOI 10.1142/S0219525903001055
Snow C. E., 1994, INPUT INTERACTION LA, P3, DOI 10.1017/CBO9780511620690.002
Solan Z, 2005, P NATL ACAD SCI USA, V102, P11629, DOI 10.1073/pnas.0409746102
Spelke ES, 2000, AM PSYCHOL, V55, P1233, DOI 10.1037/0003-066X.55.11.1233
Sperber Dan, 1995, RELEVANCE COMMUNICAT
SPORNS O, 2007, NEUROCONSTRUCTIVISM, P179
Steels L, 2005, CONNECT SCI, V17, P213, DOI 10.1080/09540090500269088
Steels L, 2005, BEHAV BRAIN SCI, V28, P469
Steels L, 1998, ARTIF INTELL, V103, P133, DOI 10.1016/S0004-3702(98)00066-6
Steels Luc, 2005, ROLE CONSTRUCT UNPUB
Steels L, 2007, EMERGENCE COMMUNICAT, P129
STEELS L, 2006, LECT NOTES COMPUT SC, P197
Steels L, 2004, P ANN M ASS COMP LIN, P9
Steels L., 2001, EVOLUTION COMMUNICAT, V4, P3
Steels L, 2006, TRENDS COGN SCI, V10, P347, DOI 10.1016/j.tics.2006.06.002
Striano T, 2005, DEVELOPMENTAL SCI, V8, P509, DOI 10.1111/j.1467-
7687.2005.00442.x
Sugita Y, 2005, ADAPT BEHAV, V13, P33, DOI 10.1177/105971230501300102
Swingley D, 2009, PHILOS T R SOC B, V364, P3617, DOI 10.1098/rstb.2009.0107
Tanaka F, 2007, P NATL ACAD SCI USA, V104, P17954, DOI 10.1073/pnas.0707769104
TANAKA M, 1982, DEV DIAGN HUM INFANT
Tani J, 1999, NEURAL NETWORKS, V12, P1131, DOI 10.1016/S0893-6080(99)00060-X
TELLIER I, 1999, P 12 AMST C AMST NET, P217
Thelen E., 1994, DYNAMIC SYSTEMS APPR
Tomasello M, 2000, TRENDS COGN SCI, V4, P156, DOI 10.1016/S1364-6613(00)01462-5
Tomasello M, 2005, BEHAV BRAIN SCI, V28, P675, DOI 10.1017/S0140525X05000129
Tomasello M, 2001, BEHAV BRAIN SCI, V24, P1119
Tomasello M, 2002, COGNITION, V83, P207, DOI 10.1016/S0010-0277(01)00172-X
Tomasello M., 1995, JOINT ATTENTION ITS
Tomasello M., 1988, LANG SCI, V10, P69
Tomasello M, 1999, CULTURAL ORIGINS HUM
Tomasello M., 1992, 1 VERBS CASE STUDY E
Tomasello M., 2003, CONSTRUCTING LANGUAG
Tomasello M, 2007, CHILD DEV, V78, P705, DOI 10.1111/j.1467-8624.2007.01025.x
Trevarthen C., 2000, MUSIC SCI, V1999-2000, P155
TREVARTHEN C, 1979, BEFORE SPEECH BEGINI
TRONICK E, 1978, J AM ACAD CHILD PSY, V17, P1, DOI 10.1016/S0002-7138(09)62273-1
Vihman M. M., 2000, EVOLUTIONARY EMERGEN
Vogt P, 2003, ROBOT AUTON SYST, V43, P109, DOI 10.1016/S0921-8890(02)00353-6
von Hofsten C, 2004, TRENDS COGN SCI, V8, P266, DOI 10.1016/j.tics.2004.04.002
VONDERMALSBURG C, 1988, NEUROBIOLOGY NEOCORT, P69
Waxman SR, 1995, COGNITIVE PSYCHOL, V29, P257, DOI 10.1006/cogp.1995.1016
Waxman SR, 1999, COGNITION, V70, pB35, DOI 10.1016/S0010-0277(99)00017-7
Weng J., 2004, INT J HUMANOID ROBOT, V1
Weng JY, 2007, NEUROCOMPUTING, V70, P2303, DOI 10.1016/j.neucom.2006.07.017
Weng JY, 2006, IEEE COMPUT INTELL M, V1, P15, DOI 10.1109/MCI.2006.1672985
Weng JY, 2001, SCIENCE, V291, P599, DOI 10.1126/science.291.5504.599
Westermann G, 2007, DEVELOPMENTAL SCI, V10, P75, DOI 10.1111/j.1467-
7687.2007.00567.x
Westermann G, 2006, TRENDS COGN SCI, V10, P227, DOI 10.1016/j.tics.2006.03.009
Wiemer-Hastings K, 2005, COGNITIVE SCI, V29, P719, DOI
10.1207/s15516709cog0000_33
Winawer J, 2007, P NATL ACAD SCI USA, V104, P7780, DOI 10.1073/pnas.0701644104
Wittgenstein Ludwig, 1968, PHILOS INVESTIGATION
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
WOOD D, 1976, J CHILD PSYCHOL PSYC, V17, P89, DOI 10.1111/j.1469-
7610.1976.tb00381.x
Woodward AL, 1998, COGNITION, V69, P1, DOI 10.1016/S0010-0277(98)00058-4
Wray A, 1998, LANG COMMUN, V18, P47, DOI 10.1016/S0271-5309(97)00033-5
WREDE B, J PRAGMATIC IN PRESS
Wrede B, 2009, LECT NOTES ARTIF INT, V5436, P139, DOI 10.1007/978-3-642-00616-
6_8
Xu F, 2002, COGNITION, V85, P223, DOI 10.1016/S0010-0277(02)00109-9
Yamashita Y., 2008, PLOS COMPUTATIONAL B, V4, P11
Yoshida H, 2005, PSYCHOL SCI, V16, P90, DOI 10.1111/j.0956-7976.2005.00787.x
ZESCHEL A, 2007, THESIS U BREMEN BREM
ZESCHEL A, 2009, DELIVERABLE 3 1 ITAL
Zhang YL, 2007, IEEE T EVOLUT COMPUT, V11, P226, DOI 10.1109/TEVC.2006.890269
Zukow-Goldring P, 2006, ACTION TO LANGUAGE VIA THE MIRROR NEURON SYSTEM, P469,
DOI 10.1017/CBO9780511541599.015
NR 280
TC 66
Z9 66
U1 4
U2 27
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1943-0604
J9 IEEE T AUTON MENT DE
JI IEEE Trans. Auton. Ment. Dev.
PD SEP
PY 2010
VL 2
IS 3
BP 167
EP 195
DI 10.1109/TAMD.2010.2053034
PG 29
WC Computer Science, Artificial Intelligence; Robotics; Neurosciences
SC Computer Science; Robotics; Neurosciences & Neurology
GA 820JQ
UT WOS:000294902200002
DA 2018-01-22
ER

PT J
AU Foissotte, T
Stasse, O
Wieber, PB
Escande, A
Kheddar, A
AF Foissotte, Torea
Stasse, Olivier
Wieber, Pierre-Brice
Escande, Adrien
Kheddar, Abderrahmane
TI AUTONOMOUS 3D OBJECT MODELING BY A HUMANOID USING AN OPTIMIZATION-DRIVEN
NEXT-BEST-VIEW FORMULATION
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Object modeling; next-best-view; optimization; NEWUOA; humanoid; posture
generation
ID RECONSTRUCTION; FEATURES; ROBOT
AB An original method to build a visual model for unknown objects by a humanoid
robot is proposed. The algorithm ensures successful autonomous realization of this
goal by addressing the problem as an active coupling between computer vision and
whole-body posture generation. The visual model is built through the repeated
execution of two processes. The first one considers the current knowledge about the
visual aspects and the shape of the object to deduce a preferred viewpoint with the
aim of reducing the uncertainty of the shape and appearance of the object. This is
done while considering the constraints related to the embodiment of the vision
sensors in the humanoid head. The second process generates a whole robot posture
using the desired head pose while solving additional constraints such as collision
avoidance and joint limitations. The main contribution of our approach lies in
the use of different optimization algorithms to find an optimal viewpoint by
including the humanoid specificities in terms of constraints, an embedded vision
sensor, and redundant motion capabilities. This approach differs significantly from
those of traditional works addressing the problem of autonomously building an
object model.
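As a toy illustration of the next-best-view idea described in the abstract above (not
the paper's NEWUOA-based, constraint-aware formulation), candidate camera poses can be
scored by how many still-unknown voxels of an occupancy grid they would reveal; the
grid, candidate poses, and cone-of-view visibility test below are simplified
assumptions:

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 20x20x20 occupancy grid: 0 = free, 1 = occupied, -1 = unknown.
grid = rng.choice([-1, 0, 1], size=(20, 20, 20), p=[0.5, 0.4, 0.1])
unknown_centers = np.argwhere(grid == -1) + 0.5   # voxels we would like to observe

def visible_unknown(cam_pos, cam_dir, fov_cos=0.8, max_range=12.0):
    """Count unknown voxels inside a simple cone of view (no occlusion test)."""
    rel = unknown_centers - cam_pos
    dist = np.linalg.norm(rel, axis=1)
    cos_angle = (rel @ cam_dir) / (dist + 1e-9)
    return int(np.count_nonzero((dist < max_range) & (cos_angle > fov_cos)))

# Hypothetical candidate head poses on a circle around the object center.
center = np.array([10.0, 10.0, 10.0])
candidates = []
for theta in np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False):
    pos = center + np.array([15.0 * np.cos(theta), 15.0 * np.sin(theta), 0.0])
    direction = (center - pos) / np.linalg.norm(center - pos)
    candidates.append((pos, direction))

scores = [visible_unknown(p, d) for p, d in candidates]
best = int(np.argmax(scores))
print("best viewpoint:", best, "reveals", scores[best], "unknown voxels")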
C1 [Foissotte, Torea; Stasse, Olivier; Kheddar, Abderrahmane] CNRS, LIRMM, F-75700
Paris, France.
[Foissotte, Torea; Stasse, Olivier; Kheddar, Abderrahmane] CNRS AIST, JRL, CRT,
UMI 3218, Tokyo, Japan.
[Wieber, Pierre-Brice] INRIA, BIPOP Team, Rhone Alpes, France.
[Escande, Adrien] CEA, LIST, Fontenay Aux Roses, France.
RP Foissotte, T (reprint author), CNRS, LIRMM, F-75700 Paris, France.
FU EU [34002]
FX This work is partially supported by grants from the ROBOT@CWE EU CEC
project, Contract No. 34002 under the 6th Research program
(www.robot-at-cwe.eu).
CR Banta JE, 2000, IEEE T SYST MAN CY A, V30, P589, DOI 10.1109/3468.867866
Bay H, 2008, COMPUT VIS IMAGE UND, V110, P346, DOI 10.1016/j.cviu.2007.09.014
CONNOLLY C, 1985, IEEE INT C ROB AUT, P432
Escande A., 2006, IEEE RSJ INT C ROB I, P2974
Escande A, 2007, IEEE-RAS INT C HUMAN, P188, DOI 10.1109/ICHR.2007.4813867
Evrard P, 2008, 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, VOLS 1 AND 2, P15, DOI 10.1109/ROMAN.2008.4600636
Foissotte T., 2008, IEEE RAS RSJ INT C H, P333
HERTZMANN A, 1999, SIGGRAPH 99 COURSE N
Laumond JP, 2006, IEEE ROBOT AUTOM MAG, V13, P90, DOI 10.1109/MRA.2006.1638020
Lowe DG, 2004, INT J COMPUT VISION, V60, P91, DOI
10.1023/B:VISI.0000029664.99615.94
Lowe DG, 2001, PROC CVPR IEEE, P682
MAVER J, 1993, IEEE T PATTERN ANAL, V15, P417, DOI 10.1109/34.211463
Meger D, 2008, ROBOT AUTON SYST, V56, P503, DOI 10.1016/j.robot.2008.03.008
Morel JM, 2009, SIAM J IMAGING SCI, V2, P438, DOI 10.1137/080732730
Pito R, 1999, IEEE T PATTERN ANAL, V21, P1016, DOI 10.1109/34.799908
POWELL MJD, 2004, 2004NA05 DAMTP U CAM
Saidi F., 2007, IEEE RSJ INT C INT R, P1677
Sanchiz J. M., 1999, BMVC99. Proceedings of the 10th British Machine Vision
Conference, P163
Scott WR, 2003, ACM COMPUT SURV, V35, P64, DOI 10.1145/641865.641868
Stasse O, 2007, IEEE-RAS INT C HUMAN, P151, DOI 10.1109/ICHR.2007.4813862
Stasse O., 2008, IEEE RAS RSJ INT C H
TARABANIS KA, 1995, IEEE T ROBOTIC AUTOM, V11, P86, DOI 10.1109/70.345940
Yamazaki K, 2004, IEEE INT CONF ROBOT, P1399, DOI 10.1109/ROBOT.2004.1308020
NR 23
TC 3
Z9 3
U1 0
U2 5
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD SEP
PY 2010
VL 7
IS 3
BP 407
EP 428
DI 10.1142/S0219843610002246
PG 22
WC Robotics
SC Robotics
GA 680KG
UT WOS:000284230200005
DA 2018-01-22
ER

PT J
AU Pretto, A
Menegatti, E
Jitsukawa, Y
Ueda, R
Arai, T
AF Pretto, Alberto
Menegatti, Emanuele
Jitsukawa, Yoshiaki
Ueda, Ryuichi
Arai, Tamio
TI Image similarity based on Discrete Wavelet Transform for robots with
low-computational resources
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE Image similarity; Localization; Topological SLAM
ID LOCALIZATION
AB This paper describes a similarity measure for images that can be used in
image-based topological localization and topological SLAM problems by autonomous
robots with low computational resources. Instead of storing the images in the
robot's memory, we propose extracting a compact signature from the images. The
signature is based on the calculation of the 2D Haar Wavelet Transform of the
gray-level image, and its size is only 170 bytes. We call this signature the
DWT-signature. We exploit the frequency and space localization property of the
wavelet transform to match the images grabbed by the perspective camera mounted
on board the robot against reference panoramic images built using an automatic
image stitching procedure. The proposed signature allows both memory savings and
fast, efficient similarity calculation. For the topological SLAM problem, we also
present a simple implementation of loop-closure detection based on the proposed
signature.
We report experiments showing the effectiveness of the proposed image similarity
measure using two kinds of small robots: an AIBO ERS-7 robot of the RoboCup Araibo
Team of the University of Tokyo and a Kondo KHR-1HV humanoid robot of the IAS-Lab
of the University of Padua. (C) 2010 Elsevier B.V. All rights reserved.
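As a rough sketch of a Haar-wavelet image signature in the spirit of the abstract above
(assuming a square grayscale array with sides divisible by 2**levels; this is not the
authors' exact 170-byte DWT-signature or their matching scheme):

import numpy as np

def haar2d(img, levels=3):
    """Simple separable 2-D Haar transform (sides assumed divisible by 2**levels)."""
    out = img.astype(float).copy()
    n, m = out.shape
    for _ in range(levels):
        sub = out[:n, :m]
        # Row pass: averages to the left half, differences to the right half.
        avg = (sub[:, 0::2] + sub[:, 1::2]) / 2.0
        diff = (sub[:, 0::2] - sub[:, 1::2]) / 2.0
        sub[:, :m // 2], sub[:, m // 2:m] = avg, diff
        # Column pass: averages to the top half, differences to the bottom half.
        avg = (sub[0::2, :] + sub[1::2, :]) / 2.0
        diff = (sub[0::2, :] - sub[1::2, :]) / 2.0
        sub[:n // 2, :], sub[n // 2:n, :] = avg, diff
        n, m = n // 2, m // 2
    return out

def signature(img, k=40):
    """Indices of the k largest-magnitude wavelet coefficients."""
    coeffs = haar2d(img).ravel()
    return set(np.argsort(np.abs(coeffs))[-k:])

def similarity(sig_a, sig_b):
    """Fraction of shared coefficient positions between two signatures."""
    return len(sig_a & sig_b) / float(len(sig_a))

# Hypothetical test: a random image and a slightly perturbed copy of it.
rng = np.random.default_rng(2)
img_a = rng.random((64, 64))
img_b = img_a + 0.05 * rng.random((64, 64))
print(similarity(signature(img_a), signature(img_b)))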
C1 [Pretto, Alberto; Menegatti, Emanuele] Univ Padua, Fac Engn, Dept Informat Engn,
Intelligent Autonomous Syst Lab, I-35131 Padua, Italy.
[Jitsukawa, Yoshiaki; Ueda, Ryuichi; Arai, Tamio] Univ Tokyo, Dept Precis Engn,
Tokyo, Japan.
RP Pretto, A (reprint author), Univ Padua, Fac Engn, Dept Informat Engn,
Intelligent Autonomous Syst Lab, Via Gradenigo 6-A, I-35131 Padua, Italy.
EM alberto.pretto@dei.unipd.it
OI Menegatti, Emanuele/0000-0001-5794-9979; PRETTO,
ALBERTO/0000-0003-1920-2887
CR AIHARA H, 1998, P 14 INT C PATT REC, V1, P1799
Angeli A., 2008, P IEEE RSJ INT C INT, P1031
Booij O., 2007, IEEE INT C ROB AUT
CASSINIS R, 2002, ROBOTICS AUTONOMOUS, V40
CROW FC, 1984, SIGGRAPH 84, P207
Do MN, 2002, IEEE T IMAGE PROCESS, V11, P146, DOI 10.1109/83.982822
DUDEK G, 2000, ROB AUT 2000 P ICRA, V2
Filliat D., 2007, IEEE INT C ROB AUT
Frontoni E, 2006, ROBOT AUTON SYST, V54, P750, DOI 10.1016/j.robot.2006.04.014
Gaspar J., 2000, IEEE T ROBOTICS AUTO, V16
Gross HM, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1505
Jacobs C. E., 1995, P SIGGRAPH 95 LOS AN
JOGAN M, 2000, P 15 INT C PATT REC, V4, P136
Krose BJA, 2001, IMAGE VISION COMPUT, V19, P381, DOI 10.1016/S0262-
8856(00)00086-X
Lowe DG, 2004, INT J COMPUT VISION, V60, P91, DOI
10.1023/B:VISI.0000029664.99615.94
Menegatti E, 2004, ROBOT AUTON SYST, V47, P251, DOI 10.1016/j.robot.2004.03.014
Natsev A, 1999, SIGMOD RECORD, VOL 28, NO 2 - JUNE 1999, P395
NAYAR SK, 1994, ROB AUT 1994 P 1994, P3237
Newman P. M., 2006, P IEEE INT C ROB AUT
Nilsback M., 2006, IEEE C COMP VIS PATT
Ranganathan A, 2006, IEEE T ROBOT, V22, P92, DOI 10.1109/TRO.2005.861457
SIVIC J, 2003, P INT C COMP VIS NIC
Vetterli M, 1995, SIGNAL PROCESSING SE
Wolf J, 2005, IEEE T ROBOT, V21, P208, DOI 10.1109/TRO.2004.835453
NR 24
TC 4
Z9 4
U1 1
U2 4
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JUL 31
PY 2010
VL 58
IS 7
SI SI
BP 879
EP 888
DI 10.1016/j.robot.2010.03.009
PG 10
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 621JQ
UT WOS:000279573100009
DA 2018-01-22
ER

PT J
AU Chaminade, T
Zecca, M
Blakemore, SJ
Takanishi, A
Frith, CD
Micera, S
Dario, P
Rizzolatti, G
Gallese, V
Umilta, MA
AF Chaminade, Thierry
Zecca, Massimiliano
Blakemore, Sarah-Jayne
Takanishi, Atsuo
Frith, Chris D.
Micera, Silvestro
Dario, Paolo
Rizzolatti, Giacomo
Gallese, Vittorio
Umilta, Maria Alessandra
TI Brain Response to a Humanoid Robot in Areas Implicated in the Perception
of Human Emotional Gestures
SO PLOS ONE
LA English
DT Article
ID PROBABILISTIC CYTOARCHITECTONIC MAPS; HUMAN PREMOTOR CORTEX;
MIRROR-NEURON SYSTEM; FUSIFORM FACE AREA; BIOLOGICAL MOTION; ACTION
REPRESENTATION; SPEECH-PERCEPTION; RECOGNITION; MOTOR; IMITATION
AB Background: The humanoid robot WE4-RII was designed to express human emotions in
order to improve human-robot interaction. We can read the emotions depicted in its
gestures, yet we might use different neural processes from those used to read the
emotions in human agents.
Methodology: Here, fMRI was used to assess how brain areas activated by the
perception of human basic emotions (facial expression of Anger, Joy, Disgust) and
silent speech respond to a humanoid robot impersonating the same emotions, while
participants were instructed to attend either to the emotion or to the motion
depicted.
Principal Findings: Increased responses to robot compared to human stimuli in
the occipital and posterior temporal cortices suggest additional visual processing
when perceiving a mechanical anthropomorphic agent. In contrast, activity in
cortical areas endowed with mirror properties, like left Broca's area for the
perception of speech, and in the processing of emotions like the left anterior
insula for the perception of disgust and the orbitofrontal cortex for the
perception of anger, is reduced for robot stimuli, suggesting lesser resonance with
the mechanical agent. Finally, instructions to explicitly attend to the emotion
significantly increased response to robot, but not human facial expressions in the
anterior part of the left inferior frontal gyrus, a neural marker of motor
resonance.
Conclusions: Motor resonance towards a humanoid robot's, but not a human's, display
of facial emotion is increased when attention is directed towards judging emotions.
Significance: Artificial agents can be used to assess how factors like
anthropomorphism affect neural response to the perception of human actions.
C1 [Chaminade, Thierry; Frith, Chris D.] UCL, Wellcome Trust Ctr Neuroimaging,
London, England.
[Chaminade, Thierry] Aix Marseille Univ, CNRS, Mediterranean Inst Cognit
Neurosci INCM, Marseille, France.
[Zecca, Massimiliano; Takanishi, Atsuo] Waseda Univ, Consolidated Res Inst Adv
Sci & Med Care ASMeW, Inst Biomed Engn, Tokyo, Japan.
[Zecca, Massimiliano; Takanishi, Atsuo] Waseda Univ, HRI, Tokyo, Japan.
[Zecca, Massimiliano; Takanishi, Atsuo; Micera, Silvestro; Dario, Paolo] Italy
Japan Joint Lab Humanoid & Personal Robot R, Tokyo, Japan.
[Blakemore, Sarah-Jayne] UCL, Inst Cognit Neurosci, London, England.
[Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Tokyo, Japan.
[Frith, Chris D.] Aarhus Univ Hosp, CFIN, Aarhus, Denmark.
[Micera, Silvestro; Dario, Paolo] Scuola Super Sant Anna, ARTS Lab, Pisa, Italy.
[Micera, Silvestro] Swiss Fed Inst Technol Zurich ETHZ, Inst Automat,
Neuroprosthesis Control Grp, Zurich, Switzerland.
[Rizzolatti, Giacomo; Gallese, Vittorio; Umilta, Maria Alessandra] Univ Parma,
Dipartimento Neurosci, Sez Fisiol, I-43100 Parma, Italy.
[Rizzolatti, Giacomo; Gallese, Vittorio; Umilta, Maria Alessandra] Italian Inst
Technol, Brain Ctr Social & Motor Cognit, Parma, Italy.
RP Chaminade, T (reprint author), UCL, Wellcome Trust Ctr Neuroimaging, London,
England.
EM tchamina@gmail.com
RI Chaminade, Thierry/F-8367-2010; Frith, Chris/A-2171-2009
OI Chaminade, Thierry/0000-0003-4952-1467; Frith,
Chris/0000-0002-8665-0690; Zecca, Massimiliano/0000-0003-4741-4334;
Umilta', Maria Alessandra/0000-0003-0180-6818; Lauwereyns,
Jan/0000-0003-0551-2550; Blakemore, Sarah-Jayne/0000-0002-1690-2805
FU Wellcome Trust; ANR [ANR-09-BLAN-0405-02]; Royal Society; Danish
National Research Foundation; European Union; ASMeW [11]; JSPS
[19700389]; Gifu Prefecture; Waseda University [266740]
FX TC was supported by a Wellcome Trust post-doctoral fellowship and an ANR
grant (SCAD # ANR-09-BLAN-0405-02). SJB is supported by a Royal Society
Research Fellowship. CDF is supported by the Wellcome Trust and the
Danish National Research Foundation. PD, VG, SM, GR, MAU, and MZ were
supported by the European Union grant NEUROBOTICS. VG was also supported
by the EU grants NESTCOM and DISCOS. MZ was partially supported by the
ASMeW Priority Research C Grant # 11, and by the JSPS Grant-in-aid for
Scientific Research # 19700389. Partial support was provided by a
Grant-in-Aid for the WABOT-HOUSE Project by Gifu Prefecture and by
Waseda University Grant for Special Research Projects (No. 266740). The
funders had no role in study design, data collection and analysis,
decision to publish, or preparation of the manuscript.
CR Allison T, 2000, TRENDS COGN SCI, V4, P267, DOI 10.1016/S1364-6613(00)01501-1
Amunts K, 1999, J COMP NEUROL, V412, P319, DOI 10.1002/(SICI)1096-
9861(19990920)412:2<319::AID-CNE10>3.0.CO;2-7
Anderson SW, 1999, NAT NEUROSCI, V2, P1032
Andersson JLR, 2001, NEUROIMAGE, V13, P903, DOI 10.1006/nimg.2001.0746
Arbib MA, 2004, TRENDS COGN SCI, V8, P554, DOI 10.1016/j.tics.2004.10.004
Barnikol UB, 2006, NEUROIMAGE, V31, P86, DOI 10.1016/j.neuroimage.2005.11.045
Blair RJR, 2000, BRAIN, V123, P1122, DOI 10.1093/brain/123.6.1122
Blair RJR, 1999, BRAIN, V122, P883, DOI 10.1093/brain/122.5.883
Blakemore SJ, 2005, BRAIN, V128, P1571, DOI 10.1093/brain/awh500
Buccino G, 2004, J COGNITIVE NEUROSCI, V16, P114, DOI 10.1162/089892904322755601
Buccino G, 2001, EUR J NEUROSCI, V13, P400, DOI 10.1046/j.1460-9568.2001.01385.x
Carr L, 2003, P NATL ACAD SCI USA, V100, P5497, DOI 10.1073/pnas.0935845100
Caspers S, 2008, BRAIN STRUCT FUNCT, V212, P481, DOI 10.1007/s00429-008-0195-z
Castelli F, 2000, NEUROIMAGE, V12, P314, DOI 10.1006/nimg.2000.0612
Chaminade T, 2001, BEHAV BRAIN SCI, V24, P879
CHAMINADE T, 2006, ACQUIRING PROBING SE
Chaminade T, 2007, SOC COGN AFFECT NEUR, V2, P206, DOI 10.1093/scan/nsm017
Chao LL, 1999, NEUROREPORT, V10, P2945, DOI 10.1097/00001756-199909290-00013
Cohen L, 2008, NEUROIMAGE, V40, P353, DOI 10.1016/j.neuroimage.2007.11.036
de Gelder B, 2004, P NATL ACAD SCI USA, V101, P16701, DOI
10.1073/pnas.0407042101
DiSalvo C., 2002, P C DES INT SYST PRO
Drevets WC, 2001, BIOL PSYCHIAT, V49, P81, DOI 10.1016/S0006-3223(00)01038-6
DUCHAINE B, 2008, SENSES COMPREHENSIVE, P329
Duvernoy H., 1999, HUMAN BRAIN SURFACE
Ebisch SJH, 2008, J COGNITIVE NEUROSCI, V20, P1611, DOI 10.1162/jocn.2008.20111
Eickhoff SB, 2005, NEUROIMAGE, V25, P1325, DOI 10.1016/j.neuroimage.2004.12.034
Ekman P., 1978, FACIAL ACTION CODING
Epley N, 2007, PSYCHOL REV, V114, P864, DOI 10.1037/0033-295X.114.4.864
Farah MJ, 1998, PSYCHOL REV, V105, P482, DOI 10.1037/0033-295X.105.3.482
Friston KJ, 1994, HUMAN BRAIN MAPPING, V2, P189, DOI DOI 10.1002/HBM.460020402
Friston K. J., 2007, STAT PARAMETRIC MAPP
Frith CD, 2008, NEURON, V60, P503, DOI 10.1016/j.neuron.2008.10.032
Fujita M, 2004, P IEEE, V92, P1804, DOI 10.1109/JPROC.2004.835364
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
Gallese V, 2003, PHILOS T R SOC B, V358, P517, DOI 10.1098/rstb.2002.1234
Gallese V, 2009, DEV PSYCHOL, V45, P103, DOI 10.1037/a0014436
Gates Bill, 2007, SCI AM, V1, P58
Gazzola V, 2007, NEUROIMAGE, V35, P1674, DOI 10.1016/j.neuroimage.2007.02.003
Goldstein RZ, 2001, NEUROREPORT, V12, P2595, DOI 10.1097/00001756-200108080-
00060
Grossman E, 2000, J COGNITIVE NEUROSCI, V12, P711, DOI 10.1162/089892900562417
Grossman ED, 2002, NEURON, V35, P1167, DOI 10.1016/S0896-6273(02)00897-8
Hagoort P, 2005, TRENDS COGN SCI, V9, P416, DOI 10.1016/j.tics.2006.07.004
HIGUCHI S, 2009, NEUROREPORT
HIRAI K, 1998, DEV HONDA HUMANOID R, P1321
Hutton C, 2002, NEUROIMAGE, V16, P217, DOI 10.1006/nimg.2001.1054
Iacoboni M, 1999, SCIENCE, V286, P2526, DOI 10.1126/science.286.5449.2526
Ishiguro H, 2007, J ARTIF ORGANS, V10, P133, DOI 10.1007/s10047-007-0381-4
ITOH K, 2005, BEHAV MODEL HUMANOID, P220
ITOH K, 2004, VARIOUS EMOTIONAL EX, P35
James W., 1890, PRINCIPLES PSYCHOL
JOVICICH J, 2000, HUMAN PERCEPTION FAC
KANEKO K, 2004, HUMANOID ROBOT HRP, V2, P1083
Kanwisher N, 2000, NAT NEUROSCI, V3, P759, DOI 10.1038/77664
Kanwisher N, 1997, J NEUROSCI, V17, P4302
Keysers C, 2004, NEURON, V42, P335, DOI 10.1016/S0896-6273(04)00156-4
Kilner JM, 2003, CURR BIOL, V13, P522, DOI 10.1016/S0960-9822(03)00165-9
Kohler E, 2002, SCIENCE, V297, P846, DOI 10.1126/science.1070311
*KOK CO LTD, 2004, ACTR
Koski L, 2002, CEREB CORTEX, V12, P847, DOI 10.1093/cercor/12.8.847
Kozima H, 2007, PROG BRAIN RES, V164, P385, DOI 10.1016/S0079-6123(07)64021-7
LIBERMAN AM, 1985, COGNITION, V21, P1, DOI 10.1016/0010-0277(85)90021-6
Malikovic A, 2007, CEREB CORTEX, V17, P562, DOI 10.1093/cercor/bhj181
MORTON J, 1991, PSYCHOL REV, V98, P164, DOI 10.1037//0033-295X.98.2.164
Nelissen K, 2005, SCIENCE, V310, P332, DOI 10.1126/science.1115593
*OECD, 2004, POP STAT
Oztop E., 2005, INT J HUM ROBOT, V2, P537, DOI 10.1142/S0219843605000582
Peeters R, 2009, J NEUROSCI, V29, P11523, DOI 10.1523/JNEUROSCI.2040-09.2009
Phan KL, 2002, NEUROIMAGE, V16, P331, DOI 10.1006/nimg.2002.1087
Phillips ML, 2003, BIOL PSYCHIAT, V54, P504, DOI 10.1016/S0006-3223(03)00168-9
Pinker S., 1997, MIND WORKS, VXII
Press C, 2005, COGNITIVE BRAIN RES, V25, P632, DOI
10.1016/j.cogbrainres.2005.08.020
Puce A, 1996, J NEUROSCI, V16, P5205
RHODES G, 1990, PERCEPTION, V19, P773, DOI 10.1068/p190773
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Rizzolatti G, 2006, SCI AM, V295, P54, DOI 10.1038/scientificamerican1106-54
Santi A, 2003, J COGNITIVE NEUROSCI, V15, P800, DOI 10.1162/089892903322370726
Saygin AP, 2004, J NEUROSCI, V24, P6181, DOI 10.1523/JNEUROSCI.0504-04.2004
Schulte-Ruther M, 2007, J COGNITIVE NEUROSCI, V19, P1354, DOI
10.1162/jocn.2007.19.8.1354
Skipper JI, 2005, NEUROIMAGE, V25, P76, DOI 10.1016/j.neuroimage.2004.11.006
Steeves JKE, 2006, NEUROPSYCHOLOGIA, V44, P594, DOI
10.1016/j.neuropsychologia.2005.06.013
Tai YF, 2004, CURR BIOL, V14, P117, DOI 10.1016/j.cub.2004.01.005
WADA K, PSYCHOL SOCIAL EFFEC, P2785
Weismann August, 1906, Archiv Rassenbiol Berlin, V3
Wicker B, 2003, NEURON, V40, P655, DOI 10.1016/S0896-6273(03)00679-2
Wright P, 2008, NEUROIMAGE, V39, P894, DOI 10.1016/j.neuroimage.2007.09.014
ZECCA M, 2007, EMOTIONAL EXPRESSION
NR 87
TC 32
Z9 32
U1 4
U2 25
PU PUBLIC LIBRARY SCIENCE
PI SAN FRANCISCO
PA 185 BERRY ST, STE 1300, SAN FRANCISCO, CA 94107 USA
SN 1932-6203
J9 PLOS ONE
JI PLoS One
PD JUL 21
PY 2010
VL 5
IS 7
AR e11577
DI 10.1371/journal.pone.0011577
PG 12
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 629KQ
UT WOS:000280197500005
PM 20657777
OA gold
DA 2018-01-22
ER

PT J
AU Suleiman, W
Kanehiro, F
Yoshida, E
Laumond, JP
Monin, A
AF Suleiman, Wael
Kanehiro, Fumio
Yoshida, Eiichi
Laumond, Jean-Paul
Monin, Andre
TI Time Parameterization of Humanoid-Robot Paths
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Humanoid robot; optimization methods; robot dynamics; stability
criteria; timing
ID MANIPULATORS; CONSTRAINTS
AB This paper proposes a unified optimization framework for solving the time-
parameterization problem of humanoid-robot paths. Even though the time-parameterization
problem is well known in robotics, its application to humanoid robots has not been
addressed, because of the complexity of both the kinematic structure and the dynamic
equations of motion. The main contribution of this paper is to show that transforming a
statically stable path into a dynamically stable trajectory that respects the humanoid
robot's capacities can be expressed, via its time parameterization, as an optimization
problem. Furthermore, we propose an efficient method to solve the resulting optimization
problem. The proposed method has been successfully validated on the humanoid robot HRP-2
in several experiments, which reveal its effectiveness and robustness.
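As an illustration of the time-parameterization idea summarized above, the following
minimal Python sketch treats the squared path speed along a fixed joint-space path as
the decision variable and minimizes traversal time subject to a joint-velocity bound.
The toy path, the bound, and the use of a generic SLSQP solver are assumptions for
illustration only; the paper's formulation additionally handles dynamic (ZMP) stability
and the full humanoid dynamics.

    import numpy as np
    from scipy.optimize import minimize

    # Toy time parameterization: pick b(s) = (ds/dt)^2 along a fixed path q(s)
    # to minimize traversal time under a joint-velocity limit (assumed values).
    M, vmax = 50, 1.0                                # grid size, speed limit [rad/s]
    s = np.linspace(0.0, 1.0, M)
    ds = s[1] - s[0]
    q = np.column_stack([np.sin(np.pi * s), 0.5 * np.cos(np.pi * s)])  # 2-joint path
    dq = np.gradient(q, ds, axis=0)                  # dq/ds along the path

    def total_time(b):                               # dt = ds / sqrt(b)
        sb = np.sqrt(np.maximum(b, 1e-9))
        return np.sum(2.0 * ds / (sb[:-1] + sb[1:]))

    # |dq/ds| * sqrt(b) <= vmax at every grid point (worst joint)
    cons = [{"type": "ineq",
             "fun": lambda b: vmax - np.max(np.abs(dq), axis=1) * np.sqrt(np.maximum(b, 0.0))}]
    res = minimize(total_time, 0.1 * np.ones(M), constraints=cons,
                   bounds=[(1e-6, None)] * M, method="SLSQP")
    print("minimum traversal time [s]:", round(total_time(res.x), 3))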
C1 [Suleiman, Wael; Kanehiro, Fumio; Yoshida, Eiichi] Natl Inst Adv Ind Sci &
Technol, CNRS, JRL, CRT UMI3218, Tsukuba, Ibaraki 3058568, Japan.
[Laumond, Jean-Paul; Monin, Andre] CNRS, LAAS, F-31077 Toulouse, France.
RP Suleiman, W (reprint author), Natl Inst Adv Ind Sci & Technol, CNRS, JRL, CRT
UMI3218, Tsukuba, Ibaraki 3058568, Japan.
EM wael.suleiman@aist.go.jp; f-kanehiro@aist.go.jp; e.yoshida@aist.go.jp;
jpl@laas.fr; monin@laas.fr
RI Yoshida, Eiichi/M-3756-2016; Kanehiro, Fumio/L-8660-2016
OI Yoshida, Eiichi/0000-0002-3077-6964; Kanehiro, Fumio/0000-0002-0277-3467
FU Japan Society for the Promotion of Science (JSPS) [20-08816]; Japan
Science and Technology Agency (JST)-Centre National de la Recherche
Scientifique (CNRS)
FX This work was supported in part by the Japan Society for the Promotion
of Science (JSPS) under Grant-in-Aid 20-08816 for Scientific Research
and in part by the Japan Science and Technology Agency (JST)-Centre
National de la Recherche Scientifique (CNRS) Strategic Japanese-French
Cooperative Program "Robot motion planning and execution through online
information structuring in real-world environment." The work of W.
Suleiman was supported by the JSPS under a postdoctoral research
fellowship and part of his work was done at the Laboratoire d'analyse et
d'architectures des systemes (LAAS)-Centre National de la Recherche
Scientifique (CNRS) during the work on his Ph.D. dissertation. This
paper was presented in part at the 2008 IEEE/RSJ International
Conference on Intelligent RObots and Systems (IROS 2008), Nice, France,
September 22-26.
CR Bertsekas D. P., 1995, NONLINEAR PROGRAMMIN
BOBROW JE, 1985, INT J ROBOT RES, V4, P3, DOI 10.1177/027836498500400301
Jiang K, 1997, ROBOTICA, V15, P547, DOI 10.1017/S0263574797000635
Kanehiro F., 2008, ROB SCI SYST C ZUR S
KANEHIRO F, 2008, IEEE RSJ INT C INT R
Kuffner J, 2001, IEEE INT CONF ROBOT, P692, DOI 10.1109/ROBOT.2001.932631
LAMIRAUX F, 1998, P 5 INT S EXP ROB LO, P301
LENARCIC J, 2006, ADV ROBOT KINEMATICS
LOANDD J, 1999, P COMP AN, P220
PARK FC, 1995, INT J ROBOT RES, V14, P609, DOI 10.1177/027836499501400606
Rajan V., 1985, P INT C ROB AUT ST L, V2, P759
RENAUD M, 1992, ROBOTICS REV, V2, P225
ROCKAFELLAR RT, 1974, SIAM J CONTROL, V12, P268, DOI 10.1137/0312021
ROCKAFELLAR RT, 1993, SIAM REV, V35, P183, DOI 10.1137/1035044
ROCKAFELLAR RT, 1976, P SIAM AMS, V9, P145
ROCKAFELLAR RT, 1973, P 5 IFIP C OPT TEC 1, P418
Sciavicco L., 2000, MODELLING CONTROL RO
Siciliano B., 2000, MODELLING CONTROL RO, P185
SHIN KG, 1985, IEEE T AUTOMAT CONTR, V30, P531
SLOTINE JJE, 1989, IEEE T ROBOTIC AUTOM, V5, P118, DOI 10.1109/70.88024
Sohl G., 2000, RECURSIVE MULTIBODY
Yoshida E., 2007, IEEE RAS 7 INT C HUM
Suleiman W., 2008, P IEEE INT C ROB AUT, P2697
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
NR 24
TC 8
Z9 8
U1 0
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD JUN
PY 2010
VL 26
IS 3
BP 458
EP 468
DI 10.1109/TRO.2010.2047531
PG 11
WC Robotics
SC Robotics
GA 607XW
UT WOS:000278541600005
DA 2018-01-22
ER

PT J
AU Nomura, T
Nakao, A
AF Nomura, Tatsuya
Nakao, Akira
TI Comparison on Identification of Affective Body Motions by Robots Between
Elder People and University Students: A Case Study in Japan
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Emotion; Body motion expression; Elderly; Psychological experiment
AB Expressive behaviors based on body motions are one of the useful means by which
social robots can present their emotional states to users. On the other hand, some
psychological research has found an age dependence in emotion identification from human
facial expressions. In order to investigate this dependence for affective body
expressions of robots, a psychological experiment was conducted in Japan using a
small-sized humanoid robot on which three types of affective motion expression (anger,
sadness, and pleasure) were implemented. The results of the experiment, which involved
seventeen university student subjects and fifteen elder subjects, showed differences
between younger and elder subjects in emotion identification, the body parts paid
attention to, and impressions of motion speed and magnitude for these affective body
motions of the robot. Moreover, the results suggested correlations between the accuracy
of emotion identification and cognitive bias toward specific body motion parts of the
robot. Based on these results, the paper discusses some implications for human-robot
interaction research.
C1 [Nomura, Tatsuya] Ryukoku Univ, Dept Media Informat, Otsu, Shiga 5202194, Japan.
[Nomura, Tatsuya] ATR, Intelligent Robot & Commun Labs, Kyoto 6190288, Japan.
[Nakao, Akira] Ryukoku Univ, Grad Sch Sci & Technol, Otsu, Shiga 5202194, Japan.
RP Nomura, T (reprint author), Ryukoku Univ, Dept Media Informat, 1-5 Yokotani,Seta
Ohe Cho, Otsu, Shiga 5202194, Japan.
EM nomura@rins.ryukoku.ac.jp; t09m076@mail.ryukoku.ac.jp
FU High-Tech Research Center Project for Private Universities; Ministry of
Education, Culture, Sports, Science and Technology (MEXT)
FX The authors deeply thank Dr. Atsunobu Suzuki of Nagoya University for
his providing us with research information on age differences on emotion
identification of human facial expression. The work was supported in
part by the High-Tech Research Center Project for Private Universities
with a matching fund subsidy from Ministry of Education, Culture,
Sports, Science and Technology (MEXT), 2002-2006.
CR ANDERSON J, 1986, BALLET MODERN DANCE
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Cornelius R. R., 1996, SCI EMOTION RES TRAD
Goetz J, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P55
Ito K, 2004, P 1 IEEE TECHN EXH B, P35
KEMPER TD, 1990, RES AGENDAS SOCIOLOG
Kidd C. D., 2004, P INT C INT ROB SYST, V4, P3559, DOI [DOI
10.1109/IR0S.2004.1389967, 10.1109/IROS.2004.1389967]
Kim E. H., 2009, P 3 INT C UB INF MAN, P362
MARUI N, 2005, 23 ANN C ROB SOC JAP
Mutlu B., 2006, P 15 IEEE INT S ROB, P74, DOI DOI 10.1109/ROMAN.2006.314397
Nakata T., 2001, J ROBOTICS SOC JAPAN, V19, P252
Nomura T, 2008, IEEE T ROBOT, V24, P442, DOI 10.1109/TRO.2007.914004
Nomura T, 2006, INTERACT STUD, V7, P437, DOI 10.1075/is.7.3.14nom
Plutchik R., 1984, APPROACHES EMOTION, P197
Scopelliti M., 2005, Universal Access in the Information Society, V4, P146, DOI
10.1007/s10209-005-0118-1
Wong B, 2005, NEUROPSYCHOLOGY, V19, P739, DOI 10.1037/0894-4105.19.6.739
Yuk N-S, 2008, P INT C CONTR AUT SY, P2350
NR 17
TC 9
Z9 9
U1 0
U2 4
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD JUN
PY 2010
VL 2
IS 2
BP 147
EP 157
DI 10.1007/s12369-010-0050-2
PG 11
WC Robotics
SC Robotics
GA V31OR
UT WOS:000208893400004
OA gold
DA 2018-01-22
ER

PT J
AU Ugurlu, B
Kawamura, A
AF Ugurlu, Barkan
Kawamura, Atsuo
TI ZMP-Based Online Jumping Pattern Generation for a One-Legged Robot
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Angular momentum; Euler's equations; jumping robot; polar coordinates;
zero moment point (ZMP)
ID HUMANOID ROBOT
AB This paper proposes a method for generating online jumping patterns, which can be
applied to one-legged jumping robots and optionally to humanoid robots. Our proposed
method is based on ensuring overall dynamic balance throughout the complete jumping
cycle. To reach this goal, we discretized the zero moment point equation in polar
coordinates so that angular momentum information can be included in a natural way. Thus,
undesired torso angle fluctuation is expected to be better restrained than with other
methods in which angular momentum information is ignored or zero referenced. Moreover,
we unified the support and flight phases in terms of motion generation. Having obtained
successful simulation results and vertical jumping experiments in our previous work, we
conducted forward jumping experiments. As a result, we obtained successful and
repetitive jumping cycles, which satisfactorily verify the proposed method.
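For background on the balance criterion named in the abstract, the zero moment point of
a point-mass model is commonly written in Cartesian coordinates as below; the term
involving the rate of change of angular momentum is the quantity that the
polar-coordinate discretization described above is designed to retain. This is the
standard textbook form, given only as context, not the authors' polar-coordinate
equation.

    \[
      x_{\mathrm{zmp}} \;=\; x_c \;-\; \frac{z_c\,\ddot{x}_c}{\ddot{z}_c + g}
      \;-\; \frac{\dot{L}_y}{m\,(\ddot{z}_c + g)},
    \]

where $(x_c, z_c)$ is the center-of-mass position in the sagittal plane, $m$ the total
mass, $g$ the gravitational acceleration, and $\dot{L}_y$ the rate of change of angular
momentum about the center of mass; dropping the last term gives the frequently used
cart-table approximation in which angular momentum is zero referenced.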
C1 [Ugurlu, Barkan; Kawamura, Atsuo] Yokohama Natl Univ, Dept Elect & Comp Engn,
Yokohama, Kanagawa 2408501, Japan.
RP Ugurlu, B (reprint author), Yokohama Natl Univ, Dept Elect & Comp Engn,
Yokohama, Kanagawa 2408501, Japan.
EM barkanu@kawalab.dnj.ynu.ac.jp; kawamura@kawalab.dnj.ynu.ac.jp
RI kawamura, atsuo/L-8158-2014; Ugurlu, Barkan/A-4793-2015
OI kawamura, atsuo/0000-0002-3085-2314; Ugurlu, Barkan/0000-0002-9124-7441
CR Erbatur K, 2009, IEEE T IND ELECTRON, V56, P835, DOI 10.1109/TIE.2008.2005150
Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
Fujimoto Y, 1998, IEEE ROBOT AUTOM MAG, V5, P33, DOI 10.1109/100.692339
Gregorio P, 1997, IEEE T SYST MAN CY B, V27, P626, DOI 10.1109/3477.604106
GUPTA P, 2006, P IEEE C IND INF SYS, P247
Jensen B, 2005, IEEE T IND ELECTRON, V52, P1530, DOI 10.1109/TIE.2005.858730
Kagami S, 2002, AUTON ROBOT, V12, P71, DOI 10.1023/A:1013210909840
KAJITA S, 2004, P IEEE INT C ROB AUT, P629
KAJITA S, 2005, P IEEE INT C ROB AUT, P616
Kajita S, 2007, IEEE INT CONF ROBOT, P3963, DOI 10.1109/ROBOT.2007.364087
KAWAMURA A, 2006, P IEEE RAS INT C HUM, P599
Li Q., 1991, P IEEE RSJ INT WORKS, P1568
Maeda T, 2008, AMC '08: 10TH INTERNATIONAL WORKSHOP ON ADVANCED MOTION CONTROL,
VOLS 1 AND 2, PROCEEDINGS, P301
Minakata H, 2008, IEEE T IND ELECTRON, V55, P1271, DOI 10.1109/TIE.2007.911919
Motoi N, 2009, IEEE T IND ELECTRON, V56, P54, DOI 10.1109/TIE.2008.2004663
Press W. H, 1992, NUMERICAL RECIPES C
RAIBERT MH, 1986, LEGGED ROBOT BALANCE, P57
TAJIMA R, 2006, P IEEE C INT ROB SYS, P1726
Takeda T, 2007, IEEE T IND ELECTRON, V54, P699, DOI 10.1109/TIE.2007.891642
TIMOSHENKO S, 1948, ADV DYNAMICS, P359
*UEFA, WE AR ROB
UGURLU B, 2008, P INT C CLIN WALK RO, P1061
UGURLU B, 2008, P IEEE C IND EL CONT, P1668
UGURLU B, 2009, P IEEE C INT ROB SYS, P1100
UGURLU B, 2007, P 25 JPN ROB SOC
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
XU F, 2008, P JPN I EL ENG TECH, P7
NR 27
TC 16
Z9 18
U1 0
U2 12
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD MAY
PY 2010
VL 57
IS 5
BP 1701
EP 1709
DI 10.1109/TIE.2009.2032439
PG 9
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 583MH
UT WOS:000276680000023
DA 2018-01-22
ER

PT J
AU Hosoda, K
Sakaguchi, Y
Takayama, H
Takuma, T
AF Hosoda, Koh
Sakaguchi, Yuki
Takayama, Hitoshi
Takuma, Takashi
TI Pneumatic-driven jumping robot with anthropomorphic muscular skeleton
structure
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Muscular skeleton system; Pneumatic artificial muscles; Humanoid robot;
Jumping; Biarticular muscles
ID PASSIVE-DYNAMIC WALKERS; MUSCLES; WALKING; DESIGN
AB The human muscular skeleton structure plays an important role in adaptive
locomotion. Understanding its mechanism is expected to help realize adaptive locomotion
in humanoid robots as well. In this paper, a jumping robot driven by pneumatic
artificial muscles is designed to duplicate human leg structure and function. It has
three joints and nine muscles, three of which are biarticular muscles. For controlling
such a redundant robot, we take biomechanical findings into account: biarticular muscles
mainly contribute to joint coordination, whereas monoarticular muscles mainly provide
power. Through experiments, we find that (1) the biarticular muscles realize coordinated
movement of the joints when the knee and/or hip is extended, (2) extension of the ankle
does not lead to coordinated movement, and (3) extension of the knee can be superposed
with that of the hip without losing the joint coordination. The obtained knowledge can
be used not only for robots, but may also contribute to the understanding of adaptive
human mechanisms.
C1 [Hosoda, Koh; Sakaguchi, Yuki; Takayama, Hitoshi] Osaka Univ, Grad Sch Engn,
Dept Adapt Machine Syst, Suita, Osaka, Japan.
[Takuma, Takashi] Osaka Inst Technol, Dept Elect & Elect Syst Engn, Osaka 535,
Japan.
RP Hosoda, K (reprint author), Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst,
Suita, Osaka, Japan.
EM hosoda@ams.eng.osaka-u.ac.jp; takuma@ee.oit.ac.jp
FU Japanese Ministry of Education, Culture, Sports, Science and Technology
FX This work is partly supported by a Grant-in-Aid for Scientific Research
on Priority Areas "Emergence of Adaptive Motor Function through
Interaction between Body, Brain and Environment" from the Japanese
Ministry of Education, Culture, Sports, Science and Technology.
CR Ahmadi M, 1997, IEEE T ROBOTIC AUTOM, V13, P96, DOI 10.1109/70.554350
BLICKHAN R, 1989, J BIOMECH, V22, P1217, DOI 10.1016/0021-9290(89)90224-8
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Doorenbosch CAM, 1997, BRAIN RES, V751, P239, DOI 10.1016/S0006-8993(96)01327-3
Fukashiro S, 2005, INT J SPORT HLTH SCI, V3, P272, DOI DOI 10.5432/IJSHS.3.272
Hosoda K, 2008, ROBOT AUTON SYST, V56, P46, DOI 10.1016/j.robot.2007.09.010
Hurst JW, 2007, IEEE INT CONF ROBOT, P1863, DOI 10.1109/ROBOT.2007.363593
HYON S, 2002, P 2002 INT C ROB AUT, P3948
Iida F, 2007, IEEE INT CONF ROBOT, P3970, DOI 10.1109/ROBOT.2007.364088
Jacobs R, 1996, J BIOMECH, V29, P513, DOI 10.1016/0021-9290(95)00067-4
KAJITA S, 2005, P 2004 IEEE INT C RO
KODITSCHECK DE, 1991, INT J ROBOT RES, V10, P269
NAGASAKA K, 2004, P IEEE INT C ROB AUT, P3189
Niiyama R., 2008, 4 INT S AD MOT AN MA
Raibert M.H., 1986, LEGGED ROBOTS BALANC
Seyfarth A, 2002, J BIOMECH, V35, P649, DOI 10.1016/S0021-9290(01)00245-7
van der Linde RQ, 1999, IEEE T ROBOTIC AUTOM, V15, P599, DOI 10.1109/70.781963
Vanderborght B, 2008, ADV ROBOTICS, V22, P1027, DOI 10.1163/156855308X324749
Voronov AV., 2004, HUMAN PHYSL, V30, P476
Wisse M, 2005, IEEE T ROBOT, V21, P393, DOI 10.1109/TRO.2004.838030
Wisse M, 2004, ROBOTICA, V22, P681, DOI 10.1017/S0263574704000475
NR 21
TC 51
Z9 51
U1 2
U2 37
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD APR
PY 2010
VL 28
IS 3
SI SI
BP 307
EP 316
DI 10.1007/s10514-009-9171-6
PG 10
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA 567OB
UT WOS:000275455400006
DA 2018-01-22
ER

PT J
AU Solis, J
Takanishi, A
AF Solis, Jorge
Takanishi, Atsuo
TI Recent Trends in Humanoid Robotics Research: Scientific Background,
Applications, and Implications
SO ACCOUNTABILITY IN RESEARCH-POLICIES AND QUALITY ASSURANCE
LA English
DT Article
DE entertainment robotics; humanoid robotics; medical robotics; roboethics
ID FEEDBACK; PERFORMANCE; SKILLS; SURGERY; MODEL
AB Even though the market size is still small at this moment, the fields in which
robots are applied are gradually spreading from the manufacturing industry to others,
with robots seen as one of the important components in supporting an aging society. For
this purpose, research on human-robot interaction (HRI) has become an emerging topic of
interest for both basic research and customer applications. These studies focus
especially on behavioral and cognitive aspects of the interaction and on the social
contexts surrounding it. As a part of these studies, the term "roboethics" has been
introduced as an approach to discussing the potentialities and the limits of robots in
relation to human beings. In this article, we describe recent research trends in the
field of humanoid robotics. Their principal applications and their possible impact are
discussed.
C1 [Solis, Jorge; Takanishi, Atsuo] Waseda Univ, Fac Sci & Engn, Humanoid Robot
Inst, Tokyo 1698555, Japan.
RP Solis, J (reprint author), Waseda Univ, Fac Sci & Engn, Humanoid Robot Inst, 3-
4-1 Ookubo, Tokyo 1698555, Japan.
EM solis@ieee.org
FU Gifu Prefecture; Ministry of Education, Culture, Sports, Science, and
Technology of Japan; JSPS, KAKENHI [21520004]
FX A part of the research on the Musical Instrument-Playing Robot was done
at the HRI, Waseda University, and the Center for Advanced Biomedical
Sciences (TWINs). This research is supported (in part) by a Grant-in-Aid
for the WABOT-HOUSE Project by Gifu Prefecture. This work is also
supported in part by Global COE Program "Global Robot Academia" from the
Ministry of Education, Culture, Sports, Science, and Technology of
Japan. WF-4RIV and WAS-2 have been designed by 3D CAD design software
SolidWorks. Special thanks to SolidWorks Japan K. K. for the software
contribution. The research on the Medical Assisted-Training HR was
supported by the Knowledge Cluster Initiative, a project from the
Ministry of Education, Culture, Sports, Science, and Technology of
Japan. We would like also to thank the doctors from the Department of
Anesthesiology at the Tokyo Women's Medical University for their
valuable time in providing medical knowledge for the WKA-3. Finally,
this study is in part supported by JSPS, KAKENHI (Grants-in-Aid for
Scientific Research), No. 21520004 (K. Ishihara, PI).
CR Asada M, 2001, ROBOT AUTON SYST, V37, P185, DOI 10.1016/S0921-8890(01)00157-9
Brooks R, 2002, FLESH MACHINES ROBOT
Brydges R, 2008, J AM COLL SURGEONS, V206, P205, DOI
10.1016/j.jamcollsurg.2007.07.045
CAPURRO R, 2000, INT INF LIBR REV, V32, P257
Cassell J., 2000, EMBODIED CONVERSATIO
Chang L, 2003, SURG ENDOSC, V17, P1744, DOI 10.1007/s00464-003-8813-6
Dautenhahn K, 1997, CYBERNET SYST, V28, P417, DOI 10.1080/019697297126074
Dautenhahn K., 2002, SOCIALLY INTELLIGENT
Delson N., 2003, J CLIN ENG, V28, P211
Ezra DG, 2009, OPHTHALMOLOGY, V116, P257, DOI 10.1016/j.ophtha.2008.09.038
Fong T, 2003, ROBOT AUTON SYST, V42, P143, DOI 10.1016/S0921-8890(02)00372-X
Francis Paula, 2006, Urol Nurs, V26, P99
Goertz R. C., 1964, P 12 C REM SYST TECH, P117
ISHIDA T, 2001, P IEEE RSJ INT C INT, V2, P1079
KATO I, 2002, ROBOTICS, V3, P143
Kato I., 1973, P CISM IFTOMM S THEO, P12
KAWATO M, 1992, BIOL CYBERN, V68, P95, DOI 10.1007/BF00201431
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Levy David, 2007, LOVE SEX ROBOTS EVOL
MASUDA M, 2004, P INT S SOC US ANTHR
Mayrose J, 2003, RESUSCITATION, V59, P133, DOI 10.1016/S0300-9572(03)00179-5
NOH Y, 2009, P INT C ROB AUT KOB, P3843
Noh Y., 2010, P 18 IEEE INT C NETW, P183
Noh Y, 2008, P IEEE RAS-EMBS INT, P574, DOI 10.1109/BIOROB.2008.4762799
PICARD ROSALIND W., 1997, AFFECTIVE COMPUTING
RALSTON A, 1993, ENCY COMPUTER SCI, P1167
Rosen J, 2006, IEEE T BIO-MED ENG, V53, P399, DOI 10.1109/TBME.2005.869771
RUCCI M, 1997, P 1997 IEEE INT C RO, V1, P443
Scassellati B, 2002, AUTON ROBOT, V12, P13, DOI 10.1023/A:1013298507114
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
SCHEUTZ M, 2009, P ICRA2009 WORKSH RO, P20
Sewell C, 2008, COMPUT AIDED SURG, V13, P63, DOI 10.3109/10929080801957712
Solis J., 2008, MECH MACH THEORY, V44, P527
Solis J., 2010, P INT C ROB AUT, P42
SOLIS J, 2004, THESIS PERCEPTUAL RO
Solis J, 2009, IEEE INT CON AUTO SC, P591, DOI 10.1109/COASE.2009.5234174
Solis J, 2009, ADV ROBOTICS, V23, P1849, DOI [10.1163/016918609X12518783330207,
10.1163/016918609\12518783330207]
Tojo T., 2000, P 2000 IEEE INT C SY, V2, P858
*TRUECOMPANION, 2010, TRUE COMP CORP ROXXX
Veruggio G., 2008, SPRINGER HDB ROBOTIC, P1499
Xeroulis GJ, 2007, SURGERY, V141, P442, DOI 10.1016/j.surg.2006.09.012
NR 41
TC 0
Z9 0
U1 0
U2 9
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0898-9621
J9 ACCOUNT RES
JI Account. Res.
PY 2010
VL 17
IS 6
SI SI
BP 278
EP 298
AR PII 929385227
DI 10.1080/08989621.2010.523673
PG 21
WC Medical Ethics
SC Medical Ethics
GA 678WV
UT WOS:000284114800002
PM 21069592
DA 2018-01-22
ER

PT J
AU Mombaur, K
Laumond, JP
Yoshida, E
AF Mombaur, Katja
Laumond, Jean-Paul
Yoshida, Eiichi
TI An Optimal Control-Based Formulation to Determine Natural Locomotor
Paths for Humanoid Robots
SO ADVANCED ROBOTICS
LA English
DT Article
DE Biologically inspired path planning; optimal control; locomotion;
humanoid; holonomic and non-holonomic walking
ID HUMAN WALKING; MOTION; ANIMATION; SPEED
AB In this paper we explore the underlying principles of natural locomotion path
generation of human beings. The knowledge of these principles is useful to
implement biologically inspired path planning algorithms on a humanoid robot. By
'locomotion path' we denote the motion of the robot as a whole in the plane. The
key to our approach is to formulate the path planning problem as an optimal control
problem. We propose a single dynamic model valid for all situations, which includes
both non-holonomic and holonomic modes of locomotion, as well as an appropriately
designed unified objective function. The choice between holonomic and non-holonomic
behavior is not made by a switching model but emerges smoothly, along with the optimal
path, as a result of the optimization by efficient numerical
techniques. The proposed model and objective function are successfully tested in
six different locomotion scenarios. The resulting paths are implemented on the HRP-
2 robot in the simulation environment OpenHRP as well as in the experiment on the
real robot. (C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan,
2010
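The optimal control formulation referred to above can be stated generically as follows.
The model shown is a standard planar locomotion model with forward, lateral and angular
velocities as controls; it is given as an assumed illustration of the problem class, not
the paper's exact dynamic model or objective function.

    \[
      \min_{u(\cdot),\,T}\; \int_0^T L\bigl(x(t),u(t)\bigr)\,dt
      \quad \text{s.t.}\quad
      \dot{x} = v\cos\theta - v_{\perp}\sin\theta,\;\;
      \dot{y} = v\sin\theta + v_{\perp}\cos\theta,\;\;
      \dot{\theta} = \omega,
    \]

with controls $u = (v, v_{\perp}, \omega)$ and boundary conditions on the initial and
final pose $(x, y, \theta)$. Penalizing the lateral velocity $v_{\perp}$ in the running
cost $L$ lets the solution move smoothly between non-holonomic (unicycle-like,
$v_{\perp}\approx 0$) and holonomic (sideways-capable) locomotion without an explicit
switching model.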
C1 [Mombaur, Katja; Laumond, Jean-Paul; Yoshida, Eiichi] Univ Toulouse, CNRS, LAAS,
Joint French Japanese Robot Lab, F-31077 Toulouse, France.
[Yoshida, Eiichi] Natl Inst Adv Ind Sci & Technol, CNRS, AIST Joint Robot Lab,
CRT,UMI 3218, Tsukuba, Ibaraki 3058568, Japan.
RP Mombaur, K (reprint author), Univ Toulouse, CNRS, LAAS, Joint French Japanese
Robot Lab, 7 Ave Colonel Roche, F-31077 Toulouse, France.
EM kmombaur@laas.fr
RI Yoshida, Eiichi/M-3756-2016
OI Yoshida, Eiichi/0000-0002-3077-6964
FU Agence Nationale de Recherche
FX We thank the Simulation & Optimization group at IWR, University of
Heidelberg for providing the optimal control package MUSCOD. Financial
support by the Agence Nationale de Recherche within the project
"Locanthrope" is gratefully acknowledged.
CR Arechavaleta G, 2008, AUTON ROBOT, V25, P25, DOI 10.1007/s10514-007-9075-2
Arechavaleta G, 2008, IEEE T ROBOT, V24, P5, DOI 10.1109/TRO.2008.915449
Azevedo C, 2007, BIOL CYBERN, V96, P209, DOI 10.1007/s00422-006-0118-0
Bock H., 1987, BONNER MATH SCHRIFTE, V183
Bock H., 1984, P 9 IFAC WORLD C BUD, P242
Carlsen AN, 2005, NEUROSCI LETT, V384, P217, DOI 10.1016/j.neulet.2005.04.071
CHESTNUTT J, 2005, P IEEE INT C ROB AUT, P629
Choi MG, 2003, ACM T GRAPHIC, V22, P182, DOI 10.1145/636886.636889
Courtine G, 2003, EUR J NEUROSCI, V18, P177, DOI 10.1046/j.1460-
9568.2003.02736.x
England SA, 2007, GAIT POSTURE, V25, P172, DOI 10.1016/j.gaitpost.2006.03.003
Esteves C, 2006, ACM T GRAPHIC, V25, P319, DOI 10.1145/1138450.1138457
Gienger M., 2005, P IEEE RAS INT C HUM, P238
GUTMANN JS, 2005, P INT JOINT C ART IN, P1232
Hicheur H, NEUROSCI LETT, V383
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kang HG, 2008, GAIT POSTURE, V27, P572, DOI 10.1016/j.gaitpost.2007.07.009
Kovar L, 2002, P 29 ANN C COMP GRAP, P473
Latombe J.-C., 1991, ROBOT MOTION PLANNIN
Laumond J.P., 1998, LECT NOTES CONTROL I
LaValle S. M., 2006, PLANNING ALGORITHMS
Leineweber DB, 2003, COMPUT CHEM ENG, V27, P157, DOI 10.1016/S0098-
1354(02)00158-8
MOMBAUR K, 2010, AUTONOMOUS IN PRESS
Mombaur KD, 2005, IEEE T ROBOT, V21, P1148, DOI 10.1109/TRO.2005.855990
Mombaur KD, 2005, ROBOTICA, V23, P21, DOI 10.1017/S026357470400058X
Multon F, 2008, PRESENCE-TELEOP VIRT, V17, P17, DOI 10.1162/pres.17.1.17
Orendurff MS, 2006, GAIT POSTURE, V23, P106, DOI 10.1016/j.gaitpost.2004.12.008
PATLA A, 2000, P INT S AD MOT AN MA
Pettre J, 2006, COMPUT ANIMAT VIRT W, V17, P109, DOI 10.1002/cav.76
Reynolds C., 1987, COMPUT GRAPH, V21, P25, DOI DOI 10.1145/37402.37406
Rose C, 1998, IEEE COMPUT GRAPH, V18, P32, DOI 10.1109/38.708559
SCHULTZ G, 2010, IEEE T MECH IN PRESS
Stasse O., 2006, P IEEE RSJ INT C INT, P348
Todorov E, 2004, NAT NEUROSCI, V7, P907, DOI 10.1038/nn1309
Verkerke GJ, 2005, J BIOMECH, V38, P1881, DOI 10.1016/j.jbiomech.2004.08.015
Witkin A., 1995, P 22 ANN C COMP GRAP, P105, DOI DOI 10.1145/218380.218422
Wolpert DM, 2000, NAT NEUROSCI, V3, P1212, DOI 10.1038/81497
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
Yoshida E., 2006, P IEEE RAS INT C HUM, P1
Yoshida E, 2008, IEEE T ROBOT, V24, P1186, DOI 10.1109/TRO.2008.2002312
NR 41
TC 3
Z9 3
U1 0
U2 3
PU VSP BV
PI LEIDEN
PA BRILL ACADEMIC PUBLISHERS, PO BOX 9000, 2300 PA LEIDEN, NETHERLANDS
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2010
VL 24
IS 4
BP 515
EP 535
DI 10.1163/016918610X487090
PG 21
WC Robotics
SC Robotics
GA 579JJ
UT WOS:000276365600003
DA 2018-01-22
ER

PT J
AU Herdt, A
Diedam, H
Wieber, PB
Dimitrov, D
Mombaur, K
Diehl, M
AF Herdt, Andrei
Diedam, Holger
Wieber, Pierre-Brice
Dimitrov, Dimitar
Mombaur, Katja
Diehl, Moritz
TI Online Walking Motion Generation with Automatic Footstep Placement
SO ADVANCED ROBOTICS
LA English
DT Article
DE Walking humanoid robot; linear model predictive control
ID ACTIVE SET STRATEGY
AB The goal of this paper is to demonstrate the capacity of model predictive
control (MPC) to generate stable walking motions without the use of predefined
footsteps. Building on well-known MPC schemes for walking motion generation, we show
that a minimal modification of these schemes allows the design of an online
walking motion generator that can track a given reference speed of the robot and
decide automatically the footstep placement. Simulation results are proposed on the
HRP-2 humanoid robot, showing a significant improvement over previous approaches.
(C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2010
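To make the MPC structure referred to above concrete, the sketch below builds the
prediction matrices of the linear inverted pendulum driven by center-of-mass jerk and
tracks a reference velocity over the horizon in closed form. It is an unconstrained,
simplified illustration with assumed parameter values; the scheme described in the
abstract additionally treats footstep positions as decision variables and enforces ZMP
and foot-placement constraints in a quadratic program.

    import numpy as np

    # Simplified linear-MPC sketch for the linear inverted pendulum (assumed values).
    # State x = [c, c_dot, c_ddot] (CoM position, velocity, acceleration), input = jerk.
    T, N = 0.1, 16                                   # sampling period [s], horizon
    A = np.array([[1, T, T**2 / 2],
                  [0, 1, T],
                  [0, 0, 1]])
    B = np.array([[T**3 / 6], [T**2 / 2], [T]])
    Cv = np.array([[0.0, 1.0, 0.0]])                 # CoM-velocity output

    # Prediction over the horizon: V = Px x0 + Pu U
    Px = np.vstack([Cv @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Pu = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            Pu[k, j] = (Cv @ np.linalg.matrix_power(A, k - j) @ B)[0, 0]

    x0 = np.zeros(3)                                 # current CoM state
    v_ref = 0.3 * np.ones(N)                         # reference forward speed [m/s]
    beta = 1e-4                                      # jerk regularization weight
    # min_U ||Px x0 + Pu U - v_ref||^2 + beta ||U||^2  has a closed-form solution:
    U = np.linalg.solve(Pu.T @ Pu + beta * np.eye(N), Pu.T @ (v_ref - Px @ x0))
    print("first jerk command applied this cycle:", round(float(U[0]), 4))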
C1 [Herdt, Andrei; Wieber, Pierre-Brice; Dimitrov, Dimitar] INRIA, Tsukuba, Ibaraki
3058568, Japan.
[Herdt, Andrei; Wieber, Pierre-Brice; Dimitrov, Dimitar] AIST, CNRS, Tsukuba,
Ibaraki 3058568, Japan.
[Diedam, Holger; Mombaur, Katja] Heidelberg Univ, D-69120 Heidelberg, Germany.
[Diehl, Moritz] Katholieke Univ Leuven, B-3001 Louvain, Belgium.
RP Wieber, PB (reprint author), INRIA, AIST Tsukuba Cent 2,1-1-1 Umezono, Tsukuba,
Ibaraki 3058568, Japan.
EM Pierre-Brice.Wieber@inria.fr
RI Diehl, Moritz/K-9537-2015
FU French Agence Nationale de la Recherche [ANR-08-JCJC-0075-01]; Research
Council KUL [CoE EF/05/006]; German Academic Exchange Service (DAAD);
Programme Hubert Curien; European Community; foundation Landesstiftung
Baden-Wurttemberg
FX The authors would like to thank Olivier Stasse without whom the
comparison in Fig. 7 would not have been possible. A. H. and P.-B. W.
would like to thank Kazuhito Yokoi and Aderrahmane Kheddar for hosting
them in the CNRS/AIST Joint Robotics Lab in Tsukuba, Japan. This
research was supported by the French Agence Nationale de la Recherche,
grant ANR-08-JCJC-0075-01. This research was also supported by Research
Council KUL: CoE EF/05/006 Optimization in Engineering Center (OPTEC)
and the German-French cooperation program Procope coordinated by the
German Academic Exchange Service (DAAD) and the Programme Hubert Curien.
The research of P.-B. W. was supported by a Marie Curie International
Outgoing Fellowship within the 7th European Community Framework
Programme. The research of K. M. was supported by the Postdoc Program of
the foundation Landesstiftung Baden-Wurttemberg.
CR DIEDAM H, 2008, P IEEE RSJ INT C INT, P1121
Diehl M, 2001, THESIS U HEIDELBERG
DIMITROV D, 2009, P IEEE INT C ROB AUT, P1171
DIMITROV D, 2008, P IEEE INT C ROB AUT, P2685
Ferreau HJ, 2008, INT J ROBUST NONLIN, V18, P816, DOI 10.1002/rnc.1251
Ferreau HJ, 2007, ANNU REV CONTROL, V31, P293, DOI
10.1016/j.arcontrol.2007.09.001
KAJITA S, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-3, P1405, DOI 10.1109/ROBOT.1991.131811
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kanzakis S, 2005, P INT C HUM ROB TSUK, P301
Morisawa M, 2007, IEEE INT CONF ROBOT, P3989, DOI 10.1109/ROBOT.2007.364091
NAGASAKA K, 2004, P IEEE INT C ROB AUT, P3189
Nishiwaki K, 2006, IEEE INT CONF ROBOT, P2667, DOI 10.1109/ROBOT.2006.1642104
Schittkowski K., 2005, QL FORTRAN CODE CONV
SUGIHARA T, 2008, P IEEE INT C ROB AUT, P1264
Takeuchi H, 2003, J ROBOTIC SYST, V20, P3, DOI 10.1002/rob.10065
Wieber P. B., 2006, P IEEE RAS INT C HUM, P137, DOI DOI 10.1109/ICHR.2006.321375
WIEBER PB, 2008, P IEEE RSJ INT C INT, P1103
Wieber P.-B., 2002, P INT WORKSH HUM HUM
Wieber PB, 2006, ROBOT AUTON SYST, V54, P559, DOI 10.1016/j.robot.2006.04.007
NR 20
TC 76
Z9 78
U1 1
U2 5
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2010
VL 24
IS 5-6
BP 719
EP 737
DI 10.1163/016918610X493552
PG 19
WC Robotics
SC Robotics
GA 600VI
UT WOS:000278016600006
DA 2018-01-22
ER

PT J
AU Zhu, C
Kawamura, A
AF Zhu, Chi
Kawamura, Atsuo
TI Development of Biped Robot MARI-3 for Jumping
SO ADVANCED ROBOTICS
LA English
DT Article
DE Biped robot; humanoid; jumping; running
AB A biped robot, MARI-3, is developed for jumping, with the ultimate objective of fast
walking and running. Its mechanical structure, including the joint configuration and
specifications, the knee joint, and the speed reduction mechanism, is described in
detail. A dedicated control system for MARI-3, RON (RObot Network), which is a serial
and distributed network consisting of a microcontroller, a host unit, servo units,
sensor units and servo amplifiers, is presented together with the sensor system. With
the developed biped robot MARI-3, one-leg jumping with a jumping time of 110 ms and a
jumping height of 4.0 cm was implemented as an initial verification experiment.
Furthermore, a comparison of MARI-3 with other jumping or running robots makes clear
MARI-3's potential for fast walking, jumping and running. (C) Koninklijke Brill NV,
Leiden and The Robotics Society of Japan, 2010
C1 [Zhu, Chi] Maebashi Inst Technol, Fac Engn, Dept Syst Life Engn, Gunma 3710816,
Japan.
[Kawamura, Atsuo] Yokohama Natl Univ, Dept Elect & Comp Engn, Grad Sch Engn,
Hodogaya Ku, Yokohama, Kanagawa 2408501, Japan.
RP Zhu, C (reprint author), Maebashi Inst Technol, Fac Engn, Dept Syst Life Engn,
Kamisadori 460-1, Gunma 3710816, Japan.
EM zhu@maebashi-it.ac.jp
RI kawamura, atsuo/L-8158-2014
OI kawamura, atsuo/0000-0002-3085-2314
CR Asano Y., 2004, P 8 IEEE INT WORKSH, P399
CHUO B, 2008, P IEEE INT C HUM ROB, P299
FUJIMOTO Y, 1995, IEEE INT CONF ROBOT, P2877, DOI 10.1109/ROBOT.1995.525692
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirano T., 2000, P 6 INT WORKSH ADV M, P606
KAJITA S, 2004, P IEEE INT C ROB AUT, P629
KANEHIRO F, 2003, J ROBOTICS SOC JAPAN, V21, P201
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kim JY, 2005, P IEEE INT C ROB AUT, P1443
LOHMEIER S, 2004, P IEEE INT C ROB AUT, P4222
NAKASAKA K, 2004, P 2004 IEEE ICRA, P3189
OGURA Y, 2004, P IEEE INT C ROB AUT, P134
*ROB SOC JAP, 2005, ROB HDB, P1103
TAJIMA R, 2006, P IEEE C INT ROB SYS, P1726
Takanishi A., 1985, J ROBOTICS SOC JAPAN, V3, P325
Zhu C., 2006, P IEEE RSJ INT C INT, P5762
ZHU C, 2004, P 8 IEEE INT WORKSH, P427
NR 17
TC 4
Z9 4
U1 0
U2 6
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2010
VL 24
IS 11
BP 1661
EP 1675
DI 10.1163/016918610X512640
PG 15
WC Robotics
SC Robotics
GA 647FF
UT WOS:000281602500007
DA 2018-01-22
ER

PT J
AU Omrcen, D
Ude, A
AF Omrcen, Damir
Ude, Ales
TI Redundancy Control of a Humanoid Head for Foveation and
Three-Dimensional Object Tracking: A Virtual Mechanism Approach
SO ADVANCED ROBOTICS
LA English
DT Article
DE Active stereo vision; foveated vision; humanoid head; virtual mechanism;
redundancy resolution
ID OBSTACLE AVOIDANCE; MANIPULATORS; CALIBRATION; ROBOTS; OPTIMIZATION;
VISION
AB This paper presents a novel approach for object tracking with a humanoid robot
head. The proposed approach is based on the concept of a virtual mechanism, where
the real head is enhanced with a virtual link that connects the eye with a point in
three-dimensional space. We tested our implementation on a humanoid head with 7
d.o.f. and two rigidly connected cameras in each eye (wide-angle and telescopic).
The experimental results show that the proposed control algorithm can be used to
maintain the view of an observed object in the foveal (telescopic) image using
information from the peripheral view. Unlike other methods proposed in the
literature, our approach indicates how to exploit the redundancy of the robot head.
The proposed technique is systematic and can be easily implemented on different
types of active humanoid heads. The results show good tracking performance
regardless of the distance between the object and the head. Moreover, the
uncertainties in the kinematic model of the head do not affect the performance of
the system. (C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan,
2010
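The way the abstract exploits head redundancy can be related to the standard null-space
resolution of differential kinematics, sketched below with placeholder values. This is a
generic assumption about the technique family, not the authors' virtual-mechanism
controller: the primary (gaze/tracking) task is followed with the Jacobian pseudoinverse
and a secondary posture preference is projected into the task null space.

    import numpy as np

    # Generic redundancy-resolution sketch: qdot = J^+ xdot + (I - J^+ J) qdot0.
    # Jacobian, task velocity and postures below are placeholders.
    np.random.seed(0)
    n = 7                                            # 7-d.o.f. head
    J = np.random.randn(3, n)                        # task Jacobian (3-D gaze point)
    xdot = np.array([0.05, 0.0, -0.02])              # desired task-space velocity
    q = np.random.uniform(-0.5, 0.5, n)              # current joint configuration
    qdot0 = -(q - np.zeros(n))                       # secondary task: drift to rest posture

    J_pinv = np.linalg.pinv(J)
    N_proj = np.eye(n) - J_pinv @ J                  # null-space projector of the task
    qdot = J_pinv @ xdot + N_proj @ qdot0            # track task, use redundancy for posture
    print("joint velocity command:", np.round(qdot, 3))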
C1 [Omrcen, Damir; Ude, Ales] Jozef Stefan Inst, Ljubljana 1000, Slovenia.
[Ude, Ales] ATR Computat Neurosci Labs, Seika, Kyoto 6190288, Japan.
RP Omrcen, D (reprint author), Jozef Stefan Inst, Jamova Cesta 39, Ljubljana 1000,
Slovenia.
EM damir.omrcen@ijs.si
OI Ude, Ales/0000-0003-3677-3972
FU European Commission [FP6-2004-IST-4-027657]
FX The work described in this paper was partially conducted within the EU
Cognitive Systems project PACO-PLUS (FP6-2004-IST-4-027657) funded by
the European Commission.
CR Asfour T., 2006, P IEEE RAS INT C HUM, P169
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Bernardino A, 2002, ROBOT AUTON SYST, V39, P205, DOI 10.1016/S0921-
8890(02)00205-1
Bernardino A, 1999, IEEE T ROBOTIC AUTOM, V15, P1080, DOI 10.1109/70.817671
Breazeal C, 2001, IEEE T SYST MAN CY A, V31, P443, DOI 10.1109/3468.952718
Burghardt G.M., 1991, P53
Carpenter RHS, 1988, MOVEMENTS EYES
CHANG KS, 2000, P IEEE INT C ROB AUT, P850
Frith U., 2003, AUTISM EXPLAINING EN
Gams A, 2009, AUTON ROBOT, V27, P3, DOI 10.1007/s10514-009-9118-y
Hollerbach J. M., 1987, J ROBOTICS AUTOMATIO, V3, P308
KHATIB O, 1986, INT J ROBOT RES, V5, P90, DOI 10.1177/027836498600500106
Metta G, 2004, MECHATRONICS, V14, P989, DOI 10.1016/j.mechatronics.2004.05.003
NEMEC B, 2009, CONT ROBOTICS CHALLE, pCH1
NENCHEV DN, 1989, J ROBOTIC SYST, V6, P769, DOI 10.1002/rob.4620060607
NUMMENMAA T, 1964, STUDIES ED PSYCHOL S, V9
Panerai F, 2000, ROBOT AUTON SYST, V30, P195, DOI 10.1016/S0921-8890(99)00072-X
PARK FC, 1994, IEEE T ROBOTIC AUTOM, V10, P717, DOI 10.1109/70.326576
POVINELLI DJ, 1995, TRENDS NEUROSCI, V18, P418, DOI 10.1016/0166-2236(95)93939-U
Ristau C. A., 1991, NATURAL THEORIES MIN, P209
Scassellati B, 1998, FIFTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE
(AAAI-98) AND TENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICAL INTELLIGENCE
(IAAI-98) - PROCEEDINGS, P969
Shibata T, 2001, ADAPT BEHAV, V9, P189, DOI 10.1177/10597123010093005
SHIU YC, 1989, IEEE T ROBOTIC AUTOM, V5, P16, DOI 10.1109/70.88014
TANG CY, 1998, LECT NOTES COMP SCI, V1351, P632
THAYER S, 1977, DEV PSYCHOL, V13, P673, DOI 10.1037//0012-1649.13.6.673
UDE A, 2009, P 14 INT C ADV ROB I, P1
UDE A, 2006, P IEEE INT C ROB AUT, P2378
YAMATO J, 1998, THESIS DEP ELECT ENG
Yoshikawa T, 1996, LAB ROBOTICS AUTOMAT, V8, P49
Zhang ZY, 2000, IEEE T PATTERN ANAL, V22, P1330, DOI 10.1109/34.888718
Zlajpah L, 2003, ROBOTICA, V21, P633, DOI 10.1017/S0263574703005186
NR 31
TC 4
Z9 4
U1 0
U2 9
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2010
VL 24
IS 15
BP 2171
EP 2197
DI 10.1163/016918610X534303
PG 27
WC Robotics
SC Robotics
GA 685AA
UT WOS:000284595200006
DA 2018-01-22
ER

PT J
AU Yoshida, E
Poirier, M
Laumond, JP
Kanoun, O
Lamiraux, F
Alami, R
Yokoi, K
AF Yoshida, Eiichi
Poirier, Mathieu
Laumond, Jean-Paul
Kanoun, Oussama
Lamiraux, Florent
Alami, Rachid
Yokoi, Kazuhito
TI Pivoting based manipulation by a humanoid robot
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Manipulation; Humanoid; Motion planning; Whole-body motion
ID MOTION; PATHS; CAR
AB In this paper we address whole-body manipulation of bulky objects by a humanoid
robot. We adopt a "pivoting" manipulation method that allows the humanoid to
displace an object without lifting it, using the support of the ground contact. First,
the small-time controllability of pivoting is demonstrated. On this basis, an algorithm
for collision-free pivoting motion planning is established, taking the naturalness of
the motion into account as nonholonomic constraints. Finally, we present a whole-body
motion generation method for a humanoid robot, which is verified by experiments.
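The nonholonomic naturalness constraint mentioned above has, for a planar pose
$(x, y, \theta)$, the same general form as the classical rolling constraint; the
expression below is that standard form, given as an assumed point of reference rather
than the paper's exact constraint set.

    \[
      \dot{x}\,\sin\theta \;-\; \dot{y}\,\cos\theta \;=\; 0,
    \]

i.e., the instantaneous planar velocity is restricted to the current heading direction,
so the object is displaced by sequences of pivoting (rotation about a support point)
rather than by arbitrary sideways sliding.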
C1 [Yoshida, Eiichi] Natl Inst Adv Ind Sci & Technol, CRT, CNRS AIST JRL, UMI3218,
Tsukuba, Ibaraki 3058568, Japan.
[Poirier, Mathieu; Laumond, Jean-Paul; Kanoun, Oussama; Lamiraux, Florent;
Alami, Rachid] Univ Toulouse, CNRS, LAAS, F-31077 Toulouse, France.
[Poirier, Mathieu; Laumond, Jean-Paul; Kanoun, Oussama; Lamiraux, Florent;
Alami, Rachid] ISAE, INP, INSA, UPS, F-31077 Toulouse, France.
[Yokoi, Kazuhito] Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst,
Humanoid Res Grp, Tsukuba, Ibaraki 3058568, Japan.
RP Yoshida, E (reprint author), Natl Inst Adv Ind Sci & Technol, CRT, CNRS AIST
JRL, UMI3218, 1-1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
EM e.yoshida@aist.go.jp; mpoirier@laas.fr; jpl@laas.fr; okanoun@laas.fr;
florent@laas.fr; rachid@laas.fr; kazuhito.yokoi@aist.go.jp
RI Yoshida, Eiichi/M-3756-2016; Yokoi, Kazuhito/K-2046-2012
OI Yoshida, Eiichi/0000-0002-3077-6964; Yokoi, Kazuhito/0000-0003-3942-2027
FU Japan Society for the Promotion of Science (JSPS) [18300070]
FX This research was partially supported by Japan Society for the Promotion
of Science (JSPS) Grant-in-Aid for Scientific Research (B), 18300070,
2006.
CR AIYAMA Y, 1993, P IEEE RSJ INT C INT, P136
Arechavaleta G, 2008, IEEE T ROBOT, V24, P5, DOI 10.1109/TRO.2008.915449
Arisumi H, 2007, IEEE INT CONF ROBOT, P2661, DOI 10.1109/ROBOT.2007.363867
Baerlocher P, 2004, VISUAL COMPUT, V20, P402, DOI 10.1007/s00371-004-0244-4
Bicchi A, 2004, IEEE T AUTOMAT CONTR, V49, P710, DOI 10.1109/TAC.2004.826727
Brock O, 2008, HDB ROBOTICS, P615
Choset H., 2006, PRINCIPLES ROBOT MOT
HARADA H, 2005, P 2005 IEEE INT C RO, P1712
HARADA H, 2004, P IEEE INT C ROB AUT, P616
Hsu D., 1999, Proceedings of the 1999 IEEE International Symposium on Assembly
and Task Planning (ISATP'99) (Cat. No.99TH8470), P280, DOI
10.1109/ISATP.1999.782972
HWANG Y, 2003, P IEEE RSJ INT C INT, P1901
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Laumond JP, 2006, IEEE ROBOT AUTOM MAG, V13, P90, DOI 10.1109/MRA.2006.1638020
LAUMONDED JP, 1998, LECT NOTES CONTROL I, V229
LaValle S. M., 2006, PLANNING ALGORITHM
MAEDA Y, 2004, P 2004 IEEE INT C RO, P2951
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
OKADA K, 2004, P 2004 IEEE RSJ INT, P1174
REEDS JA, 1990, PAC J MATH, V145, P367, DOI 10.2140/pjm.1990.145.367
Siciliano B., 1991, P IEEE INT C ADV ROB, P1211
Simeon T, 2000, ADV ROBOTICS, V14, P477, DOI 10.1163/156855300741960
Soueres P, 1996, IEEE T AUTOMAT CONTR, V41, P672, DOI 10.1109/9.489204
STILMAN M, 2006, P IEEE RSJ INT C INT, P820
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
SUSSMANN H, 1982, PROGR MATH, V27, P1
TAKUBO T, 2004, P IEEE RSJ INT C INT, P1180
YOSHIDA E, 2006, P IEEE RSJ INT C INT, P827
YOSHIDA E, 2009, P 2009 IEEE INT C RO, P2467
YOSHIDA E, 2006, J APPL BIONICS BIOME, V3, P227
Yoshida E, 2008, IEEE T ROBOT, V24, P1186, DOI 10.1109/TRO.2008.2002312
NR 32
TC 16
Z9 16
U1 0
U2 2
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD JAN
PY 2010
VL 28
IS 1
BP 77
EP 88
DI 10.1007/s10514-009-9143-x
PG 12
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA 534MP
UT WOS:000272902100006
DA 2018-01-22
ER

PT J
AU Fukushima, M
Masuda, R
AF Fukushima, Masanori
Masuda, Ryosuke
TI Stimulus Distinction in the Skin of a Robot Using Tactile and Shock
Sensors
SO ELECTRONICS AND COMMUNICATIONS IN JAPAN
LA English
DT Article
DE skin sensor; tactile matrix; shock sensor; distinction of stimulus by
free will
AB Recent developments in the areas of mobile and humanoid robots have resulted in
a marked increase in the potential applications of robotics. Since it is likely
that robots will cohabitate and interact with humans in a variety of situations,
they will need an external skin capable of distinguishing between various stimuli.
The present study investigates the development of robot-skin sensors capable of
recognizing four stimuli that are frequently encountered in everyday life. A
prototype was designed using pressure-sensitive rubber and two types of shock
sensors, and results indicate that the four stimuli can be distinguished based on
differences in pressure and acceleration between stimuli. We describe the
development of the skin sensory system and the mechanism employed to distinguish
between the different stimuli, and present the results of experiments conducted
using the developed system. (C) 2009 Wiley Periodicals, Inc. Electron Comm Jpn,
93(1): 50-57, 2010; Published online in Wiley InterScience
(www.interscience.wiley.com). DOI 10.1002/ecj.10103
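As a rough illustration of how the four stimuli could be separated from the pressure and
acceleration differences mentioned above, the toy rule below thresholds the two
measurements. The stimulus names, thresholds, and feature choices are assumptions for
illustration only, not the recognition mechanism developed in the paper.

    # Toy stimulus classifier over peak pressure and peak acceleration features.
    # Category names and thresholds are hypothetical placeholders.
    def classify_stimulus(pressure_kpa: float, accel_g: float) -> str:
        if accel_g < 0.5:
            return "stroke" if pressure_kpa < 5.0 else "push"   # slow contact
        return "tap" if pressure_kpa < 5.0 else "hit"           # sharp contact

    if __name__ == "__main__":
        for p, a in [(2.0, 0.1), (12.0, 0.2), (3.0, 2.0), (15.0, 3.0)]:
            print((p, a), "->", classify_stimulus(p, a))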
C1 [Fukushima, Masanori; Masuda, Ryosuke] Tokai Univ, Hiratsuka, Kanagawa 25912,
Japan.
RP Fukushima, M (reprint author), Tokai Univ, Hiratsuka, Kanagawa 25912, Japan.
CR FUKUSHIMA M, 2005, RECOGNITION STIMULI
ICHINOSE N, NEW TECHNOLOGY PIEZO
KUWAHARA M, 1968, LECT INFORM SCI
MAENO T, 2002, IEEJ T SENSORS MIC E, V22, P469
MIYASHITA T, 2005, 12 INT S ROB RES
Shibata T., 2006, J ROBOTICS SOC JAPAN, V24, P319
Shinoda H., 2000, J ROBOTICS SOC JAPAN, V18, P767
Zhang H, 2000, IEEE T ROBOTIC AUTOM, V16, P482, DOI 10.1109/70.880799
NR 8
TC 0
Z9 0
U1 0
U2 2
PU SCRIPTA TECHNICA-JOHN WILEY & SONS
PI NEW YORK
PA 605 THIRD AVE, NEW YORK, NY 10158 USA
SN 1942-9533
J9 ELECTR COMMUN JPN
JI Electr. Commun. Jpn.
PD JAN
PY 2010
VL 93
IS 1
BP 50
EP 57
DI 10.1002/ecj.10103
PG 8
WC Engineering, Electrical & Electronic
SC Engineering
GA 543TS
UT WOS:000273602100006
DA 2018-01-22
ER

PT J
AU Zhang, Z
I, K
Miyake, T
Imamura, T
Horihata, S
AF Zhang, Zhong
I, Kazuaki
Miyake, Tetsuo
Imamura, Takashi
Horihata, Satoshi
TI THREE-DIMENSION SOUND LOCALIZATION BY BINAURAL MODEL USING SELF
ORGANIZING MAP
SO INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL
LA English
DT Article
DE Sound localization; Self organizing map; Binaural model; Human
interface; Humanoid robots
AB It is well known that there are many advantages in using two microphones in sound
localization systems, both for assisting the hearing impaired and for humanoid robots.
In this paper, we propose a novel sound localization method based on the binaural model,
in which the feature differences of the signals observed by two microphones are used. In
our method, feature vectors corresponding to the ratio of the observed signal power
spectra between both ears and to the difference between the arrival times of the sound
at both ears are first learned with the Self-Organizing Map (SOM) algorithm, and a SOM
of the 3-D sound source direction is created. The characteristic vector of a sound
source is then input into the SOM; the node whose reference vector is nearest to the
input vector is found by searching for the winning node in the map's reference vector
space, and the sound source direction corresponding to the winning node is output as the
estimated direction. Sounds from five kinds of objects and from many directions were
used to experimentally confirm the effectiveness of the method, and a correct answer
rate of 98.7% was obtained.
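To sketch the SOM-based localization pipeline described above in code, the fragment
below trains a small one-dimensional self-organizing map on two synthetic binaural
features (a level cue and a time-difference cue) generated from known azimuths, then
reports the direction attached to the winning node for a test input. The data model, map
size and learning schedule are assumptions; the original system uses measured
power-spectrum ratios, arrival-time differences and a 3-D direction map.

    import numpy as np

    # Train a toy 1-D SOM on synthetic binaural features (assumed data model).
    rng = np.random.default_rng(1)
    az = rng.uniform(-90.0, 90.0, 400)                       # training azimuths [deg]
    feats = np.column_stack([np.sin(np.radians(az)),         # ~ interaural level cue
                             0.7e-3 * np.sin(np.radians(az))])  # ~ time difference [s]
    feats += 0.01 * rng.standard_normal(feats.shape)

    n_nodes = 25
    W = 0.1 * rng.standard_normal((n_nodes, 2))              # reference vectors
    for t, x in enumerate(feats):
        lr = 0.5 * (1.0 - t / len(feats)) + 0.01             # decaying learning rate
        sigma = 3.0 * (1.0 - t / len(feats)) + 0.5           # decaying neighborhood width
        win = np.argmin(np.linalg.norm(W - x, axis=1))       # winning node
        h = np.exp(-((np.arange(n_nodes) - win) ** 2) / (2.0 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                       # SOM update rule

    # Label each node with the azimuth of its nearest training sample.
    node_az = np.array([az[np.argmin(np.linalg.norm(feats - w, axis=1))] for w in W])
    test = np.array([np.sin(np.radians(30.0)), 0.7e-3 * np.sin(np.radians(30.0))])
    winner = np.argmin(np.linalg.norm(W - test, axis=1))
    print("estimated direction [deg]:", round(float(node_az[winner]), 1))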
C1 [Zhang, Zhong; I, Kazuaki; Miyake, Tetsuo; Imamura, Takashi] Toyohashi Univ
Technol, Instrumentat Syst Lab, Toyohashi, Aichi 4418580, Japan.
[Horihata, Satoshi] Nihon Univ, Phys Lab, Sch Dent Matsudo, Chiba 2718587,
Japan.
RP Zhang, Z (reprint author), Toyohashi Univ Technol, Instrumentat Syst Lab,
Toyohashi, Aichi 4418580, Japan.
EM zhang@is.pse.tut.ac.jp; miyake@is.pse.tut.ac.jp; ima@is.pse.tut.ac.jp;
horihata.satoshi@nihon-u.ac.jp
CR Allen R. L., 2004, SIGNAL ANAL TIME FRE
Benitez-Perez H, 2007, INT J INNOV COMPUT I, V3, P257
Bodden M., 1993, Acta Acustica, V1, P43
HONGO S, 2006, EA200647 IEICE, P31
HORIHATA S, 2006, T JAPAN SOC MECH E C, V72, P3567
Horio K, 2007, INT J INNOV COMPUT I, V3, P789
HYUGA T, 2002, J SOC FUZZY, V14, P74
Kohonen T., 1995, SELF ORG MAPS
LINDEMANN W, 1986, J ACOUST SOC AM, V80, P1608, DOI 10.1121/1.394325
NAGATA K, 2007, EA200774 IEICE, P7
Nakashima H., 2003, Acoustical Science and Technology, V24, P172, DOI
10.1250/ast.24.172
NAKASHIMA H, 2004, P 5 SICE SYST INT AN, P1102
Palmer A. R., 2002, Acoustical Science and Technology, V23, P61, DOI
10.1250/ast.23.61
Sasaki K., 1998, Transactions of the Society of Instrument and Control
Engineers, V34, P1329
TAKADA T, 2006, EA200682 IEICE, P7
TAKASHIMA K, 2004, JAPAN SOC MECH ENG, V107, P964
TOKUTAKA H, 1999, APPL SELF ORG MAPS
Xia MH, 2006, INT J INNOV COMPUT I, V2, P1391
ZHANG Z, 2008, T JSME C, V74, P642
NR 19
TC 2
Z9 2
U1 0
U2 1
PU ICIC INT
PI KUMAMOTO
PA TOKAI UNIV, 9-1-1, TOROKU, KUMAMOTO, 862-8652, JAPAN
SN 1349-4198
J9 INT J INNOV COMPUT I
JI Int. J. Innov. Comp. Inf. Control
PD JAN
PY 2010
VL 6
IS 1
SI SI
BP 361
EP 371
PG 11
WC Computer Science, Artificial Intelligence
SC Computer Science
GA 544IN
UT WOS:000273648900031
DA 2018-01-22
ER

PT J
AU Lee, D
Nakamura, Y
AF Lee, Dongheui
Nakamura, Yoshihiko
TI Mimesis Model from Partial Observations for a Humanoid Robot
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE mimesis model; imitation; partial observation; proto-symbol
ID HIDDEN MARKOV MODEL; PREMOTOR CORTEX; RECOGNITION; IMITATION;
PERFORMANCE; GENERATION; SYSTEM
AB This paper proposes a new mimesis scheme for partial observations, consisting of
two strategies: (1) motion understanding from partial observations and (2) proto-
symbol-based motion duplication. With the proposed method, whole-body motion
imitation is possible even when observing partial motion data. The scheme enables a
humanoid robot to imitate a new observed motion by utilizing its own prior
knowledge, without learning the demonstrated motion. Evaluation factors, such as
inheritance coordinate and matching error, are introduced to evaluate imitation
performance. The feasibility of the proposed scheme is demonstrated by an
evaluation for a 20-degree-of-freedom humanoid robot.
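A simple way to realize the first strategy above, understanding motion from partial
observations within an HMM-based proto-symbol framework, is to evaluate each motion
model's likelihood using only the observed joint dimensions, marginalizing the missing
dimensions of a Gaussian emission model. The sketch below is an assumed, generic
construction (diagonal-covariance Gaussian HMM, hand-rolled forward algorithm in log
space), not the authors' implementation.

    import numpy as np

    def forward_loglik(obs, mask, pi, A, means, variances):
        """Log-likelihood of a partially observed sequence under a Gaussian HMM;
        unobserved dimensions (mask == False) are marginalized out per emission."""
        S = len(pi)
        logA = np.log(A)

        def logemit(x, s):
            return -0.5 * np.sum(np.log(2 * np.pi * variances[s, mask])
                                 + (x[mask] - means[s, mask]) ** 2 / variances[s, mask])

        logalpha = np.log(pi) + np.array([logemit(obs[0], s) for s in range(S)])
        for x in obs[1:]:
            m = logalpha[:, None] + logA                 # transition scores
            c = np.max(m, axis=0)
            logalpha = (np.array([logemit(x, s) for s in range(S)])
                        + c + np.log(np.sum(np.exp(m - c), axis=0)))
        c = np.max(logalpha)
        return c + np.log(np.sum(np.exp(logalpha - c)))

    # Toy use: compare two 3-state proto-symbol models on a 4-dim motion where only
    # joints 0 and 2 are observed (all numbers are illustrative assumptions).
    rng = np.random.default_rng(0)
    pi = np.array([0.98, 0.01, 0.01])
    A = np.array([[0.79, 0.20, 0.01], [0.01, 0.79, 0.20], [0.01, 0.01, 0.98]])
    means_a, means_b = rng.normal(0, 1, (3, 4)), rng.normal(1, 1, (3, 4))
    var = np.ones((3, 4))
    seq = means_a[[0, 0, 1, 1, 2, 2]] + 0.1 * rng.standard_normal((6, 4))
    mask = np.array([True, False, True, False])
    print("model A:", round(forward_loglik(seq, mask, pi, A, means_a, var), 2))
    print("model B:", round(forward_loglik(seq, mask, pi, A, means_b, var), 2))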
C1 [Lee, Dongheui; Nakamura, Yoshihiko] Univ Tokyo, Dept Mechanoinformat, Bunkyo
Ku, Tokyo 1130033, Japan.
RP Lee, D (reprint author), Univ Tokyo, Dept Mechanoinformat, Bunkyo Ku, 7-3-1
Hongo, Tokyo 1130033, Japan.
EM dhlee@ynl.t.u-tokyo.ac.jp; nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102
CR AGARWAL A, 2004, INT C COMP VIS PATT, P882
ASADA H, 1989, IEEE T ROBOTIC AUTOM, V5, P166, DOI 10.1109/70.88037
Bakker P., 1996, AISB96 WORKSH LEARN, P3
Bentivegna D.C., 2000, IEEE RAS INT C HUM R
Bentivegna D.C., 2006, IEEE RSJ INT C INT R, P4994
Billard A, 2001, ROBOT AUTON SYST, V37, P145, DOI 10.1016/S0921-8890(01)00155-5
BILLARD A, 2000, WORKSH INT ROB ENT
BLIMES JA, 1997, ICSITR97021 U BERK
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Chai JX, 2005, ACM T GRAPHIC, V24, P686, DOI 10.1145/1073204.1073248
CHOMAT O, 2000, EUR C COMP VIS, P487
COHEN I, 2002, INT C INF PROC
Davis JW, 1997, PROC CVPR IEEE, P928, DOI 10.1109/CVPR.1997.609439
Deacon T. W., 1997, SYMBOLIC SPECIES COE
Donald M., 1991, ORIGINS MODERN MIND
FOD A, 2000, IEEE RAS INT C HUM R
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
HAYES G, 1994, INT S INT ROB SYST G, P198
Hovland GE, 1996, IEEE INT CONF ROBOT, P2706, DOI 10.1109/ROBOT.1996.506571
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Inamura T., 2003, IEEE RSJ INT C HUM R, p1b
Janus B., 2005, 12 IEEE INT C ADV RO, P411
KANG SB, 1995, IEEE 5 INT C COMP VI, P1093
Kulic D, 2008, IEEE INT CONF ROBOT, P2591, DOI 10.1109/ROBOT.2008.4543603
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
KUNIYOSHI Y, 1994, TR94001 RWC
KURIHARA K, 2002, P IEEE INT C ROB AUT, V2, P1241
Lee D., 2009, IEEE INT C ROB AUT, P1535
Lee D., 2007, IEEE RSJ INT C INT R, P617
LEE D, 2008, IEEE RSJ INT C INT R, P2867
Liu G., 2006, P 2006 S INT 3D GRAP, P35
Liu GD, 2006, VISUAL COMPUT, V22, P721, DOI 10.1007/s00371-006-0080-9
Mataric MJ, 2000, IEEE INTELL SYST APP, V15, P18, DOI 10.1109/5254.867908
Miklosi A, 1999, BIOL REV, V74, P347, DOI 10.1017/S000632319900537X
MORI T, 2004, IEEE INT C AUT FAC G, pTP1
Nicolescu MN, 2001, IEEE T SYST MAN CY A, V31, P419, DOI 10.1109/3468.952716
Ott Ch., 2008, IEEE RAS INT C HUM R, P399
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
RAMANAN D, 2005, IEEE C COMP VIS PATT, P271
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Rosales R., 2000, CVPR, P721
Samejima K., 2002, Transactions of the Institute of Electronics, Information and
Communication Engineers D-II, VJ85D-II, P90
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
Schaal S, 1997, ADV NEUR IN, V9, P1040
Shon AP, 2007, IEEE INT CONF ROBOT, P2847, DOI 10.1109/ROBOT.2007.363903
STONE M, 1974, J R STAT SOC B, V36, P111
TAKANO W, 2005, 19 ANN C JAP SOC ART
Takano W., 2005, 12 IEEE INT C ADV RO, P424
Takano W, 2006, IEEE INT CONF ROBOT, P3602, DOI 10.1109/ROBOT.2006.1642252
TAYLOR CJ, 2000, COMPUTER VISION IMAG, V80, P677
Tokuda K., 2003, EUR C SPEECH COMM TE, P1
UMILTA MA, 2001, NEURON, V32, P91
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
YAMANE K, 2004, SPRINGER TRACTS ADV, V9
Yamato J, 1992, IEEE C COMP VIS PATT, P379
Yang J, 1997, IEEE T SYST MAN CY A, V27, P34, DOI 10.1109/3468.553220
NR 58
TC 19
Z9 19
U1 1
U2 4
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD JAN
PY 2010
VL 29
IS 1
BP 60
EP 80
DI 10.1177/0278364909342282
PG 21
WC Robotics
SC Robotics
GA 543LK
UT WOS:000273578800004
DA 2018-01-22
ER

PT J
AU Sugimoto, N
Morimoto, J
Hyon, SH
Kawato, M
AF Sugimoto, Norikazu
Morimoto, Jun
Hyon, Sang-Ho
Kawato, Mitsuo
TI eMOSAIC model for humanoid robot control
SO NEUROSCIENCE RESEARCH
LA English
DT Meeting Abstract
C1 [Sugimoto, Norikazu] NICT BioICT Grp, Kyoto, Japan.
[Sugimoto, Norikazu; Morimoto, Jun; Hyon, Sang-Ho; Kawato, Mitsuo] ATR CNS,
Kyoto, Japan.
[Hyon, Sang-Ho] Ritsumeikan Univ, Dept Robot, Tokyo, Japan.
NR 0
TC 0
Z9 0
U1 0
U2 0
PU ELSEVIER IRELAND LTD
PI CLARE
PA ELSEVIER HOUSE, BROOKVALE PLAZA, EAST PARK SHANNON, CO, CLARE, 00000,
IRELAND
SN 0168-0102
J9 NEUROSCI RES
JI Neurosci. Res.
PY 2010
VL 68
SU 1
BP E329
EP E330
DI 10.1016/j.neures.2010.07.1461
PG 2
WC Neurosciences
SC Neurosciences & Neurology
GA V24XS
UT WOS:000208443702227
DA 2018-01-22
ER

PT J
AU Miura, N
Sugiura, M
Takahashi, M
Sassa, Y
Miyamoto, A
Sato, S
Horie, K
Nakamura, K
Kawashima, R
AF Miura, Naoki
Sugiura, Motoaki
Takahashi, Makoto
Sassa, Yuko
Miyamoto, Atsushi
Sato, Shigeru
Horie, Kaoru
Nakamura, Katsuki
Kawashima, Ryuta
TI Effect of motion smoothness on brain activity while observing a dance:
An fMRI study using a humanoid robot
SO SOCIAL NEUROSCIENCE
LA English
DT Article
DE Functional magnetic resonance imaging (fMRI); Motion smoothness;
Intersubject variability; Dance; Parieto-frontal network
ID MIRROR NEURON SYSTEM; HUMAN EXTRASTRIATE CORTEX; SUPERIOR TEMPORAL
SULCUS; HUMAN VISUAL-CORTEX; BIOLOGICAL MOTION; GRASP REPRESENTATIONS;
FACIAL EXPRESSIONS; FUSIFORM GYRUS; MOTOR ACTIONS; HUMAN-BODY
AB Motion smoothness is critical in transmitting implicit information of body
action, such as aesthetic qualities in dance performances. We expected that the
perception of motion smoothness would be characterized by great intersubject
variability deriving from differences in personal backgrounds and attitudes toward
expressive body actions. We used functional magnetic resonance imaging and a
humanoid robot to investigate the effects of the motion smoothness of expressive
body actions and the intersubject variability due to personal attitudes on
perceptions during dance observation. The effect of motion smoothness was analyzed
by both conventional subtraction analysis and functional connectivity analyses that
detect cortical networks reflecting intersubject variability. The results showed that
activity in the cortical networks of motion- and body-sensitive visual areas increased
with motion smoothness, but the intersubject variability of personal attitudes toward
art did not influence these active areas. In contrast, activation of cortical networks,
including the parieto-frontal network, shows large intersubject variability, and this
variability is associated with personal attitudes about the consciousness of art. Thus, our
results suggest that activity in the cortical network involved in understanding
action is influenced by personal attitudes about the consciousness of art during
observations of expressive body actions.
C1 [Miura, Naoki] Kochi Univ Technol, Dept Intelligent Mech Syst Engn, Kochi
7828502, Japan.
[Miura, Naoki; Sugiura, Motoaki; Takahashi, Makoto; Sassa, Yuko; Sato, Shigeru;
Horie, Kaoru; Kawashima, Ryuta] Tohoku Univ, Sendai, Miyagi 980, Japan.
[Miura, Naoki; Nakamura, Katsuki] Japan Sci & Technol Agcy, Saitama, Japan.
[Miyamoto, Atsushi] Sony Corp, Tokyo, Japan.
[Nakamura, Katsuki] Natl Ctr Neurol & Psychiat, Tokyo, Japan.
RP Miura, N (reprint author), Kochi Univ Technol, Dept Intelligent Mech Syst Engn,
185 Miyanokuchi,Tosayamada Cho, Kochi 7828502, Japan.
EM miura.naoki@kochi-tech.ac.jp
FU JST/RISTEX
FX This study was performed at Department of Functional Brain Imaging,
IDAC, Tohoku University, Japan. We gratefully thank Yoshihiro Kuroki and
Tomohisa Moridaira for their contribution to preparation of the
experiment and helpful discussion. This research was supported by
JST/RISTEX, R&D promotion scheme for regional proposals promoted by TAO,
JST/CREST, and Research Center for Language, Brain and Cognition,
Graduate School of International Cultural Studies, Tohoku University.
CR Astafiev SV, 2004, NAT NEUROSCI, V7, P542, DOI 10.1038/nn1241
Aziz-Zadeh L, 2006, J NEUROSCI, V26, P2964, DOI 10.1523/JNEUROSCI.2921-05.2006
Binkofski F, 2000, HUM BRAIN MAPP, V11, P273, DOI 10.1002/1097-
0193(200012)11:4<273::AID-HBM40>3.0.CO;2-0
Bonda E, 1996, J NEUROSCI, V16, P3737
Brown S, 2006, CEREB CORTEX, V16, P1157, DOI 10.1093/cercor/bhj057
Buccino G, 2004, BRAIN LANG, V89, P370, DOI 10.1016/S0093-934X(03)00356-0
Buccino G, 2004, J COGNITIVE NEUROSCI, V16, P114, DOI 10.1162/089892904322755601
Buccino G, 2001, EUR J NEUROSCI, V13, P400, DOI 10.1046/j.1460-9568.2001.01385.x
Calvo-Merino B, 2008, CONSCIOUS COGN, V17, P911, DOI
10.1016/j.concog.2007.11.003
Calvo-Merino B, 2005, CEREB CORTEX, V15, P1243, DOI 10.1093/cercor/bhi007
Castelli F, 2000, NEUROIMAGE, V12, P314, DOI 10.1006/nimg.2000.0612
Cela-Conde CJ, 2004, P NATL ACAD SCI USA, V101, P6321, DOI
10.1073/pnas.0401427101
Chaminade T, 2007, SOC COGN AFFECT NEUR, V2, P206, DOI 10.1093/scan/nsm017
Cross ES, 2006, NEUROIMAGE, V31, P1257, DOI 10.1016/j.neuroimage.2006.01.033
de Gelder B, 2006, NAT REV NEUROSCI, V7, P242, DOI 10.1038/nrn1872
Dhamala M, 2003, NEUROIMAGE, V20, P918, DOI 10.1016/S1053-8119(03)00304-5
DIPELLEGRINO G, 1992, EXP BRAIN RES, V91, P176, DOI 10.1007/BF00230027
Downing PE, 2007, J NEUROSCI, V27, P226, DOI 10.1523/JNEUROSCI.3619-06.2007
Downing PE, 2001, SCIENCE, V293, P2470, DOI 10.1126/science.1063414
Ehrsson HH, 2003, J NEUROPHYSIOL, V90, P3304, DOI 10.1152/jn.01113.2002
Grafton ST, 1996, EXP BRAIN RES, V112, P103
Grill-Spector K., 2003, CURR OPIN NEUROBIOL, V13, P1
Grossman ED, 2002, NEURON, V35, P1167, DOI 10.1016/S0896-6273(02)00897-8
GROSSMAN ED, 2000, J COGNITIVE NEUROSCI, V15, P553
HOLMES A, 1998, NEUROIMAGE, V7, P754
Iacoboni M, 2005, PLOS BIOL, V3, P529, DOI 10.1371/journal.pbio.0030079
Iacoboni M, 2001, P NATL ACAD SCI USA, V98, P13995, DOI 10.1073/pnas.241474598
Iacoboni M, 2007, ANN NEUROL, V62, P213, DOI 10.1002/ana.21198
Kanwisher N, 1997, J NEUROSCI, V17, P4302
Kawabata H, 2004, J NEUROPHYSIOL, V91, P1699, DOI 10.1152/jn.00696.2003
KUROKI Y, 2002, P 20 ANN C RSJ
LaBar KS, 2006, NAT REV NEUROSCI, V7, P54, DOI 10.1038/nrn1825
Lotze M, 2006, NEUROPSYCHOLOGIA, V44, P1787, DOI
10.1016/j.neuropsychologia.2006.03.016
Michels L, 2005, NEUROREPORT, V16, P1037, DOI 10.1097/00001756-200507130-00002
OLDFIELD R, 1971, NEUROPSYCHOLOGIA, V9, P812
Peelen MV, 2005, J NEUROPHYSIOL, V93, P603, DOI 10.1152/jn.00513.2004
Perani D, 2001, NEUROIMAGE, V14, P749, DOI 10.1006/nimg.2001.0872
Peuskens H, 2005, EUR J NEUROSCI, V21, P2864, DOI 10.1111/j.1460-
9568.2005.04106.x
Posamentier MT, 2003, NEUROPSYCHOL REV, V13, P113, DOI 10.1023/A:1025519712569
Puce A, 1996, J NEUROSCI, V16, P5205
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Rizzolatti G, 1996, EXP BRAIN RES, V111, P246
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Saxe R, 2004, NEUROPSYCHOLOGIA, V42, P1435, DOI
10.1016/j.neuropsychologia.2004.04.015
Schwarzlose RF, 2005, J NEUROSCI, V25, P11055, DOI 10.1523/JNEUROSCI.2621-
05.2005
Spiridon M, 2006, HUM BRAIN MAPP, V27, P77, DOI 10.1002/hbm.20169
Sprengelmeyer R, 1998, P ROY SOC B-BIOL SCI, V265, P1927, DOI
10.1098/rspb.1998.0522
Sugiura M, 2007, HUM BRAIN MAPP, V28, P49, DOI 10.1002/hbm.20256
Szameitat AJ, 2007, NEUROIMAGE, V34, P702, DOI 10.1016/j.neuroimage.2006.09.033
Thompson JC, 2005, J NEUROSCI, V25, P9059, DOI 10.1523/JNEUROSCI.2129-05.2005
Tootell RBH, 1997, J NEUROSCI, V17, P7060
Wei XC, 2004, NEUROIMAGE, V21, P1000, DOI 10.1016/j.neuroimage.2003.10.039
Zacks JM, 2001, NAT NEUROSCI, V4, P651, DOI 10.1038/88486
NR 53
TC 18
Z9 18
U1 0
U2 7
PU PSYCHOLOGY PRESS
PI HOVE
PA 27 CHURCH RD, HOVE BN3 2FA, EAST SUSSEX, ENGLAND
SN 1747-0919
J9 SOC NEUROSCI-UK
JI Soc. Neurosci.
PY 2010
VL 5
IS 1
BP 40
EP 58
AR PII 912970478
DI 10.1080/17470910903083256
PG 19
WC Neurosciences; Psychology
SC Neurosciences & Neurology; Psychology
GA 597HZ
UT WOS:000277748800004
PM 19585386
DA 2018-01-22
ER

PT J
AU Kulic, D
Nakamura, Y
AF Kulic, Dana
Nakamura, Yoshihiko
TI Incremental Learning and Memory Consolidation of Whole Body Human Motion
Primitives
SO ADAPTIVE BEHAVIOR
LA English
DT Article
DE incremental learning; motion primitives; whole body motions; humanoid
robots
ID GOAL-DIRECTED IMITATION; MIRROR-NEURON SYSTEM; HIDDEN MARKOV-MODELS;
ACTION RECOGNITION; HUMAN-PERFORMANCE; MENTAL-IMAGERY; MOTOR MEMORY;
ROBOT; PERSPECTIVE; TASK
AB The ability to learn during continuous and on-line observation would be
advantageous for humanoid robots, as it would enable them to learn during co-
location and interaction in the human environment. However, when motions are being
learned and clustered on-line, there is a trade-off between classification accuracy
and the number of training examples, resulting in potential misclassifications at
both the motion level and the hierarchy formation level. This article presents an approach
enabling fast on-line incremental learning, combined with an incremental memory
consolidation process correcting initial misclassifications and errors in
organization, to improve the stability and accuracy of the learned motions,
analogous to the memory consolidation process following motor learning observed in
humans. Following initial organization, motions are randomly selected for
reclassification, at both low and high levels of the hierarchy. If a better
reclassification is found, the knowledge structure is reorganized to comply. The
approach is validated during incremental acquisition of a motion database
containing a variety of full body motions.
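A minimal Python sketch of the reclassification-based consolidation step summarized
above, assuming a flat list-of-lists group structure and a placeholder score()
function in place of the article's stochastic motion models; it illustrates the
idea only and is not the authors' implementation.

    import random

    # Sketch of memory consolidation: stored motion exemplars are periodically
    # re-evaluated and moved to a better-fitting group when one is found.
    # score(motion, group) is a placeholder for the model fit used in the article.
    def consolidate(groups, score, n_rounds=100):
        for _ in range(n_rounds):
            gi = random.randrange(len(groups))
            if not groups[gi]:
                continue
            motion = random.choice(groups[gi])
            fits = [score(motion, g) for g in groups]
            best = max(range(len(groups)), key=lambda j: fits[j])
            if best != gi and fits[best] > fits[gi]:
                groups[gi].remove(motion)      # reorganize the knowledge structure
                groups[best].append(motion)    # to comply with the better fit
        return groups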
C1 [Kulic, Dana] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON N2L 3G1,
Canada.
[Nakamura, Yoshihiko] Univ Tokyo, Dept Mechano Informat, Bunkyo Ku, Tokyo
1138656, Japan.
RP Kulic, D (reprint author), Univ Waterloo, Dept Elect & Comp Engn, 200 Univ Ave
W, Waterloo, ON N2L 3G1, Canada.
EM dkulic@ece.uwaterloo.ca
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102; Kulic, Dana/0000-0002-4169-2141
FU Japanese Society for the Promotion of Science [18.06754]; Category S
Grant-in-Aid for Scientific Research [20220001]
FX This work is supported by the Japanese Society for the Promotion of
Science grant 18.06754 and Category S Grant-in-Aid for Scientific
Research 20220001.
CR AGGARWAL CC, 2003, P 29 INT C VER LARG, P81, DOI DOI 10.1016/B978-012722442-
8/50016-1
Alissandrakis A, 2002, IEEE T SYST MAN CY A, V32, P482, DOI
10.1109/TSMCA.2002.804820
ALISSANDRAKIS A, 2002, P 5 GERM WORKSH ART, P143
Alissandrakis A, 2007, IEEE T SYST MAN CY B, V37, P299, DOI
10.1109/TSMCB.2006.886947
Bekkering H, 2004, Q J EXP PSYCHOL-A, V57, P1345, DOI 10.1080/02724980343000765
Bennewitz M, 2005, INT J ROBOT RES, V24, P31, DOI 10.1177/0278364904048962
Bentivegna D. C., 2006, P 2006 IEEE RSJ INT, P2677
Bernardin K, 2005, IEEE T ROBOT, V21, P47, DOI 10.1109/TRO.2004.833816
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Bille P, 2005, THEOR COMPUT SCI, V337, P217, DOI 10.1016/j.tcs.2004.12.030
Breazeal C, 2006, ROBOT AUTON SYST, V54, P385, DOI 10.1016/j.robot.2006.02.004
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Byrne Richard W., 1999, Animal Cognition, V2, P63, DOI 10.1007/s100710050025
Byrne RW, 1998, BEHAV BRAIN SCI, V21, P667, DOI 10.1017/S0140525X98001745
Calinon Sylvain, 2007, P ACM IEEE INT C HUM, P255, DOI DOI
10.1145/1228716.1228751
Calinon S., 2007, P IEEE INT C ROB HUM, P702
Calinon S, 2007, IMITATION AND SOCIAL LEARNING IN ROBOTS, HUMANS AND ANIMALS:
BEHAVIOURAL, SOCIAL AND COMMUNICATIVE DIMENSIONS, P153, DOI
10.1017/CBO9780511489808.012
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Chaminade T, 2002, NEUROIMAGE, V15, P318, DOI 10.1006/nimg.2001.0981
DEHAENE S, 1987, P NATL ACAD SCI USA, V84, P2727, DOI 10.1073/pnas.84.9.2727
DEMIRIS Y, 2006, J ROBOTICS AUTONOMOU, V54, P361
Diekelmann S, 2007, NAT NEUROSCI, V10, P1085, DOI 10.1038/nn0907-1085
Dillmann R, 2004, ROBOT AUTON SYST, V47, P109, DOI 10.1016/j.robot.2004.03.005
Dillmann R., 1999, P INT S ROB RES, P229
Dominey P. F., 2008, P IEEE INT C HUM ROB, P693
DRISKELL JE, 1994, J APPL PSYCHOL, V79, P481, DOI 10.1037/0021-9010.79.4.481
Ekvall S, 2006, IEEE T ROBOT, V22, P1029, DOI 10.1109/TRO.2006.878976
Erlhagen W, 2006, ROBOT AUTON SYST, V54, P353, DOI 10.1016/j.robot.2006.01.004
ERLHAGEN W, 2007, P 6 IEEE INT C DEV L, P140
Fisher D. H., 1987, Machine Learning, V2, P139, DOI 10.1023/A:1022852608280
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
Hayes SJ, 2008, ACTA PSYCHOL, V127, P407, DOI 10.1016/j.actpsy.2007.07.009
Heyes C, 2001, TRENDS COGN SCI, V5, P253, DOI 10.1016/S1364-6613(00)01661-2
Heyes CM, 2000, ADV STUD BEHAV, V29, P215, DOI 10.1016/S0065-3454(08)60106-0
Ho MAT, 2005, IEEE T ROBOT, V21, P497, DOI 10.1109/TRO.2004.840912
Iba S, 2005, INT J ROBOT RES, V24, P83, DOI 10.1177/0278364904049250
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Jackson PL, 2001, ARCH PHYS MED REHAB, V82, P1133, DOI 10.1053/apmr.2001.24286
Jain AK, 1999, ACM COMPUT SURV, V31, P264, DOI 10.1145/331499.331504
JEANNEROD M, 1995, NEUROPSYCHOLOGIA, V33, P1419, DOI 10.1016/0028-3932(95)00073-
C
JENKINS OC, 2004, P 21 INT C MACH LEAR, P441
Kadone H., 2005, P 2005 IEEE RSJ INT, P2900
KANG SB, 1993, IEEE T ROBOTIC AUTOM, V9, P432, DOI 10.1109/70.246054
Keysers C, 2004, TRENDS COGN SCI, V8, P501, DOI 10.1016/j.tics.2004.09.005
Kragic D, 2005, INT J ROBOT RES, V24, P731, DOI 10.1177/0278364905057059
Krakauer JW, 2006, TRENDS NEUROSCI, V29, P58, DOI 10.1016/j.tins.2005.10.003
Kruger V, 2007, ADV ROBOTICS, V21, P1473
KULIC D, 2008, P IEEE RJS INT C INT, P2860
Kulic D, 2007, P IEEE INT C INT ROB, P2388
KULIC D, 2008, P IEEE INT C HUM ROB, P326
KULIC D, 2008, P INT C EP ROB, P61
KULIC D, 2008, P IEEE INT C ROB AUT, P2591
KULIC D, 2007, P INT C EP ROB, P69
Kulic D, 2008, INT J ROBOT RES, V27, P761, DOI 10.1177/0278364908091153
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Kuniyoshi Y., 1989, P INT S IND ROB, P119
KURIHARA K, 2002, P IEEE INT C ROB AUT, V2, P1241
LEE D, 2005, P INT C INT ROB SYST, P1911
Liu KP, 2004, ARCH PHYS MED REHAB, V85, P1403, DOI 10.1016/j.apmr.2003.12.035
LIU S, 1992, J DYN SYST-T ASME, V114, P220, DOI 10.1115/1.2896518
Lockerd A., 2004, P IEEE RSJ INT C INT, P3475
MCCLELLAND JL, 1995, PSYCHOL REV, V102, P419, DOI 10.1037//0033-295X.102.3.419
Meltzoff A. N., 2002, BLACKWELL HDB CHILDH, P6, DOI DOI
10.1002/9780470996652.CH1
MELTZOFF AN, 2005, PERSPECTIVES IMITATI, V2, P55
MUSSAIVALDI FA, 1985, J NEUROSCI, V5, P2732
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nicolescu MN, 2001, IEEE T SYST MAN CY A, V31, P419, DOI 10.1109/3468.952716
NICOLESCU MN, 2005, IMITATION SOCIAL LEA
Ogata T, 2005, ADV ROBOTICS, V19, P651, DOI 10.1163/1568553054255655
Oztop E, 2002, BIOL CYBERN, V87, P116, DOI 10.1007/s00422-002-0318-1
Payne JD, 2004, LEARN MEMORY, V11, P671, DOI 10.1101/lm.77104
PETROVSKAYA A, 2007, P 20 INT JOINT C ART, P2178
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
RODRIGUES PP, 2008, IEEE T KNOW IN PRESS
Rosenbaum DA, 2001, PSYCHOL REV, V108, P709, DOI 10.1037//0033-295X.108.4.709
Rosenbaum DA, 2001, MOTOR CONTROL, V5, P99, DOI 10.1123/mcj.5.2.99
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
Shadmehr R, 1997, SCIENCE, V277, P821, DOI 10.1126/science.277.5327.821
SHASHA D, 1997, PATTERN MATCHING ALG, P341
STARNER T, 1995, P INT WORKSH AUT FAC, P189
Stickgold R, 2005, NATURE, V437, P1272, DOI 10.1038/nature04286
TAKANO W, 2005, P INT C ADV ROB TOR, P424
Takano W, 2006, IEEE INT CONF ROBOT, P3602, DOI 10.1109/ROBOT.2006.1642252
Taylor G. W., 2006, P C NEUR INF PROC SY, P1345
Wohlschlager A, 2003, PHILOS T R SOC B, V358, P501, DOI 10.1098/rstb.2002.1257
Yang J, 1997, IEEE T SYST MAN CY A, V27, P34, DOI 10.1109/3468.553220
YUE G, 1992, J NEUROPHYSIOL, V67, P1114
Zhang T, 1996, P 1996 ACM SIGMOD IN, P103, DOI DOI 10.1145/233269.233324
NR 90
TC 4
Z9 4
U1 0
U2 3
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 1059-7123
EI 1741-2633
J9 ADAPT BEHAV
JI Adapt. Behav.
PD DEC
PY 2009
VL 17
IS 6
BP 484
EP 507
DI 10.1177/1059712309342487
PG 24
WC Computer Science, Artificial Intelligence; Psychology, Experimental;
Social Sciences, Interdisciplinary
SC Computer Science; Psychology; Social Sciences - Other Topics
GA 521TK
UT WOS:000271946800002
DA 2018-01-22
ER

PT J
AU Fujita, M
AF Fujita, Masahiro
TI Intelligence Dynamics: a concept and preliminary experiments for
open-ended learning agents
SO AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS
LA English
DT Article
DE Open-ended; Dynamics; Embodiment; Prediction; Intelligence Dynamics;
Recurrent neural networks; Intrinsic motivation; Flow theory
ID OBJECT HANDLING BEHAVIORS; NEURAL-NETWORK MODEL; GENERATION
AB We propose a novel approach that aims to realize autonomous developmental
intelligence, called Intelligence Dynamics. We emphasize two technical features,
dynamics and embodiment, in comparison with the symbolic approach of conventional
Artificial Intelligence. The essential conceptual idea of this approach is that an
embodied agent interacts with the real world to learn and develop its intelligence
as attractors of the dynamic interaction. We develop two computational models: one
for self-organizing multiple attractors, and the other providing a motivational
system for open-ended learning agents. The former is realized by recurrent neural
networks with a small humanoid body in the real world, and the latter by
hierarchical support vector machines with inverted pendulum agents in a virtual
world. Although they are preliminary experiments, they
take important first steps towards demonstrating the feasibility and value of open-
ended learning agents with the concept of Intelligence Dynamics.
C1 Sony Corp, Syst Technol Labs, Shinagawa Ku, Tokyo, Japan.
RP Fujita, M (reprint author), Sony Corp, Syst Technol Labs, Shinagawa Ku, Tokyo,
Japan.
EM masahirof@jp.sony.com
CR ALLEN GI, 1974, PHYSIOL REV, V54, P957
Asada M, 2001, ROBOT AUTON SYST, V37, P185, DOI 10.1016/S0921-8890(01)00157-9
Barto A., 2004, P 3 INT C DEV LEARN, P112
Bentivegna DC, 2004, ROBOT AUTON SYST, V47, P163, DOI
10.1016/j.robot.2004.03.010
Bentivegna DC, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT
ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2449, DOI 10.1109/IRDS.2002.1041635
Bernstein N, 1967, COORDINATION REGULAT
Borg I.I., 1997, MODERN MULTIDIMENSIO
Charniak E., 1985, INTRO ARTIFICIAL INT
CSIKSZENTMIHALYI M., 1990, FLOW PSYCHOL OPTIMAL
Donald M., 2011, ORIGIN MODERN MIND
Doya K, 2002, NEURAL COMPUT, V14, P1347, DOI 10.1162/089976602753712972
Fujita M, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P960
HARNAD S, 1990, PHYSICA D, V42, P335, DOI 10.1016/0167-2789(90)90087-6
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Ito M, 2006, NEURAL NETWORKS, V19, P323, DOI 10.1016/j.neunet.2006.02.007
JORDAN MI, 1992, COGNITIVE SCI, V16, P307, DOI 10.1016/0364-0213(92)90036-T
Kaplan F., 2003, P 3 INT WORKSH EP RO, P73
Kohonen T., 1997, SELF ORG MAPS
Kuniyoshi Y, 2004, ROBOT AUTON SYST, V48, P189, DOI 10.1016/j.robot.2004.07.004
Ma JS, 2003, NEURAL COMPUT, V15, P2683, DOI 10.1162/089976603322385117
MINAMINO K, 2008, INTELLIGENCE DYNAMIC, V3
NEWELL A, 1976, COMMUN ACM, V19, P113, DOI 10.1145/360018.360022
Noda K, 2006, LECT NOTES COMPUT SC, V4095, P185
Pfeifer R., 1999, UNDERSTANDING INTELL
Reed Edward S, 1997, SOUL MIND EMERGENCE
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Rumelhart D. E., 1986, PARALLEL DISTRIBUTED
RUSSELL R, 2002, ARTIFICIAL INTELLIGE
SABE K, 2005, INTELLIGENCE DYNAMIC, V2
SABE K, 2006, P INT WORKSH INT DYN
Scholkopf Bernhard, 2001, LEARNING KERNELS SUP
Sutton RS, 1998, REINFORCEMENT LEARNI
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
Vijayakumar S, 2000, P 17 INT C MACH LEAR, P1079
NR 35
TC 0
Z9 0
U1 0
U2 4
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1387-2532
EI 1573-7454
J9 AUTON AGENT MULTI-AG
JI Auton. Agents Multi-Agent Syst.
PD DEC
PY 2009
VL 19
IS 3
SI SI
BP 248
EP 271
DI 10.1007/s10458-009-9076-y
PG 24
WC Automation & Control Systems; Computer Science, Artificial Intelligence
SC Automation & Control Systems; Computer Science
GA 501LQ
UT WOS:000270383800002
OA gold
DA 2018-01-22
ER

PT J
AU Michiwaki, Y
Kobayashi, H
AF Michiwaki, Yukihiro
Kobayashi, H.
TI DEVELOPMENT OF HUMANOID ROBOT REPRODUCING A NORMAL SWALLOW
SO DYSPHAGIA
LA English
DT Meeting Abstract
C1 [Michiwaki, Yukihiro] Musashino Red Cross Hosp, Tokyo, Japan.
[Kobayashi, H.] Tokyo Univ Sci, Tokyo 162, Japan.
NR 0
TC 1
Z9 1
U1 0
U2 0
PU SPRINGER
PI NEW YORK
PA 233 SPRING ST, NEW YORK, NY 10013 USA
SN 0179-051X
J9 DYSPHAGIA
JI Dysphagia
PD DEC
PY 2009
VL 24
IS 4
BP 464
EP 464
PG 1
WC Otorhinolaryngology
SC Otorhinolaryngology
GA 534QB
UT WOS:000272911100060
DA 2018-01-22
ER

PT J
AU Sakamoto, H
Katayose, H
Miyazaki, K
Nakatsu, R
AF Sakamoto, Hajime
Katayose, Haruhiro
Miyazaki, Koji
Nakatsu, Ryohei
TI EXTENDED-KNEE WALK FOR HUMANOID ROBOT WITH PARALLEL LINK LEGS
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Walk; humanoid robot; extended-knee walk
ID PASSIVE DYNAMIC WALKING
AB This paper proposes a method of giving humanoid robots a natural humanlike walk,
which we call the extended-knee walk. Unlike the bent-knee walk of most humanoid
robots to date, this walk includes a period in which the knee is fully extended. A
parallel mechanism is used in the legs, and a method of calculating the walk
trajectory copes with the singularity problem that arises in achieving a humanlike
walk. The advantages of this walk were verified from two aspects: good visual
appearance and good energy efficiency. An experiment comparing the trajectories of
the knee angle during walking showed that the walking style produced by the
proposed method is more humanlike than the usual walking style of other humanoid
robots. The energy efficiency was verified through power consumption and motor
temperature measurements and the possibilities for practical use of this method are
discussed with reference to the results of the worldwide soccer competition RoboCup
2008.
C1 [Sakamoto, Hajime; Katayose, Haruhiro] Kwansei Gakuin Univ, Grad Sch Sci &
Technol, Sanda, Hyogo 6691337, Japan.
[Miyazaki, Koji] Kyoto Univ, Acad Ctr Comp & Media Studies, Sakyo Ku, Kyoto
6068501, Japan.
[Nakatsu, Ryohei] Natl Univ Singapore, Interact & Digital Media Inst, Singapore
119613, Singapore.
RP Sakamoto, H (reprint author), Kwansei Gakuin Univ, Grad Sch Sci & Technol, 2-1
Gakuen, Sanda, Hyogo 6691337, Japan.
EM sakamoto@hajimerobot.co.jp; katayose@kwansei.ac.jp;
miyazaki@it.is.konan-u.ac.jp; idmdir@nus.edu.sg
CR Asano F, 2005, IEEE T ROBOT, V21, P754, DOI 10.1109/TRO.2005.847610
BEHNKE S, 2007, IEEE INT C HUM ROB H
Chevallereau C, 2003, IEEE CONTR SYST MAG, V23, P57, DOI
10.1109/MCS.2003.1234651
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Gienger M, 2001, IEEE INT CONF ROBOT, P4140, DOI 10.1109/ROBOT.2001.933265
HIROSE M, 2001, IEEE INT C INT ROB S
HOBBELEN D, 2008, IEEE RSJ INT C INT R, P2486
HU L, 2007, IEEE INT C HUM ROB H
Kaneko K., 2002, IEEE INT C ROB AUT I, P38
Kinugasa T, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P346
Kurazame R., 2005, IEEE INT C INT ROB S, P337
KUROKI Y, 2003, IEEE INT C INT ROB S, P27
Li Q. H., 1992, IEEE RSJ INT S INT R, P597
MINAKATA H, 2008, IEEE INT WORKSH ADV
Nagasaka K, 2004, IEEE INT CONF ROBOT, P3189, DOI 10.1109/ROBOT.2004.1308745
Nakaoka S, 2004, IEEE INT CONF ROBOT, P610, DOI 10.1109/ROBOT.2004.1307216
Nakaoka S., 2005, IEEE RSJ INT C INT R, P2769
Nishio Shuichi, 2007, HUMANOID ROBOTS NEW
NISHIWAKI K, 2002, IEEE INT C ROB AUT I, P3015
Norman D., 2007, DESIGN FUTURE THINGS
Ogura Y, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P352
PARK IW, 2005, INT J HUM ROBOT, V2, P5159
SAKAMOTO H, 2008, INT S ART LIF ROB AR
SAKAMOTO H, 2007, INT C MECH INF TECHN
TAJIMA R, 2008, 26 ANN C ROB SOC JAP, P201
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
VUKOBRATOVIC M, 2008, INT J HUM ROBOT, V1, P119
Vukobratovic M, 2006, INT J HUM ROBOT, V3, P153, DOI 10.1142/S0219843606000710
Wisse M, 2004, ROBOTICA, V22, P681, DOI 10.1017/S0263574704000475
NR 29
TC 5
Z9 5
U1 0
U2 3
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD DEC
PY 2009
VL 6
IS 4
BP 565
EP 584
DI 10.1142/S0219843609001917
PG 20
WC Robotics
SC Robotics
GA 541OW
UT WOS:000273426800002
DA 2018-01-22
ER

PT J
AU Ohashi, E
Sato, T
Ohnishi, K
AF Ohashi, Eijiro
Sato, Tomoya
Ohnishi, Kouhei
TI A Walking Stabilization Method Based on Environmental Modes on Each Foot
for Biped Robot
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE Biped robot; environmental adaptation; environmental mode; Hadamard
matrix; walking; zero moment point (ZMP)
ID HUMANOID ROBOT; MOTION; GAIT; MANIPULATION; FORCE
AB In this paper, a method that makes biped robots track zero-moment-point (ZMP)
reference trajectories is proposed. The biped robots have flat rectangular soles
with force sensors at each corner. The sensors detect reaction force from the
ground. The sensor information is transformed into a useful structure, defined here
as "environmental modes." The environmental modes represent the contact states
between the ground and the soles and are extracted from the readings of the four
force sensors at the corners of each rectangular sole. They consist of the
following four modes: 1) heaving; 2) rolling; 3) pitching; and 4) twisting. These
modes are related to the ZMP, by which walking stability is evaluated; therefore,
command values for the environmental modes can be derived from the ZMP reference
trajectory, and by tracking these command values the walking motion becomes more
stable. The conventional study on controlling the environmental modes took only the
rolling and pitching modes into account, which is not sufficient to stabilize the
walking motion because the heaving mode, which is also very important for
stability, was neglected. Therefore, in this paper, we extend the conventional
method to also consider the heaving mode when tracking the ZMP reference
trajectory. The validity of the proposed method was confirmed by experimental
results.
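As a rough illustration of the mode extraction described above (not the authors'
implementation), the four corner force readings of one sole can be mapped to the
heaving, rolling, pitching, and twisting components with a Hadamard-type matrix;
the corner ordering and the example values below are assumptions.

    import numpy as np

    # Hypothetical corner ordering: [front-left, front-right, rear-left, rear-right].
    # Each row of the Hadamard-type matrix extracts one environmental mode from the
    # four corner forces of one sole.
    MODE_MATRIX = np.array([
        [1,  1,  1,  1],   # heaving: total normal force
        [1, -1,  1, -1],   # rolling: left vs. right
        [1,  1, -1, -1],   # pitching: front vs. rear
        [1, -1, -1,  1],   # twisting: diagonal imbalance
    ])

    def environmental_modes(corner_forces):
        """Map four corner force readings to the four environmental modes."""
        return MODE_MATRIX @ np.asarray(corner_forces, dtype=float)

    # Example: a foot loaded slightly toward the front-left corner.
    print(environmental_modes([120.0, 100.0, 95.0, 85.0]))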
C1 [Ohashi, Eijiro; Sato, Tomoya; Ohnishi, Kouhei] Keio Univ, Dept Syst Design
Engn, Ohnishi Lab, Yokohama, Kanagawa 2238522, Japan.
RP Ohashi, E (reprint author), Keio Univ, Dept Syst Design Engn, Ohnishi Lab,
Yokohama, Kanagawa 2238522, Japan.
EM ei2ro@sum.sd.keio.ac.jp; sat@sum.sd.keio.ac.jp; ohnishi@sd.keio.ac.jp
RI Ohnishi, Kouhei/A-6543-2011
CR ERBATUR L, 2006, P 32 ANN C IEEE IND, P4100
Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
FURUSHO J, 1990, INT J ROBOT RES, V9, P83, DOI 10.1177/027836499000900207
Harada K., 2004, P IEEE RAS INT C HUM, V2, P640, DOI DOI
10.1109/ICHR.2004.1442676
Harada K, 2007, IEEE-ASME T MECH, V12, P53, DOI 10.1109/TMECH.2006.886254
Harada K, 2006, IEEE T ROBOT, V22, P568, DOI 10.1109/TRO.2006.870649
Hashimoto K, 2007, IEEE INT CONF ROBOT, P1869, DOI 10.1109/ROBOT.2007.363594
Hashimoto K, 2006, IEEE INT CONF ROBOT, P1213, DOI 10.1109/ROBOT.2006.1641874
Hirukawa H, 2007, IEEE INT CONF ROBOT, P2181, DOI 10.1109/ROBOT.2007.363644
Kagami S, 2001, IEEE INT CONF ROBOT, P2431, DOI 10.1109/ROBOT.2001.932986
Kajita S., 2006, P IEEE RSJ INT C INT, P2993
Kuffner JJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P500, DOI
10.1109/IROS.2001.973406
KURAZUME R, 2003, P IEEE INT C ROB AUT, P925
Loffler K, 2004, IEEE T IND ELECTRON, V51, P972, DOI 10.1109/TIE.2004.834948
Minakata H, 2008, IEEE T IND ELECTRON, V55, P1271, DOI 10.1109/TIE.2007.911919
MITOBE K, 2004, P IEEE RSJ INT C INT, P2253
MIURA H, 1984, INT J ROBOT RES, V3, P60, DOI 10.1177/027836498400300206
Morisawa M., 2003, P 29 ANN C IEEE IND, P490
Motoi N, 2007, IEEE T IND INFORM, V3, P154, DOI 10.1109/TII.2007.898469
Nishiwaki K., 2002, P IEEE INT C ROB AUT, V3, P3105
NISHIWAKI K, 2007, P IEEE RSJ INT C INT, P4214
Oh Y., 2006, P IEEE INT C IND EL, P4159
Ohashi E, 2008, AMC '08: 10TH INTERNATIONAL WORKSHOP ON ADVANCED MOTION CONTROL,
VOLS 1 AND 2, PROCEEDINGS, P306
Ohashi E, 2007, IEEE T IND ELECTRON, V54, P1632, DOI 10.1109/TIE.2007.894728
Ohnishi K, 1996, IEEE-ASME T MECH, V1, P56, DOI 10.1109/3516.491410
Okumura Y, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P335
SANO S, 2008, P 10 INT WORKSH AMC, P480
SELLAOUTI R, 2006, P IEEE RSJ INT C INT, P4909
Shibuya M., 2006, P 32 ANN C IEEE IND, P4094
SHIEH MY, 2007, P 33 ANN C IEEE IECO, P2778
SUGIHARA T, 2005, P IEEE INT C ROB AUT, P305
SUGIHARA T, 2007, P IEEE C INT ROB SYS, P444
SUZUKI J, 2002, P 6 INT WORKSH ADV M, P69
Suzuki T., 2006, P IEEE INT C IND TEC, P1332, DOI DOI 10.1109/ICIT.2006.372545
Tsuji T., 2002, P 7 INT WORKSH ADV M, P478
VERRELST B, 2006, P IEEE ICMA, P1072
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
WONG CC, 2007, P 33 ANN C IEEE IECO, P451
ZHU C, 2006, P INT C ROB BIOM, P425
NR 39
TC 15
Z9 15
U1 1
U2 7
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD OCT
PY 2009
VL 56
IS 10
BP 3964
EP 3974
DI 10.1109/TIE.2009.2024098
PG 11
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 494TC
UT WOS:000269838700022
DA 2018-01-22
ER

PT J
AU Kulic, D
Takano, W
Nakamura, Y
AF Kulic, Dana
Takano, Wataru
Nakamura, Yoshihiko
TI Online Segmentation and Clustering From Continuous Observation of Whole
Body Motions
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Humanoid robots; incremental learning; learning from observation; motion
segmentation and clustering
ID HIDDEN MARKOV-MODELS; IMITATION; ROBOTS; TASK
AB This paper describes a novel approach for incremental learning of human motion
pattern primitives through online observation of human motion. The observed time
series data stream is first stochastically segmented into potential motion
primitive segments, based on the assumption that data belonging to the same motion
primitive will have the same underlying distribution. The motion segments are then
abstracted into a stochastic model representation and automatically clustered and
organized. As new motion patterns are observed, they are incrementally grouped
together into a tree structure, based on their relative distance in the model
space. The tree leaves, which represent the most specialized learned motion
primitives, are then passed back to the segmentation algorithm so that as the
number of known motion primitives increases, the accuracy of the segmentation can
also be improved. The combined algorithm is tested on a sequence of continuous
human motion data obtained through motion capture, demonstrating the performance of
the proposed approach.
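A minimal sketch of the incremental grouping idea summarized above; Euclidean
distances between feature vectors stand in for the paper's distances between
stochastic motion models in model space, and the threshold is an assumed tuning
parameter.

    import numpy as np

    # Sketch of incremental clustering: each newly observed motion segment joins
    # the closest existing group, or seeds a new group when no group is close enough.
    class IncrementalMotionClusterer:
        def __init__(self, threshold):
            self.threshold = threshold
            self.groups = []                      # list of lists of feature vectors

        def observe(self, segment_features):
            x = np.asarray(segment_features, dtype=float)
            if self.groups:
                centroids = [np.mean(g, axis=0) for g in self.groups]
                dists = [np.linalg.norm(x - c) for c in centroids]
                best = int(np.argmin(dists))
                if dists[best] < self.threshold:
                    self.groups[best].append(x)   # joins an existing group
                    return best
            self.groups.append([x])               # otherwise start a new group
            return len(self.groups) - 1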
C1 [Kulic, Dana; Takano, Wataru; Nakamura, Yoshihiko] Univ Tokyo, Dept
Mechanoinformat, Tokyo 1138656, Japan.
RP Kulic, D (reprint author), Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON
N2L 3G1, Canada.
EM dkulic@ece.uwaterloo.ca; takano@ynl.t.u-tokyo.ac.jp;
nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102; Kulic, Dana/0000-0002-4169-2141
FU Japanese Society for the Promotion of Science [18.06754]; Category S
Grant-in-Aid for Scientific Research [15100002]
FX Manuscript received June 4, 2008; revised February 4, 2009. First
published July 28, 2009; current version published October 9, 2009. This
paper was recommended for publication by Associate Editor F. Lamiraux
and Editor J.-P. Laumond upon evaluation of the reviewers' comments.
This work was supported by the Japanese Society for the Promotion of
Science under Grant 18.06754 and by the Category S Grant-in-Aid for
Scientific Research under Grant 15100002. This paper was presented in
part at the 2008 IEEE International Conference on Robotics and
Automation and in part at the 2008 IEEE/Robotics Society of Japan
International Conference on Intelligent Robots and Systems.
CR Asfour T., 2006, P IEEE RAS INT C HUM, P40
Bernardin K, 2005, IEEE T ROBOT, V21, P47, DOI 10.1109/TRO.2004.833816
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Dillmann R., 1999, P INT S ROB RES, P229
Dixon KR, 2004, INT J ROBOT RES, V23, P955, DOI 10.1177/0278364904044401
Fod A, 2002, AUTON ROBOT, V12, P39, DOI 10.1023/A:1013254724861
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Ilg W., 2004, INT J HUM ROBOT, V1, P613
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Jain AK, 1999, ACM COMPUT SURV, V31, P264, DOI 10.1145/331499.331504
JANUS B, 2006, THESIS U TOKYO TOKYO
Janus B., 2005, P IEEE INT C ADV ROB, P411
Jenkins O. C., 2004, INT J HUM ROBOT, V1, P237
KOENIG N, 2006, INT C DEV LEARN BLOO
Kohlmorgen J, 2002, ADV NEUR IN, V14, P793
KULIC D, 2009, SUPPLEMENTARY ON LIN
KULIC D, 2008, P IEEE RJS INT C INT, P2860
KULIC D, 2008, P IEEE INT C HUM ROB, P326
KULIC D, 2008, P IEEE INT C ROB AUT, P2591
Kulic D, 2008, INT J ROBOT RES, V27, P761, DOI 10.1177/0278364908091153
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Kuniyoshi Y., 1989, P INT S IND ROB, P119
Lieberman J., 2004, P IEEE RAS INT C HUM, P342
LIU S, 1992, J DYN SYST-T ASME, V114, P220, DOI 10.1115/1.2896518
Okada M, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1410, DOI 10.1109/ROBOT.2002.1014741
POMPLUN M, 2000, P IEEE INT C HUM ROB, P7
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
TAKANO W, 2006, P IEEE RAS INT C HUM, P425
Taylor G. W., 2006, P C NEUR INF PROC SY, P1345
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
NR 33
TC 55
Z9 55
U1 1
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD OCT
PY 2009
VL 25
IS 5
BP 1158
EP 1166
DI 10.1109/TRO.2009.2026508
PG 9
WC Robotics
SC Robotics
GA 503PF
UT WOS:000270549300015
DA 2018-01-22
ER

PT J
AU Yamamoto, K
Tanaka, S
Kobayashi, H
Kozima, H
Hashiya, K
AF Yamamoto, Kentaro
Tanaka, Saori
Kobayashi, Hiromi
Kozima, Hideki
Hashiya, Kazuhide
TI A Non-Humanoid Robot in the "Uncanny Valley": Experimental Analysis of
the Reaction to Behavioral Contingency in 2-3 Year Old Children
SO PLOS ONE
LA English
DT Article
ID SOCIAL CONTINGENCY; INFANTS; RECOGNITION; PREFERENCE
AB Infants' sensitivity to social or behavioral contingency has been examined in
the field of developmental psychology and behavioral sciences, mainly using a
double video paradigm or a still face paradigm. These studies have shown that
infants distinguish other individuals' contingent behaviors from non-contingent
ones. The present experiment systematically examined whether this ability extends
to the detection of non-humanoids' contingent actions in a communicative context. We
examined two- to three-year-olds' understanding of contingent actions produced by a
non-humanoid robot. The robot either responded contingently to the actions of the
participants (contingent condition) or programmatically reproduced the same
sequence of actions to another participant (non-contingent condition). The results
revealed that the participants exhibited different patterns of response depending
on whether or not the robot responded contingently. It was also found that the
participants did not respond positively to the contingent actions of the robot in
the earlier periods of the experimental sessions. This might reflect the conflict
between the non-humanlike appearance of the robot and its humanlike contingent
actions, which presumably led the children to experience the uncanny valley effect.
RP Yamamoto, K (reprint author), Kyushu Univ, Grad Sch Human Environm Studies,
Fukuoka 812, Japan.
EM yama-ken@kyudai.jp; hashiya@mindless.com
CR Arita A, 2005, COGNITION, V95, pB49, DOI 10.1016/j.cognition.2004.08.001
Bigelow AE, 1999, INFANT BEHAV DEV, V22, P367, DOI 10.1016/S0163-6383(99)00016-8
COHEN LB, 1975, INFANT PERCEPTION SE, V1, P347
de Haan M, 2002, J COGNITIVE NEUROSCI, V14, P199, DOI 10.1162/089892902317236849
ELLSWORTH CP, 1993, DEV PSYCHOL, V29, P63, DOI 10.1037//0012-1649.29.1.63
FUJITA K, 1987, PRIMATES, V28, P353, DOI 10.1007/BF02381018
GUSELLA JL, 1988, CHILD DEV, V59, P1111
Hains SMJ, 1996, INFANT BEHAV DEV, V19, P49, DOI 10.1016/S0163-6383(96)90043-0
Johnson SC, 2001, COGNITIVE DEV, V16, P637, DOI 10.1016/S0885-2014(01)00043-0
Johnson S, 1998, DEVELOPMENTAL SCI, V1, P233, DOI 10.1111/1467-7687.00036
Kozima H., 2006, MOBILE ROBOTS NEW AP, P269
KOZIMA H, 2004, INT J ARTIFICIAL LIF, V8, P83
Kozima H, 2009, INT J SOC ROBOT, V1, P3, DOI 10.1007/s12369-008-0009-8
MacDorman KF, 2009, COMPUT HUM BEHAV, V25, P695, DOI 10.1016/j.chb.2008.12.026
MELTZOFF AN, 2005, PERSPECTIVES IMITATI, V2, P55
Meltzoff A. N., 2001, INTENTIONS INTENTION, P171
Meltzoff AN, 2007, DEVELOPMENTAL SCI, V10, P126, DOI 10.1111/j.1467-
7687.2007.00574.x
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Murray L., 1985, SOCIAL PERCEPTION IN, P177
Nadel J, 1999, DEVELOPMENTAL SCI, V2, P164, DOI 10.1111/1467-7687.00065
Plenderleith M, 2005, BIOL LETT-UK, V1, P411, DOI 10.1098/rsbl.2005.0355
RYAN TA, 1960, PSYCHOL BULL, V57, P318, DOI 10.1037/h0044320
Sanefuji W, 2006, INFANT BEHAV DEV, V29, P584, DOI 10.1016/j.infbeh.2006.07.007
Sanefuji W, 2008, INFANT BEHAV DEV, V31, P624, DOI 10.1016/j.infbeh.2008.07.003
Stormark KM, 2004, INFANT BEHAV DEV, V27, P195, DOI 10.1016/j.infbeh.2003.09.004
Striano T, 2004, CHILD DEV, V75, P468, DOI 10.1111/j.1467-8624.2004.00687.x
Tanaka F, 2007, P NATL ACAD SCI USA, V104, P17954, DOI 10.1073/pnas.0707769104
TODA S, 1993, DEV PSYCHOL, V29, P532, DOI 10.1037//0012-1649.29.3.532
TRONICK E, 1978, J AM ACAD CHILD PSY, V17, P1, DOI 10.1016/S0002-7138(09)62273-1
Whaling CS, 1997, P NATL ACAD SCI USA, V94, P12694, DOI 10.1073/pnas.94.23.12694
NR 30
TC 4
Z9 4
U1 1
U2 4
PU PUBLIC LIBRARY SCIENCE
PI SAN FRANCISCO
PA 185 BERRY ST, STE 1300, SAN FRANCISCO, CA 94107 USA
SN 1932-6203
J9 PLOS ONE
JI PLoS One
PD SEP 16
PY 2009
VL 4
IS 9
AR e6974
DI 10.1371/journal.pone.0006974
PG 6
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 496KE
UT WOS:000269970000004
PM 19759818
OA gold
DA 2018-01-22
ER

PT J
AU Yoshida, E
Laumond, JP
Esteves, C
Kanoun, O
Mallet, A
Sakaguchi, T
Yokoi, K
AF Yoshida, Eiichi
Laumond, Jean-Paul
Esteves, Claudia
Kanoun, Oussama
Mallet, Anthony
Sakaguchi, Takeshi
Yokoi, Kazuhito
TI Motion autonomy for humanoids: experiments on HRP-2 No. 14
SO COMPUTER ANIMATION AND VIRTUAL WORLDS
LA English
DT Article
DE motion planning; humanoid; whole-body motion; locomotion
AB This paper deals with whole-body motion planning and dynamic control for
humanoids from two aspects: locomotion including manipulation, and reaching. In the
first part, we address a problem of simultaneous locomotion and manipulation
planning that combines a geometric and kinematic motion planner with a dynamic
humanoid motion generator. The second part deals with whole-body reaching tasks by
using a generalized inverse kinematics (IK) method to fully exploit the high
redundancy of the humanoid robot. Through experiments using the humanoid platform
HRP-2 No. 14 installed at LAAS-CNRS, we first verify the validity of each method. An
integrated experiment is then presented that unifies both results via visual
perception to execute an object-fetching task. Copyright (C) 2009 John Wiley &
Sons, Ltd.
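A minimal sketch of a generalized inverse kinematics step of the kind the abstract
refers to, using a damped pseudoinverse with a null-space projection for a secondary
objective; the Jacobian, task error, secondary gradient, and damping value are
placeholders, not the HRP-2 implementation.

    import numpy as np

    # One generalized-IK update: the primary task uses a damped pseudoinverse of
    # the task Jacobian, and a secondary objective is projected into its null space
    # to exploit the robot's kinematic redundancy.
    def generalized_ik_step(J, task_error, secondary_grad, damping=1e-2):
        m, n = J.shape
        J_pinv = J.T @ np.linalg.inv(J @ J.T + (damping ** 2) * np.eye(m))
        primary = J_pinv @ task_error                 # joint update for the main task
        null_proj = np.eye(n) - J_pinv @ J            # null-space projector
        return primary + null_proj @ secondary_grad   # redundancy used for 2nd task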
C1 [Yoshida, Eiichi] Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst,
Tsukuba, Ibaraki 3058568, Japan.
[Laumond, Jean-Paul; Kanoun, Oussama; Mallet, Anthony] CNRS, LAAS, F-31077
Toulouse, France.
[Esteves, Claudia] Univ Guanajuato, Fac Math, Mexico City, DF, Mexico.
[Mallet, Anthony] Ecole Polytech Fed Lausanne, ASL Lab, CH-1015 Lausanne,
Switzerland.
[Yokoi, Kazuhito] Univ Tsukuba, Cooperat Grad Sch, Tsukuba, Ibaraki 305, Japan.
RP Yoshida, E (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst
Res Inst, 1-1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
EM e.yoshida@aist.go.jp
RI Yoshida, Eiichi/M-3756-2016; Yokoi, Kazuhito/K-2046-2012; Sakaguchi,
Takeshi/M-4024-2016
OI Yoshida, Eiichi/0000-0002-3077-6964; Yokoi,
Kazuhito/0000-0003-3942-2027; Sakaguchi, Takeshi/0000-0002-2726-7448
FU Japan Society for the Promotion of Science (JSPS) [18300070]
FX This research was partially Supported by Japan Society for the Promotion
of Science (JSPS) Grant-in-Aid for Scientific Research (B), 18300070,
2006.
CR Baerlocher P, 2004, VISUAL COMPUT, V20, P402, DOI 10.1007/s00371-004-0244-4
Bradski G. R., 1998, INTEL TECHNOLOGY J
CHESTNUTT J, 2004, P IEEE RAS C HUM ROB, P422
Esteves C, 2006, ACM T GRAPHIC, V25, P319, DOI 10.1145/1138450.1138457
Gleicher M, 2001, GRAPH MODELS, V63, P107, DOI 10.1006/gmod.2001.0549
HARADA H, 2005, P 2005 IEEE INT C RO, P1712
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
Laumond J.-P., 1998, LECT NOTES CONTROL I, V229
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
Neo E.S., 2006, P IEEE RAS INT C HUM, P327
OKADA K, 2004, P INT C ROB AUT ICRA, P3207
Okada K., 2006, P 6 IEEE RAS INT C H, P7
Sentis L., 2005, INT J HUM ROBOT, V2, P505, DOI DOI 10.1142/S0219843605000594
Sian N., 2004, P IEEE RAS INT C HUM, P608
Siciliano B., 1991, P IEEE INT C ADV ROB, P1211
STILMAN M, 2004, P IEEE INT C HUM ROB, P322
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
Yamane K, 2004, ACM T GRAPHIC, V23, P532, DOI 10.1145/1015706.1015756
Yoshida Eiichi, 2008, P 2008 IEEE INT C RO, P1712
YOSHIDA E, 2006, P IEEE RSJ INT C INT, P827
YOSHIDA E, 2006, P IEEE RAS INT C HUM, P208
Yoshida E., 2005, P 2005 IEEE RSJ INT, P25
YOSHIDA H, 2001, P IEEE ASME INT C AD, P266
YOSHIKAWA T, 1985, INT J ROBOT RES, V4, P3, DOI 10.1177/027836498500400201
NR 27
TC 2
Z9 2
U1 0
U2 2
PU WILEY-BLACKWELL
PI HOBOKEN
PA 111 RIVER ST, HOBOKEN 07030-5774, NJ USA
SN 1546-4261
EI 1546-427X
J9 COMPUT ANIMAT VIRT W
JI Comput. Animat. Virtual Worlds
PD SEP-DEC
PY 2009
VL 20
IS 5-6
SI SI
BP 511
EP 522
DI 10.1002/cav.280
PG 12
WC Computer Science, Software Engineering
SC Computer Science
GA 516RI
UT WOS:000271559700004
DA 2018-01-22
ER

PT J
AU Yamada, M
Sano, S
Uchiyama, N
AF Yamada, Moyuru
Sano, Shigenori
Uchiyama, Naoki
TI Landing Control of Foot with Springs for Walking Robots on Rough Terrain
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Landing Control; Unknown Terrain; Foot with Springs; Biped Walking
ID PATTERN GENERATION; PREVIEW CONTROL; HUMANOID ROBOT; BIPED ROBOT; POINT
AB Landing control is one of the important issues for biped walking robots, because
robots are expected to walk not only on known flat surfaces but also on unknown and
uneven terrain when working in various fields. This paper presents a new controller
design for a robotic foot landing on unknown terrain. The robotic foot considered
in this study is equipped with springs to reduce the impact force at foot landing.
There are two objectives in the landing control: achieving the desired ground
reaction force and positioning the foot on the unknown terrain. To achieve these
two objectives simultaneously by adjusting the foot position, we propose a PI force
controller with a desired foot position, which guarantees robust stability of the
control system with respect to terrain variation and exact positioning of the foot
on the unknown terrain. Simulation results using the Open Dynamics Engine demonstrate the
effectiveness of the proposed controller.
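A minimal sketch of the PI force-control idea described above: the commanded foot
height is adjusted around a desired foot position so that the measured ground
reaction force tracks its reference while the spring-loaded foot settles onto the
unknown terrain. The gains, time step, and sign convention are illustrative
assumptions.

    # Lower the foot when the measured ground reaction force is below its
    # reference, raise it when it is above, around the desired foot position.
    class PIForceController:
        def __init__(self, kp=5e-4, ki=5e-3, dt=1e-3):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.integral = 0.0

        def command(self, force_ref, force_meas, z_desired):
            error = force_ref - force_meas        # positive: too little contact force
            self.integral += error * self.dt
            return z_desired - (self.kp * error + self.ki * self.integral)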
C1 [Sano, Shigenori; Uchiyama, Naoki] Toyohashi Univ Technol, Dept Mech Engn,
Toyohashi, Aichi, Japan.
[Yamada, Moyuru] Toyohashi Univ Technol, Grad Sch Engn, Toyohashi, Aichi, Japan.
RP Sano, S (reprint author), Toyohashi Univ Technol, Dept Mech Engn, Toyohashi,
Aichi, Japan.
EM sano@mech.tut.ac.jp
CR Farrell KD, 2007, IEEE INT CONF ROBOT, P3591, DOI 10.1109/ROBOT.2007.364028
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
HASHIMOTO K, 2005, P 23 J ROB SOC JAP
HASHIMOTO K, 2007, J ROBOTICS SOC JAPAN, V25, P851
Hashimoto K, 2007, IEEE INT CONF ROBOT, P1869, DOI 10.1109/ROBOT.2007.363594
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
HIRUKAWA H, 2007, P IEEE INT C ROB AUT, P3989
Hinikawa H., 2006, P IEEE INT C ROB AUT
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S, 2001, IEEE INT CONF ROBOT, P2299, DOI 10.1109/ROBOT.2001.932965
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Matsuda T., 2007, J ROBOTICS SOC JAPAN, V25, P231
MINAKATA H, 2006, P 20006 IEEE INT WOR, P386
MORISAWA M, AIST2006 C, P581
NAKAJIMA S, 2006, T JAPAN SOC MECH E C, V72, P2940
NAKANO E, 2004, INTELLIGENT LOCOMOTI, P153
Park J, 2007, IEEE INT CONF ROBOT, P2682, DOI 10.1109/ROBOT.2007.363870
SANO S, 2008, P IEEE INT WORKSH AD, V2, P480
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Yamaguchi J., 1996, J ROBOTICS SOC JAPAN, V14, P546
NR 20
TC 1
Z9 1
U1 0
U2 3
PU I-TECH EDUCATION AND PUBLISHING
PI VIENNA
PA ZIEGLERGASSE 14, VIENNA, EUROPEAN UNION 1070, AUSTRIA
SN 1729-8806
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD SEP
PY 2009
VL 6
IS 3
BP 201
EP 206
PG 6
WC Robotics
SC Robotics
GA 698DO
UT WOS:000285572100005
DA 2018-01-22
ER

PT J
AU Michel, P
Chestnutt, J
Kagami, S
Nishiwaki, K
Kuffner, JJ
Kanade, T
AF Michel, Philipp
Chestnutt, Joel
Kagami, Satoshi
Nishiwaki, Koichi
Kuffner, James J.
Kanade, Takeo
TI MOTION PLANNING USING PREDICTED PERCEPTIVE CAPABILITY
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article; Proceedings Paper
CT Workshop on the Active Vision of Humanoids held at the Conference on
Humanoid Rotobics
CY NOV, 2007
CL Pittsburgh, PA
DE Planning; vision; manipulation; navigation
AB We present an approach to motion planning for humanoid robots that aims to
ensure reliable execution by augmenting the planning process to reason about the
robot's ability to successfully perceive its environment during operation. By
efficiently simulating the robot's perception system during search, our planner
utilizes a perceptive capability metric that quantifies the 'sensability' of the
environment in each state given the task to be accomplished. We have applied our
method to the problem of planning robust autonomous grasping motions and walking
sequences as performed by an HRP-2 humanoid. A fast GPU-accelerated 3D tracker is
used for perception, with a grasp planner and footstep planner incorporating
reasoning about the robot's perceptive capability. Experimental results show that
considering information about the predicted perceptive capability ensures that
sensing remains operational throughout the grasping or walking sequence and yields
higher task success rates than perception-unaware planning.
C1 [Michel, Philipp; Kuffner, James J.; Kanade, Takeo] Carnegie Mellon Univ, Inst
Robot, Pittsburgh, PA 15213 USA.
[Chestnutt, Joel; Kagami, Satoshi; Nishiwaki, Koichi; Kuffner, James J.; Kanade,
Takeo] Natl Inst Adv Ind Sci & Technol, Digital Human Res Ctr, Koto Ku, Tokyo
1350064, Japan.
RP Michel, P (reprint author), Carnegie Mellon Univ, Inst Robot, 5000 Forbes Ave,
Pittsburgh, PA 15213 USA.
EM pmichel@cs.cmu.edu; joel.chestnutt@aist.go.jp; s.kagami@aist.go.jp;
k.nishiwaki@aist.go.jp; kuffner@cs.cmu.edu; tk@cs.cmu.edu
RI Kagami, Satoshi/A-6841-2013
CR Berenson D., 2007, IEEE RAS INT C HUM R
Bourgault F, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P540, DOI 10.1109/IRDS.2002.1041446
CHESTNUTT J, 2007, THESIS CARNEGIE MELL
GANCET J, 2003, P IEEE RSJ INT C INT
Hutchinson S, 1996, IEEE T ROBOTIC AUTOM, V12, P651, DOI 10.1109/70.538972
KALRA N, 2006, IEEE INT C ROB AUT I, P4300
Kuffner J., 2000, IEEE INT C ROB AUT, P995
Kyrki V, 2004, IEEE INT CONF ROBOT, P1861, DOI 10.1109/ROBOT.2004.1308095
LaValle SM, 1997, IEEE INT CONF ROBOT, P731, DOI 10.1109/ROBOT.1997.620122
MICHEL P, 2007, IEEE RSJ INT C INT R
NABBE B, 2005, THESIS CARNEGIE MELL
*NVIDIA, NVIDIA OCCL QUER EXT
PITO R, 1996, IAPR INT C PATT REC, P941
Reed MK, 2000, IEEE T PATTERN ANAL, V22, P1460, DOI 10.1109/34.895979
ROY N, 1999, ADV NEURAL PROCESSIN, V12, P1043
SUTANTO H, 1997, IEEE INT S ASS TASK, P237
Weiss L., 1987, IEEE J ROBOTICS AUTO, V3
NR 17
TC 4
Z9 4
U1 0
U2 3
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD SEP
PY 2009
VL 6
IS 3
BP 435
EP 457
DI 10.1142/S0219843609001826
PG 23
WC Robotics
SC Robotics
GA 497FH
UT WOS:000270041900005
DA 2018-01-22
ER

PT J
AU Morimoto, J
Atkeson, CG
AF Morimoto, Jun
Atkeson, Christopher G.
TI Nonparametric representation of an approximated Poincaré map for
learning biped locomotion
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE Bipedal walking; Reinforcement learning; Poincare map; Gaussian
processes; Humanoid robot
ID POLICY GRADIENT-METHOD; GAUSSIAN-PROCESSES; DYNAMIC WALKING;
REINFORCEMENT; ROBOT; REGRESSION; MODEL
AB We propose approximating a Poincaré map of biped walking dynamics using Gaussian
processes. We locally optimize parameters of a given biped walking controller based
on the approximated Poincaré map. By using Gaussian processes, we can estimate a
probability distribution of a target nonlinear function with a given covariance.
Thus, an optimization method can take the uncertainty of approximated maps into
account throughout the learning process. We use a reinforcement learning (RL)
method as the optimization method. Although RL is a useful non-linear optimizer, it
is usually difficult to apply RL to real robotic systems due to the large number of
iterations required to acquire suitable policies. In this study, we first
approximated the Poincaré map by using data from a real robot, and then applied RL
using the estimated map in order to optimize stepping and walking policies. We show
that we can improve stepping and walking policies both in simulated and real
environments. Experimental validation of the approach on a humanoid robot is
presented.
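A minimal sketch of approximating a scalar Poincaré return map with Gaussian
process regression, as described above; the RBF kernel, its hyperparameters, and
the toy data are assumptions, and the policy-optimization step built on top of the
map is omitted.

    import numpy as np

    # GP regression of the step-to-step map x_{k+1} = f(x_k) on a Poincaré section,
    # returning a predictive mean and an uncertainty for a query state.
    def rbf(a, b, length=0.3, var=1.0):
        d = a[:, None] - b[None, :]
        return var * np.exp(-0.5 * (d / length) ** 2)

    def gp_predict(x_train, y_train, x_query, noise=1e-3):
        K_inv = np.linalg.inv(rbf(x_train, x_train) + noise * np.eye(len(x_train)))
        k_star = rbf(x_query, x_train)
        mean = k_star @ K_inv @ y_train
        cov = rbf(x_query, x_query) - k_star @ K_inv @ k_star.T
        return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

    # Toy data: successive Poincaré-section states recorded from walking trials.
    x_k  = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
    x_k1 = np.array([0.12, 0.16, 0.19, 0.24, 0.28])
    mu, sigma = gp_predict(x_k, x_k1, np.array([0.22]))
    print(mu, sigma)   # predictive next-state mean and its uncertainty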
C1 [Morimoto, Jun] Japan Sci & Technol Agcy, ICORP, Computat Brain Project,
Saitama, Japan.
[Morimoto, Jun] ATR Computat Neurosci Labs, Dept Brain Robot Interface, Kyoto,
Japan.
[Atkeson, Christopher G.] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213
USA.
RP Morimoto, J (reprint author), Japan Sci & Technol Agcy, ICORP, Computat Brain
Project, Saitama, Japan.
EM xmorimo@atr.jp; cga@cs.cmu.edu
FU National Science Foundation [ECS-0325383, EEC-0540865, ECS-0824077]
FX We thank Sony Corp. for allowing us the opportunity to use the small
humanoid robot. We thank Gen Endo for helping us to setup experiments.
We also thank Gordon Cheng for helpful discussions. This material is
based upon work supported in part by the National Science Foundation
under grants ECS-0325383, EEC-0540865, and ECS-0824077.
CR Abbeel P., 2006, P 23 INT C MACH LEAR, P1
Atkeson CG, 1998, ADV NEUR IN, V10, P1008
Atkeson C. G., 1997, P 14 INT C MACH LEAR, P12
BAGNELL A, 2003, P 18 INT JOINT C ART, P1019
Baird L, 1999, ADV NEUR IN, V11, P968
Benbrahim H, 1997, ROBOT AUTON SYST, V22, P283, DOI 10.1016/S0921-8890(97)00043-
2
Bishop C. M., 2006, PATTERN RECOGNITION
BYL K, 2008, P ROB SCI SYST ZUR S, V4
DEARDEN R, 1999, P 15 C UNC ART INT S, P457
DERLINDE RQV, 1999, BIOL CYBERN, V82, P227
Doya K, 2000, NEURAL COMPUT, V12, P219, DOI 10.1162/089976600300015961
Endo G, 2008, INT J ROBOT RES, V27, P213, DOI 10.1177/0278364907084980
Ghavamzadeh M., 2007, ADV NEURAL INFORM PR, V19, P457
HIRAI K, 1998, P 1998 IEEE INT C RO, P160
Howard M, 2009, AUTON ROBOT, V27, P105, DOI 10.1007/s10514-009-9129-8
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Jaakkola T., 1995, Advances in Neural Information Processing Systems 7, P345
Kajita S, 2007, IEEE ROBOT AUTOM MAG, V14, P63, DOI 10.1109/MRA.2007.380655
Kakade S, 2002, ADV NEUR IN, V14, P1531
Kimura H, 1998, P 15 INT C MACH LEAR, P284
Ko J, 2009, AUTON ROBOT, V27, P75, DOI 10.1007/s10514-009-9119-x
Konda VR, 2003, SIAM J CONTROL OPTIM, V42, P1143, DOI 10.1137/S0363012901385691
Kuvayev L., 1996, P 9 YAL WORKSH AD LE, P101
Matsubara T, 2006, ROBOT AUTON SYST, V54, P911, DOI 10.1016/j.robot.2006.05.012
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
MEULEAU N, 2001, 2001003 AI MIT
MIURA H, 1984, INT J ROBOT RES, V3, P60, DOI 10.1177/027836498400300206
MIYAZAKI F, 1981, 8 IFAC, P43
Morimoto J, 2001, ROBOT AUTON SYST, V36, P37, DOI 10.1016/S0921-8890(01)00113-0
Morimoto J, 2006, IEEE INT CONF ROBOT, P1579, DOI 10.1109/ROBOT.2006.1641932
Morimoto J, 2008, IEEE T ROBOT, V24, P185, DOI 10.1109/TRO.2008.915457
Morimoto J, 2007, IEEE ROBOT AUTOM MAG, V14, P41, DOI 10.1109/MRA.2007.380654
NAGASAKA K, 2004, P IEEE INT C ROB AUT, P3189
Nagasaka K., 1999, P 17 ANN C ROB SOC J, P1193
Peters J, 2008, NEURAL NETWORKS, V21, P682, DOI 10.1016/j.neunet.2008.02.003
Peters J, 2008, NEUROCOMPUTING, V71, P1180, DOI 10.1016/j.neucom.2007.11.026
Quinonero-Candela JQ, 2005, J MACH LEARN RES, V6, P1939
Rasmussen CE, 2004, ADV NEUR IN, V16, P751
Rasmussen CE, 2005, ADAPT COMPUT MACH LE, P1
Riedmiller M, 2009, AUTON ROBOT, V27, P55, DOI 10.1007/s10514-009-9120-4
SHIRIAEV AS, 2005, P 44 IEEE C DEC CONT, P4035
Smola AJ, 2001, ADV NEUR IN, V13, P619
Snelson E., 2006, ADV NEURAL INFORM PR, V18, P1257
Strogatz S.H., 1994, NONLINEAR DYNAMICS C
SUGIHARA T, 2002, IEEE INT C ROB AUT W
Sugihara T., 2005, IEEE INT C ROB AUT I, P306
Sutton RS, 2000, ADV NEUR IN, V12, P1057
Sutton RS, 1998, REINFORCEMENT LEARNI
Tedrake R., 2004, P IEEE INT C INT ROB, P2849
Tsuchiya K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1745
Westervelt ER, 2004, INT J ROBOT RES, V23, P559, DOI 10.1177/0278364904044410
Williams CKI, 1996, ADV NEUR IN, V8, P514
NR 52
TC 14
Z9 14
U1 3
U2 5
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD AUG
PY 2009
VL 27
IS 2
BP 131
EP 144
DI 10.1007/s10514-009-9133-z
PG 14
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA 496CT
UT WOS:000269947000005
DA 2018-01-22
ER

PT J
AU Stasse, O
Verrelst, B
Vanderborght, B
Yokoi, K
AF Stasse, Olivier
Verrelst, Bjorn
Vanderborght, Bram
Yokoi, Kazuhito
TI Strategies for Humanoid Robots to Dynamically Walk Over Large Obstacles
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Humanoid robots; obstacle negotiation; trajectory planning
AB This study proposes a complete solution to make the humanoid robot HRP-2
dynamically step over large obstacles. As compared with previous results using
quasistatic stability, where the robot crosses over a 15-cm obstacle in 40 s, our
solution allows HRP-2 to step over the same obstacle in 4 s. This approach allows
the robot to clear obstacles as high as 21% of the robot's leg length (15 cm) while
walking. Simulations show the possibility of stepping over an obstacle that is 35%
of the leg length (25 cm) with a margin of 3 cm.
C1 [Stasse, Olivier; Verrelst, Bjorn; Yokoi, Kazuhito] Adv Ind Sci & Technol,
Joint Robot Lab,CNRS, Unite Mixte Int 3218, Collaborat Res Team, Tsukuba 3058568,
Japan.
[Vanderborght, Bram] Vrije Univ Brussel, Robot & Multibody Mech Res Grp, B-1050
Brussels, Belgium.
RP Stasse, O (reprint author), Adv Ind Sci & Technol, Joint Robot Lab,CNRS, Unite
Mixte Int 3218, Collaborat Res Team, Tsukuba 3058568, Japan.
EM olivier.stasse@aist.go.jp; bjorn.verrelst@vub.ac.be;
bram.vanderborght@vub.ac.be; kazuhito.yokoi@aist.go.jp
RI Stasse, Olivier/E-6220-2010; Vanderborght, Bram/A-1599-2008; Yokoi,
Kazuhito/K-2046-2012
OI Vanderborght, Bram/0000-0003-4881-9341; Yokoi,
Kazuhito/0000-0003-3942-2027
FU Japan Society for Promotion of Science; Scientific Research-Flanders
(Belgium, FWO)
FX This work was supported by the Japan Society for Promotion of Science
under Grant Postdoctoral Fellowship. The work of B. Vanderborght was
supported by the Fund for Scientific Research-Flanders (Belgium, FWO).
CR Buschmann T., 2007, P IEEE RAS INT C HUM, P1
CHESTNUTT J, 2003, P IEEE INT C HUM ROB, P13
Cormen T. H., 1996, INTRO ALGORITHMS
CUPEC R, 2005, P IEEE RSJ INT C INT, P3089
Guan Y., 2005, P IEEE INT C ROB AUT, P1066
Guan YH, 2006, IEEE T ROBOT, V22, P958, DOI 10.1109/TRO.2006.878962
JARFI AR, 2006, P IEEE INT C ROB BIO, P1348
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kajita S., 2005, HUMANOID ROBOT
Kaneko K., 2005, P IEEE RSJ INT C INT, P1457
Lorch O, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2484, DOI 10.1109/IRDS.2002.1041642
MICHEL P, 2005, P IEEE RAS INT C HUM, P13
Morisawa M, 2007, IEEE INT CONF ROBOT, P3989, DOI 10.1109/ROBOT.2007.364091
Nakaoka S, 2007, P 2007 IEEE RSJ INT, P3641
Raibert M.H., 1986, LEGGED ROBOTS BALANC
Seara JF, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P31, DOI 10.1109/IRDS.2002.1041357
SEARA JF, 2001, IEEE RAS INT C HUM R
VERRELST B, 2006, P IEEE RAS INT C HUM, P117
VERRELST B, 2006, P IEEE ICMA, P1072
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Wieber P. B., 2006, P IEEE RAS INT C HUM, P137, DOI DOI 10.1109/ICHR.2006.321375
Yagi M, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P375, DOI 10.1109/ROBOT.1999.770007
NR 23
TC 20
Z9 22
U1 0
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2009
VL 25
IS 4
BP 960
EP 967
DI 10.1109/TRO.2009.2020354
PG 8
WC Robotics
SC Robotics
GA 480SP
UT WOS:000268757400020
DA 2018-01-22
ER
PT J
AU Yang, W
Chong, NY
AF Yang, Woosung
Chong, Nak Young
TI Imitation Learning of Humanoid Locomotion Using the Direction of Landing
Foot
SO INTERNATIONAL JOURNAL OF CONTROL AUTOMATION AND SYSTEMS
LA English
DT Article
DE Biped locomotion; humanoid robot; imitation learning; self-adjusting
adaptor; ZMP
ID DYNAMICS COMPUTATION
AB Since it is quite difficult to create motions for humanoid robots, which have a
fairly large number of degrees of freedom, it would be very convenient if robots
could create the motions they need by observing and imitating them. To this end,
this paper discusses how humanoid robots can learn through imitation, taking into consideration
the fact that demonstrator and imitator robots may have different kinematics and
dynamics. As part of a wider interest in humanoid motion generation in general,
this work mainly investigates how imitator robots adapt a reference locomotion gait
copied from a demonstrator robot. Specifically, the self-adjusting adaptor is
proposed, where the perceived locomotion pattern is modified to keep the direction
of the lower leg contacting the ground identical between the demonstrator and the
imitator, and to sustain dynamic stability by controlling the position of the
center of mass. The validity of the proposed scheme is verified through simulations
on OpenHRP and real experiments.
C1 [Yang, Woosung] Korea Inst Sci & Technol, Ctr Cognit Robot Res, Seoul, South
Korea.
[Chong, Nak Young] Japan Adv Inst Sci & Technol, Sch Informat Sci, Nomi,
Ishikawa, Japan.
RP Yang, W (reprint author), Korea Inst Sci & Technol, Ctr Cognit Robot Res, Seoul,
South Korea.
EM wsyang@kist.re.kr; nakyoung@jaist.ac.jp
FU Ministry of Education, Culture, Sports, Science and Technology of Japan;
MIC; IITA of Korea
FX This work was conducted as a program for the "Fostering Talent in
Emergent Research Fields" in Special Coordination Funds for the
Promotion of Science and Technology by the Ministry of Education,
Culture, Sports, Science and Technology of Japan. This work was also
supported in part by MIC and IITA of Korea through IT Leading R&D Support
Project. [2009-S028-01, Development of Cooperative Network-based
Humanoids Technology]
CR Billard A, 2004, ROBOT AUTON SYST, V47, P65, DOI 10.1016/j.robot.2004.03.001
CALDERON CAA, 2005, P 3 INT S IM AN ART, P1
Dasgupta A, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P1044, DOI 10.1109/ROBOT.1999.772454
DEMIRIS J, 1997, P AAAI FALL S SOC IN, P28
Endo G., 2005, P 2005 IEEE INT C RO, P598
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Kajita S, 2001, IEEE INT CONF ROBOT, P2299, DOI 10.1109/ROBOT.2001.932965
Kong JS, 2008, INT J CONTROL AUTOM, V6, P551
MATARIC M, 2002, SENSORY MOTOR PRIMIT, P392
Mataric MJ, 1998, COGNITIVE BRAIN RES, V7, P191, DOI 10.1016/S0926-
6410(98)00025-1
NAKAMURA Y, 1989, IEEE T ROBOTIC AUTOM, V5, P294, DOI 10.1109/70.34765
Nakamura Y, 2000, IEEE T ROBOTIC AUTOM, V16, P124, DOI 10.1109/70.843167
NEHANIV CL, 2000, INTERDISCIPLINARY AP, P136
Pratt J, 2001, INT J ROBOT RES, V20, P129, DOI 10.1177/02783640122067309
Samejima K., 2002, Transactions of the Institute of Electronics, Information and
Communication Engineers D-II, VJ85D-II, P90
Samejima K, 2003, NEURAL NETWORKS, V16, P985, DOI 10.1016/S0893-6080(02)00235-6
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
VUKOBRAT.M, 1969, IEEE T BIO-MED ENG, VBM16, P1, DOI 10.1109/TBME.1969.4502596
VUKOBRATOVIC M, 1973, IEEE T SYST MAN CYB, VSMC3, P497, DOI
10.1109/TSMC.1973.4309277
Yamamoto A, 2003, CURR METH INORG CHEM, V3, P1
YANG W, 2005, P INT C ADV ROB, P404
YANG W, 2006, P IEEE RSJ INT C INT, P5650
Yang W., 2007, P IEEE INT C INT ROB, P309
NR 24
TC 3
Z9 3
U1 0
U2 3
PU INST CONTROL ROBOTICS & SYSTEMS, KOREAN INST ELECTRICAL ENGINEERS
PI BUCHEON
PA BUCHEON TECHNO PARK 401-1506, 193 YAKDAE-DONG WONMI-GU, BUCHEON,
GYEONGGI-DO 420-734, SOUTH KOREA
SN 1598-6446
EI 2005-4092
J9 INT J CONTROL AUTOM
JI Int. J. Control Autom. Syst.
PD AUG
PY 2009
VL 7
IS 4
BP 585
EP 597
DI 10.1007/s12555-009-0410-6
PG 13
WC Automation & Control Systems
SC Automation & Control Systems
GA 477HF
UT WOS:000268509200010
DA 2018-01-22
ER

PT J
AU Dalla Libera, F
Minato, T
Fasel, I
Ishiguro, H
Pagello, E
Menegatti, E
AF Dalla Libera, Fabio
Minato, Takashi
Fasel, Ian
Ishiguro, Hiroshi
Pagello, Enrico
Menegatti, Emanuele
TI A new paradigm of humanoid robot motion programming based on touch
interpretation
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article; Proceedings Paper
CT 2nd Workshop on Humanoid Soccer Robots
CY NOV 29, 2007
CL Pittsburgh, PA
DE Motion editor; Human-robot interaction; Touch
AB Most humanoid soccer robot teams design the basic movements of their robots,
like walking and kicking, off-line and manually. Once these motions are considered
satisfactory, they are stored in the robot's memory and played according to a high
level behavioral strategy. Much time is spent in the development of the movements,
and despite the significant progress made in humanoid soccer robots, the interfaces
employed for the development of motions are still quite primitive. In order to
accelerate development, an intuitive instruction method is desired. We propose the
development of robot motions through physical interaction. In this paper we propose
a "teaching by touching" approach; the human operator teaches a motion by directly
touching the robot's body parts like a dance instructor. Teaching by directly
touching is intuitive for instructors. However, the robot needs to interpret the
instructor's intention since tactile communication can be ambiguous. This paper
presents a method for learning to interpret the meaning of touches and investigates,
through experiments, a general (shared among different users) and intuitive manner of
touching. (C) 2009 Elsevier B.V. All rights reserved.
C1 [Dalla Libera, Fabio; Pagello, Enrico; Menegatti, Emanuele] Univ Padua, Fac
Engn, Intelligent Autonomous Syst Lab, Dept Informat Engn, I-35131 Padua, Italy.
[Minato, Takashi; Ishiguro, Hiroshi] Osaka Univ, ERATO, Japan Sci & Technol
Agcy, Suita, Osaka 5650871, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Dept Adapt Machine Syst, Suita, Osaka 5650871,
Japan.
[Fasel, Ian] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA.
RP Dalla Libera, F (reprint author), Univ Padua, Fac Engn, Intelligent Autonomous
Syst Lab, Dept Informat Engn, Via Gradenigo 6-A, I-35131 Padua, Italy.
EM fabiodl@gmail.com
OI Menegatti, Emanuele/0000-0001-5794-9979
CR BALTES J, 2006, HUMANOID ROBOTS ABAR
Borg I., 2005, MODERN MULTIDIMENSIO
FURUTA T, 1999, J ROBOTICS MECHATRON, V11, P304
Grunwald G, 2001, ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P347, DOI
10.1109/ROMAN.2001.981928
GUPTA VN, 1978, IEEE T ACOUST SPEECH, V26, P27, DOI 10.1109/TASSP.1978.1163054
Henry JL, 2001, ICCIMA 2001: FOURTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL
INTELLIGENCE AND MULTIMEDIA APPLICATIONS, PROCEEDINGS, P357, DOI
10.1109/ICCIMA.2001.970494
HOERL AE, 1970, TECHNOMETRICS, V12, P55
LIDA S, 2004, P 2004 INT S MICR HU, P353
Michie D., 1994, MACHINE LEARNING NEU
NAKAOKA S, 2003, IEEE INT C ROB AUT I, V3, P2905
NODA T, 2007, IEEE RSJ INT C INT R, P1099
OKUMURA Y, 2003, IEEE RSJ INT C INT R, V1, P335
STCKLER J, P DYN WALK 2006, P29
Takeda T, 2007, IEEE RSJ INT C INT R, P3258
VOYLES RM, 1995, IROS '95 - 1995 IEEE/RSJ INTERNATIONAL CONFERENCE ON
INTELLIGENT ROBOTS AND SYSTEMS: HUMAN ROBOT INTERACTION AND COOPERATIVE ROBOTS,
PROCEEDINGS, VOL 3, P7, DOI 10.1109/IROS.1995.525854
Wama T, 2004, LECT NOTES COMPUT SC, V3166, P14
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
YAMASAKI F, 2002, P 2002 IEEE INT C RO
YOSHIKAI T, 2007, IEEE RAS INT C HUM R
NR 19
TC 3
Z9 3
U1 0
U2 4
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JUL 31
PY 2009
VL 57
IS 8
SI SI
BP 846
EP 859
DI 10.1016/j.robot.2009.03.013
PG 14
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 465ZS
UT WOS:000267630400010
DA 2018-01-22
ER

PT J
AU Mayer, NM
Boedecker, J
Asada, M
AF Mayer, Norbert Michael
Boedecker, Joschka
Asada, Minoru
TI Robot motion description and real-time management with the Harmonic
Motion Description Protocol
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article; Proceedings Paper
CT 2nd Workshop on Humanoid Soccer Robots
CY NOV 29, 2007
CL Pittsburgh, PA
DE Humanoid robots; Motion design; Harmonic functions
ID HUMANOID CHALLENGE
AB We describe the Harmonic Motion Description Protocol (HMDP), which can serve as
part of a toolset for rapid prototyping of behaviors in a hybrid simulation/real-robot
environment. In particular, we focus on the RoboCup 3D Soccer Simulation League server,
which is currently evolving rapidly and is becoming an increasingly useful open-source,
general-purpose simulation environment for physical multiagent systems. HMDP uses
harmonic functions to describe motions. It allows for the superposition of motions and
is therefore capable of describing parametric motions. Thus, typical open-loop motions
of small humanoid robots (walking on the spot, walking forward, turning, standing up)
are readily available and can be optimized by hand. In conjunction with HMDP, some
software tools and a small real-time motion generator (called the Motion Machine) have
been developed. The current implementation is very flexible and can easily be
implemented on rather small embedded systems. (C) 2009 Elsevier B.V. All rights
reserved.
C1 [Mayer, Norbert Michael] Natl Chung Cheng Univ, Dept Elect Engn, Chiayi 62102,
Taiwan.
[Boedecker, Joschka; Asada, Minoru] Osaka Univ, Dept Mech Engn, Asada Project
Synergist Intelligence & Emergent R, Grad Sch Engn,ERATO,JST, Suita, Osaka 5650871,
Japan.
RP Mayer, NM (reprint author), Natl Chung Cheng Univ, Dept Elect Engn, 168 Univ Rd,
Chiayi 62102, Taiwan.
EM nmmayer@gmail.com
RI Boedecker, Joschka/A-8144-2010; Mayer, Norbert Michael/F-7958-2010
OI Boedecker, Joschka/0000-0002-3486-7345;
CR BALTES J, 2007, ROBOCUP S 2007
BEHNKE S, 2007, ROBOCUP S 2007
Bessonnet G, 2005, INT J ROBOT RES, V24, P523, DOI 10.1177/0278364905055377
BOEDECKER J, 2005, P 3 INT S AUT MIN RE
CALDERON CAA, 2007, ROBOCUP S 2007
CARPIN S, 2006, P WORKSH HUM SOCC RO, P71
DALLALIBERA F, 2007, 2007 IEEE RAS INT C
FRIEDMANN M, 2007, ROBOCUP S 2007
HENNING S, 2004, P 2004 IEEE INT C RO
HOFFMANN J, 2004, COMPUTER SCI 2004, V26, P275
HOWARD T, 2006, P 2006 IEEE RSJ INT, P4827
Kitano H, 2000, ADV ROBOTICS, V13, P723
KITANO H, 1997, AI MAGAZINE
KUROKI Y, 2004, P 2003 IEEE RSJ INT, V2, P1394
Mayer NM, 2008, INT J HUM ROBOT, V5, P335, DOI 10.1142/S0219843608001455
MAYER NM, 2007, P SEND WORKSH HUM SO
MAYER NM, 2007, ROBOCUP S 2007
MAYER NM, 2007, LECT NOTES COMPUTER, P25
MAYER NM, 2007, HMDP TECHNICAL DESCR
MENEGATTI E, 2007, ARTISTI HUMANOID TEA
Obst O, 2005, COMPUT SYST SCI ENG, V20, P347
*OSL TOLLT ASA, QT CROSS PLATF RICH
TAKAYAMA H, 2006, ROBOCUP 2006 S PAP T
TATANI K, 2003, P 2 INT S AD MOT AN
NR 24
TC 0
Z9 0
U1 0
U2 4
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JUL 31
PY 2009
VL 57
IS 8
SI SI
BP 870
EP 876
DI 10.1016/j.robot.2009.03.014
PG 7
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 465ZS
UT WOS:000267630400012
DA 2018-01-22
ER
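
Note: the record above encodes robot motions as superpositions of harmonic functions.
The following minimal Python sketch (illustrative only; it is not the HMDP wire format,
and all joint names and parameter values are hypothetical) shows the core idea of
evaluating and superposing sum-of-sines joint trajectories.

```python
"""Minimal sketch of harmonic motion superposition, in the spirit of HMDP.

A joint trajectory is represented as a list of (amplitude, frequency, phase)
terms; evaluating it is a weighted sum of sinusoids, so two motions can be
superposed simply by concatenating their term lists. Illustration only, not
the actual HMDP encoding.
"""
import math

def evaluate(harmonics, t, offset=0.0):
    """Return the joint angle [rad] at time t for a sum-of-sines motion."""
    return offset + sum(a * math.sin(2.0 * math.pi * f * t + phi)
                        for a, f, phi in harmonics)

# Hypothetical open-loop gait components for one knee joint:
step_swing = [(0.35, 1.0, 0.0)]            # 0.35 rad amplitude at 1 Hz
weight_shift = [(0.10, 0.5, math.pi / 2)]  # slower lateral-shift component

combined = step_swing + weight_shift        # superposition = concatenation

for k in range(5):
    t = 0.05 * k
    print(f"t={t:.2f}s  knee={evaluate(combined, t, offset=0.4):+.3f} rad")
```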

PT J
AU Guan, YS
Yokoi, K
Zhang, XM
AF Guan, Yisheng
Yokoi, Kazuhito
Zhang, Xianmin
TI Numerical Methods for Reachable Space Generation of Humanoid Robots (vol
27, pg 935, 2008)
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Correction
C1 [Guan, Yisheng; Zhang, Xianmin] S China Univ Technol, Sch Mech & Automot Engn,
Guangzhou 510640, Guangdong, Peoples R China.
[Yokoi, Kazuhito] Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst,
JRL, Tsukuba, Ibaraki 3058568, Japan.
RP Guan, YS (reprint author), S China Univ Technol, Sch Mech & Automot Engn,
Guangzhou 510640, Guangdong, Peoples R China.
EM ysguan@scut.edu.cn; kazuhito.yokoi@aist.go.jp; zhangxm@scut.edu.cn
CR Guan Y, 2008, INT J ROBOT RES, V27, P935, DOI 10.1177/0278364908095142
NR 1
TC 0
Z9 1
U1 0
U2 4
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD JUL
PY 2009
VL 28
IS 7
BP 927
EP 927
PG 1
WC Robotics
SC Robotics
GA 462SX
UT WOS:000267378000006
DA 2018-01-22
ER

PT J
AU Nomura, T
Kawa, K
Suzuki, Y
Nakanishi, M
Yamasaki, T
AF Nomura, Taishin
Kawa, Kazuyoshi
Suzuki, Yasuyuki
Nakanishi, Masao
Yamasaki, Taiga
TI Dynamic stability and phase resetting during biped gait
SO CHAOS
LA English
DT Article
ID CENTRAL PATTERN GENERATORS; MUSCULO-SKELETAL SYSTEM; CATS STEP CYCLE;
FICTIVE LOCOMOTION; HUMANS; MODEL; OSCILLATORS; ENTRAINMENT; EMERGENCE;
WALKING
AB Dynamic stability during periodic biped gait in humans and in a humanoid robot
is considered. Here, the gait systems of the human neuromusculoskeletal system and of
a humanoid are modeled simply while keeping their mechanical properties plausible. We
prescribe periodic gait trajectories in terms of the joint angles of the models as a
function of time. The equations of motion of the models are then constrained by one of
the prescribed gait trajectories to obtain a type of periodically forced nonlinear
dynamical system. The simulated models may or may not fall down during gait, since the
constraints are imposed only on the joint angles of the limbs but not on the motion of
the body trunk. The equations of motion can exhibit a limit cycle solution (or an
oscillatory solution that can practically be considered a limit cycle) for each
selected gait trajectory, if an initial condition is set
appropriately. We analyze the stability of the limit cycle in terms of Poincare
maps and the basin of attraction of the limit cycle in order to examine how the
stability depends on the prescribed trajectory. Moreover, the phase resetting of
gait rhythm in response to external force perturbation is modeled. Since we always
prescribe a gait trajectory in this study, reacting gait trajectories during the
phase resetting are also prescribed. We show that an optimally prescribed reacting
gait trajectory with an appropriate amount of the phase resetting can increase the
gait stability. Neural mechanisms for generation and modulation of the gait
trajectories are discussed. (C) 2009 American Institute of Physics. [DOI:
10.1063/1.3138725]
C1 [Nomura, Taishin; Kawa, Kazuyoshi; Suzuki, Yasuyuki; Nakanishi, Masao] Osaka
Univ, Grad Sch Engn Sci, Osaka 5608531, Japan.
[Nomura, Taishin] Osaka Univ, Ctr Adv Med Engn & Informat, Osaka 5608531, Japan.
[Yamasaki, Taiga] Okayama Prefectural Univ, Okayama 7191197, Japan.
RP Nomura, T (reprint author), Osaka Univ, Grad Sch Engn Sci, Osaka 5608531, Japan.
RI Nomura, Taishin/J-8408-2012
OI Nomura, Taishin/0000-0002-6545-2097; Suzuki,
Yasuyuki/0000-0001-9681-7856
CR ANDERSSON O, 1983, ACTA PHYSIOL SCAND, V118, P229, DOI 10.1111/j.1748-
1716.1983.tb07267.x
Aoi S, 2005, AUTON ROBOT, V19, P219, DOI 10.1007/s10514-005-4051-1
BAEV KV, 1991, NEUROSCIENCE, V43, P237, DOI 10.1016/0306-4522(91)90431-M
Bottaro A, 2005, HUM MOVEMENT SCI, V24, P588, DOI 10.1016/j.humov.2005.07.006
Bottaro A, 2008, HUM MOVEMENT SCI, V27, P473, DOI 10.1016/j.humov.2007.11.005
Brown TG, 1914, J PHYSIOL-LONDON, V48, P18
Cabrera JL, 2002, PHYS REV LETT, V89, DOI 10.1103/PhysRevLett.89.158702
Darbon P, 2004, EUR J NEUROSCI, V20, P976, DOI 10.1111/j.1460-9568.2004.03565.x
Dimitrijevic MR, 1998, ANN NY ACAD SCI, V860, P360, DOI 10.1111/j.1749-
6632.1998.tb09062.x
Duysens J, 1998, GAIT POSTURE, V7, P131, DOI 10.1016/S0966-6362(97)00042-8
DUYSENS J, 1977, BRAIN RES, V133, P190, DOI 10.1016/0006-8993(77)90063-4
Eurich CW, 1996, PHYS REV E, V54, P6681, DOI 10.1103/PhysRevE.54.6681
FORSSBERG H, 1979, J NEUROPHYSIOL, V42, P936
GRILLNER S, 1979, EXP BRAIN RES, V34, P241
KAWATO M, 1981, J MATH BIOL, V12, P13
KOBAYASHI M, 2000, T JPN SOC MED BIOL E, V38, P20
Leblond H, 2000, J PHYSIOL-LONDON, V525, P225, DOI 10.1111/j.1469-7793.2000.t01-
1-00225.x
Milton JG, 2008, EPL-EUROPHYS LETT, V83, DOI 10.1209/0295-5075/83/48001
Nakanishi M, 2006, BIOL CYBERN, V95, P503, DOI 10.1007/s00422-006-0102-8
NASHNER LM, 1980, J NEUROPHYSIOL, V44, P650
Ogihara N, 2001, BIOL CYBERN, V84, P1, DOI 10.1007/PL00007977
ORLOVSKY GN, 1999, NEURONAL CONTROL LOC, P22409
RUSSELL DF, 1979, BRAIN RES, V177, P588, DOI 10.1016/0006-8993(79)90478-5
Rybak IA, 2006, J PHYSIOL-LONDON, V577, P617, DOI 10.1113/jphysiol.2006.118703
Schillings AM, 2000, J NEUROPHYSIOL, V83, P2093
Schomburg ED, 1998, EXP BRAIN RES, V122, P339, DOI 10.1007/s002210050522
TAGA G, 1995, BIOL CYBERN, V73, P97, DOI 10.1007/BF00204048
TAGA G, 1994, PHYSICA D, V75, P190, DOI 10.1016/0167-2789(94)90283-6
VUKOBRATOVIC M, 1990, BIPED LOCOMOTION, P22409
Yamasaki T, 2003, BIOSYSTEMS, V71, P221, DOI 10.1016/S0303-2647(03)00118-7
Yamasaki T, 2003, BIOL CYBERN, V88, P468, DOI 10.1007/s00422-003-0402-1
MODEL DATABASE PHYS, P22409
NR 32
TC 22
Z9 22
U1 1
U2 14
PU AMER INST PHYSICS
PI MELVILLE
PA CIRCULATION & FULFILLMENT DIV, 2 HUNTINGTON QUADRANGLE, STE 1 N O 1,
MELVILLE, NY 11747-4501 USA
SN 1054-1500
J9 CHAOS
JI Chaos
PD JUN
PY 2009
VL 19
IS 2
AR 026103
DI 10.1063/1.3138725
PG 12
WC Mathematics, Applied; Physics, Mathematical
SC Mathematics; Physics
GA 465PR
UT WOS:000267599800029
PM 19566263
DA 2018-01-22
ER
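
Note: the record above assesses gait stability through Poincare maps of a periodically
forced nonlinear system. The Python sketch below illustrates that style of analysis on
a stand-in model (a damped, periodically driven pendulum, not the gait model of the
paper): it iterates the stroboscopic return map to a fixed point and checks whether the
eigenvalues of its finite-difference Jacobian lie inside the unit circle. All parameter
values are illustrative.

```python
"""Poincare-map stability check for a periodically forced nonlinear system.

Stand-in model: damped driven pendulum  th'' = -b th' - sin(th) + A cos(w t).
The stroboscopic Poincare map advances the state by one forcing period T.
A limit cycle corresponds to a fixed point of this map; it is locally stable
when the Jacobian's eigenvalues lie inside the unit circle.
"""
import numpy as np
from scipy.integrate import solve_ivp

B, A, W = 0.5, 0.5, 2.0 / 3.0          # damping, drive amplitude, drive frequency
T = 2.0 * np.pi / W                    # forcing period = Poincare return time

def rhs(t, y):
    th, om = y
    return [om, -B * om - np.sin(th) + A * np.cos(W * t)]

def poincare_map(y):
    sol = solve_ivp(rhs, (0.0, T), y, rtol=1e-9, atol=1e-9)
    return sol.y[:, -1]

# Iterate the map until transients die out -> approximate fixed point.
y = np.array([0.2, 0.0])
for _ in range(200):
    y = poincare_map(y)

# Finite-difference Jacobian of the return map at the fixed point.
eps = 1e-6
J = np.zeros((2, 2))
for j in range(2):
    dy = np.zeros(2)
    dy[j] = eps
    J[:, j] = (poincare_map(y + dy) - poincare_map(y - dy)) / (2.0 * eps)

eig = np.linalg.eigvals(J)
verdict = "stable" if np.all(np.abs(eig) < 1.0) else "unstable"
print("fixed point:", y)
print("|eigenvalues|:", np.abs(eig), "->", verdict)
```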

PT J
AU Mansard, N
Khatib, O
Kheddar, A
AF Mansard, Nicolas
Khatib, Oussama
Kheddar, Abderrahmane
TI A Unified Approach to Integrate Unilateral Constraints in the Stack of
Tasks
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Dynamics; humanoid robots; kinematics; redundant robots; visual servoing
ID ROBOT MANIPULATORS; SERVO CONTROL; REDUNDANCY; MECHANISMS; AVOIDANCE
AB The control approaches based on the task function formalism, and particularly
those structured as a prioritized hierarchy of tasks, enable complex behaviors with
elegant properties of robustness and portability to be built. However, it is
difficult to consider a straightforward integration of tasks described by
unilateral constraints in such frameworks. Indeed, unilateral constraints exhibit
irregularities that prevent the insertion of unilateral tasks at any priority
level, other than the lowest, of a hierarchy. In this paper, we present an original
method to generalize the hierarchy-based control schemes to account for unilateral
constraints at any priority level. We develop our method first for task sequencing
using only the kinematics description; then, we expand it to the task description,
using the operational space formulation. The method applies in robotics and
computer graphics animation. Its practical implementation is exemplified by
realizing a real-manipulator visual servoing task and a humanoid avatar reaching
task; both experiments are achieved under the unilateral constraints of joint
limits.
C1 [Mansard, Nicolas] Univ Toulouse, Lab Architecture & Anal Syst, Ctr Natl Rech
Sci, F-31077 Toulouse 4, France.
[Khatib, Oussama] Stanford Univ, Artificial Intelligence Lab, Dept Comp Sci,
Stanford, CA 94305 USA.
[Kheddar, Abderrahmane] Univ Montpellier 2, Lab Informat Robot & Microelect
Montpellier, Ctr Natl Rech Sci, F-34392 Montpellier 5, France.
[Kheddar, Abderrahmane] Natl Inst Adv Ind Sci & Technol, CNRS, Joint Robot Lab,
Collaborat Res Team,Int Mixed Unit 3218, Tsukuba, Ibaraki 3058568, Japan.
RP Mansard, N (reprint author), Univ Toulouse, Lab Architecture & Anal Syst, Ctr
Natl Rech Sci, F-31077 Toulouse 4, France.
EM nicolas.mansard@laas.fr; ok@cs.stanford.edu; kheddar@ieee.org
FU European Commission [FP6 034002]
FX Manuscript received January 25, 2008; revised June 13, 2008, November 7,
2008, and March 25, 2009. First published May 15, 2009; current version
published June 5, 2009. This paper was recommended for publication by
Associate Editor K. Yamane and Editors H. Arai and K. Lynch upon
evaluation of the reviewers' comments. This work was supported in part
by the European Commission under Contract FP6 034002 ROBOT@CWE.
CR Baerlocher P, 2004, VISUAL COMPUT, V20, P402, DOI 10.1007/s00371-004-0244-4
Ben-Israel A., 2003, GEN INVERSES THEORY
Bolder B, 2007, IEEE INT CONF ROBOT, P3054, DOI 10.1109/ROBOT.2007.363936
Chaumette F, 2001, IEEE T ROBOTIC AUTOM, V17, P719, DOI 10.1109/70.964671
Chaumette F, 2006, IEEE ROBOT AUTOM MAG, V13, P82, DOI 10.1109/MRA.2006.250573
Cheah CC, 2007, IEEE T ROBOT, V23, P1260, DOI 10.1109/TRO.2007.909808
Chiaverini S, 1997, IEEE T ROBOTIC AUTOM, V13, P398, DOI 10.1109/70.585902
Choset H. M., 2005, PRINCIPLES ROBOT MOT
Comport AI, 2006, IEEE T ROBOT, V22, P416, DOI 10.1109/TRO.2006.870666
DEO AS, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P434, DOI 10.1109/ROBOT.1992.220301
DOTY KL, 1993, INT J ROBOT RES, V12, P1, DOI 10.1177/027836499301200101
ESPIAU B, 1992, IEEE T ROBOTIC AUTOM, V8, P313, DOI 10.1109/70.143350
Garcia-Aracil N, 2005, IEEE T ROBOT, V21, P1214, DOI 10.1109/TRO.2005.855995
GIENGER M, 2006, P IROS BEIJ CHIN OCT, P2484
GLASS K, 1995, IEEE T ROBOTIC AUTOM, V11, P448, DOI 10.1109/70.388789
Hanafusa H, 1981, P IFAC 8 TRIENN WORL, V4
Hutchinson S, 1996, IEEE T ROBOTIC AUTOM, V12, P651, DOI 10.1109/70.538972
KHATIB O, 1986, INT J ROBOT RES, V5, P90, DOI 10.1177/027836498600500106
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
KUFFNER J, 2003, 11 INT S ROB RES SIE
LIEGEOIS A, 1977, IEEE T SYST MAN CYB, V7, P868
MALIS E, 2004, ICRA C NEW ORL LA AP
MANSARD N, 2008, P IEEE INT C ROB AUT, P3359
MANSARD N, IEEE T AUT IN PRESS
Mansard N, 2007, IEEE INT CONF ROBOT, P3041, DOI 10.1109/ROBOT.2007.363934
Mansard N, 2007, IEEE T ROBOT, V23, P60, DOI 10.1109/TRO.2006.889487
Marchand E, 1998, IEEE INT CONF ROBOT, P1988, DOI 10.1109/ROBOT.1998.680607
NAKAMURA Y, 1987, INT J ROBOT RES, V6, P3, DOI 10.1177/027836498700600201
NAKASHIMA H, 1986, Journal of Biological Rhythms, V1, P163, DOI
10.1177/074873048600100207
Padois V, 2007, ROBOTICA, V25, P157, DOI 10.1017/S0263574707003360
Park J., 2006, THESIS STANFORD U ST
Raunhardt D, 2007, IEEE INT CONF ROBOT, P4414, DOI 10.1109/ROBOT.2007.364159
REMAZEILLES A, 2006, IROS C BEIJ CHIN OCT
Samson C., 1991, ROBOT CONTROL TASK F
SENTIS L, 2005, ICRA C BARC SPAIN AP
Sentis L., 2007, THESIS STANFORD U ST
Sentis L, 2006, IEEE INT CONF ROBOT, P2641, DOI 10.1109/ROBOT.2006.1642100
SIAN N, 2007, IEEE T ROBOT, V23, P763
Siciliano B., 1991, P IEEE INT C ADV ROB, P1211
STASSE O, 2008, P IEEE INT C ROB AUT, P3200
Sugiura H., 2007, P IEEE RSJ INT C INT, P2053
Sung YW, 1996, J ROBOTIC SYST, V13, P275, DOI 10.1002/(SICI)1097-
4563(199605)13:5<275::AID-ROB2>3.0.CO;2-N
Tahri O, 2005, IEEE T ROBOT, V21, P1116, DOI 10.1109/TRO.2005.853500
YOSHIDA E, 2005, HUM C TSUK JAP NOV
YOSHIKAWA T, 1985, INT J ROBOT RES, V4, P3, DOI 10.1177/027836498500400201
NR 45
TC 81
Z9 81
U1 0
U2 7
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD JUN
PY 2009
VL 25
IS 3
SI SI
BP 670
EP 685
DI 10.1109/TRO.2009.2020345
PG 16
WC Robotics
SC Robotics
GA 456VA
UT WOS:000266878300020
DA 2018-01-22
ER
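
Note: the record above generalizes prioritized task-hierarchy control to unilateral
constraints. The Python sketch below shows only the classical equality-task core of
such a hierarchy (a primary end-effector task plus a secondary posture task projected
into its null space) for a hypothetical planar 3-DOF arm; the paper's
unilateral-constraint machinery is not reproduced.

```python
"""Two-level task hierarchy for a planar 3-link arm (kinematic sketch).

Priority 1: drive the end-effector toward a goal point.
Priority 2: keep the joints near a rest posture, projected into the null
space of task 1 so it never disturbs the primary task.
"""
import numpy as np

L = np.array([0.4, 0.3, 0.2])          # link lengths [m] (hypothetical)

def fk(q):
    s = np.cumsum(q)
    return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

def jacobian(q):
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(s[i:]))
    return J

def damped_pinv(J, lam=1e-3):
    return J.T @ np.linalg.inv(J @ J.T + lam * np.eye(J.shape[0]))

q = np.array([0.1, 0.2, 0.3])
goal = np.array([0.5, 0.4])
q_rest = np.array([0.0, 0.5, 0.5])      # secondary posture task

for _ in range(200):                    # simple resolved-rate iterations
    J = jacobian(q)
    Jp = damped_pinv(J)
    dq1 = Jp @ (goal - fk(q))           # priority-1 joint velocity
    N = np.eye(3) - Jp @ J              # null-space projector of task 1
    dq2 = N @ (0.5 * (q_rest - q))      # priority-2 motion, projected
    q = q + 0.1 * (dq1 + dq2)

print("end-effector:", fk(q), " target:", goal)
```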

PT J
AU Dominey, PF
Mallet, A
Yoshida, E
AF Dominey, Peter Ford
Mallet, Anthony
Yoshida, Eiichi
TI REAL-TIME SPOKEN-LANGUAGE PROGRAMMING FOR COOPERATIVE INTERACTION WITH A
HUMANOID APPRENTICE
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Real-time spoken language; cooperative interaction; humanoid apprentice
ID ROBOT
AB An apprentice is an able-bodied individual who will interactively assist an
expert and, through this interaction, acquire knowledge and skill in the given task
domain. A humanoid apprentice should have a useful repertoire of sensory-motor acts
that the human can command with spoken language, along with a real-time behavioral
sequence acquisition ability. The learned sequences should function as executable
procedures that can operate in a flexible manner and are not rigidly sensitive to
initial conditions. Our study integrates these capabilities in a real-time system on
the HRP-2 humanoid for learning a cooperative assembly task. We previously defined a
system for Spoken Language Programming (SLP) that allowed the user to guide the robot
through an arbitrary, task-relevant motor sequence via spoken commands, and to store
this sequence as a re-usable macro. Here, we significantly extend the SLP system: it
integrates vision and motion planning into the SLP framework, providing a new level of
flexibility in the actions that can be created; it allows the user to create "generic"
functions with arguments (e.g., "Give me X"); and it allows multiple functions to be
created.
C1 [Dominey, Peter Ford] INSERM, U846, F-69675 Lyon, France.
[Dominey, Peter Ford] CNRS, F-69675 Lyon, France.
[Mallet, Anthony] CNRS, LAAS, F-31077 Toulouse, France.
[Yoshida, Eiichi] CNRS, Natl Inst Adv Ind Sci & Technol, JRL, UMI 3218,CRT,
Tsukuba, Ibaraki 3058568, Japan.
RP Dominey, PF (reprint author), INSERM, U846, F-69675 Lyon, France.
EM Peter.Dominey@inserm.fr; anthony.mallet@laas.fr; Eiichi.yoshida@laas.fr
RI Yoshida, Eiichi/M-3756-2016; Dominey, Peter/H-3832-2012
OI Yoshida, Eiichi/0000-0002-3077-6964; Dominey, Peter/0000-0002-9318-179X
FU IST [215805]; EU [215805]; ANR
FX Supported by FP7 IST Grant 215805 CHRIS. This work was carried out in
the context of the AIST-CNRS JRL -the French/Japanese Joint Robotics
Laboratory. We, the authors, thank Jean-Paul Laumond, Co-Director of the
French-Japanese Joint Robotics Laboratory of the CNRS, for support and
comments on the ms. We also thank Hajime Saito of General Robotix for
invaluable assistance in technical aspects of the interface to OpenHRP.
Supported by EU IST Project 215805 CHRIS, and ANR Projects Comprendre
and Amorces.
CR Asoh H, 2001, IEEE INTELL SYST, V16, P46, DOI 10.1109/5254.956081
Bradski G. R., 1998, INTEL TECHNOLOGY J
CALINON S, 2006, P IEEE ICRA
Dominey PF, 2006, J COGNITIVE NEUROSCI, V18, P2088, DOI
10.1162/jocn.2006.18.12.2088
Dominey PF, 2005, ARTIF INTELL, V167, P31, DOI 10.1016/j.artint.2005.06.007
DOMINEY PF, 2003, P IEEE HUM ROB C KAR
DOMINEY PF, 2007, HUMANOID ROBOTS NEW, P185
DOMINEY PF, 2004, P IEEE RAS RSJ INT C
DOMINEY PF, 2007, P ICRA 2007 ROM
DOMINEY PF, 2005, P IEEE C HUM ROB
DOMINEY PF, 2008, IEEE RAS INT C HUM R
Goldberg AE, 2003, TRENDS COGN SCI, V7, P219, DOI 10.1016/S1364-6613(03)00080-9
Hoff B., 1992, CONTROL ARM MOVEMENT, P285
HUWEL S, 2006, P COLING ACL 2006 MA, P391
Iwahashi N, 2003, INFORM SCIENCES, V156, P109, DOI [10.1016/S0020-0255(03)00167-
1, 10.1016/S0020-0255(03)00167-0]
Kanehiro F, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1093, DOI
10.1109/IROS.2001.976314
KANEKO K, 2004, P IEEE 2004 INT C RO, V2, P1083
Kulic D, 2008, IEEE HUMANOIDS, P326
Kyriacou T, 2005, ROBOT AUTON SYST, V51, P69, DOI 10.1016/j.robot.2004.08.011
Lauria S, 2002, ROBOT AUTON SYST, V38, P171, DOI 10.1016/S0921-8890(02)00166-5
LEMON O, 2006, P EACL
MAVRIDIS N, 2006, P IEEE RSJ INT C INT
NICOLESCU MN, IEEE T SYS MAN CYBER, V31, P419
Roy D, 2004, IEEE T SYST MAN CY B, V34, P1374, DOI 10.1109/TSMCB.2004.823327
Shoemake K., 1985, ACM SIGGRAPH COMPUTE, V19, P245, DOI [DOI
10.1145/325165.325242, 10.1145/325334.325242]
ZOLLNER R, 2004, P IEEE RSJ INT C INT
NR 26
TC 15
Z9 15
U1 1
U2 3
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2009
VL 6
IS 2
BP 147
EP 171
DI 10.1142/S0219843609001711
PG 25
WC Robotics
SC Robotics
GA 472DL
UT WOS:000268109300001
DA 2018-01-22
ER
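
Note: the record above extends a Spoken Language Programming system with reusable,
parameterized macros. The toy Python sketch below illustrates only the
programming-by-sequencing idea; the actual SLP grammar, the HRP-2 software, and the
robot primitives are not modeled, and all names here are hypothetical.

```python
"""Toy 'spoken language programming' macro store (illustration only).

Commands arriving one at a time are appended to the current recording;
"X" acts as an argument slot, so a stored macro such as "give me X" can be
replayed with different objects bound to the slot.
"""
class MacroStore:
    def __init__(self):
        self.macros = {}
        self.recording = None
        self.buffer = []

    def start(self, name):
        self.recording, self.buffer = name, []

    def command(self, verb, obj):
        self.buffer.append((verb, obj))

    def stop(self):
        self.macros[self.recording] = list(self.buffer)
        self.recording = None

    def run(self, name, argument=None):
        for verb, obj in self.macros[name]:
            obj = argument if obj == "X" else obj   # bind the argument slot
            print(f"execute: {verb} {obj}")         # stand-in for a robot primitive


store = MacroStore()
store.start("give me X")
store.command("locate", "X")     # vision step with an open argument
store.command("grasp", "X")
store.command("hand-over", "X")
store.stop()

store.run("give me X", argument="green block")
```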

PT J
AU Nishiwaki, K
Kagami, S
AF Nishiwaki, K.
Kagami, S.
TI Online Walking Control System for Humanoids with Short Cycle Pattern
Generation
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article; Proceedings Paper
CT 10th International Symposium on Experimental Robotics (ISER)
CY JUL 06-12, 2006
CL Rio de Janeiro, BRAZIL
SP European Robot Res Network
DE humanoid robots; bipedal walking; preview control
AB This paper presents an online walking control system that frequently
generates and updates dynamically stable motion patterns with a cycle time of 20
ms. We show that frequently updating the motion pattern contributes to maintaining
long-term balance while performing online walking control. In addition, the system
enables a robot to respond quickly to changes in the commanded walking direction.
Using preview control theory, we generate dynamically stable walking patterns. We
propose a method to adjust the future desired zero moment point (ZMP) by modifying
the foot landing position in order to maintain the dynamic balance of the generated
motion pattern. This technique can be used to filter input commands that would
result in sudden changes to the foot landing position, which would result in
dynamic instability. The method is also used to compensate for errors between the
actual and desired ZMP due to disturbances encountered while walking. We also
present an extension of the short cycle pattern generation method that can
accommodate external forces measured online. Experimental results for activities
such as pushing a table are demonstrated on the full-size humanoid HRP-2 to
evaluate the performance of the proposed walking control system.
C1 [Nishiwaki, K.; Kagami, S.] Natl Inst Adv Ind Sci & Technol, Digital Human Res
Ctr, Koto Ku, Tokyo 1350064, Japan.
RP Nishiwaki, K (reprint author), Natl Inst Adv Ind Sci & Technol, Digital Human
Res Ctr, Koto Ku, 2-41-6 Aomi, Tokyo 1350064, Japan.
EM k.nishiwaki@aist.go.jp; s.kagami@aist.go.jp
RI Kagami, Satoshi/A-6841-2013
CR CHESTNUTT J, 2004, P IEEE RAS RSJ INT C
FUKUMOTO Y, 2004, P INT C INT ROB SYST, P1186
GUTMANN JS, 2005, P INT JOINT C ART IN, P1232
HARADA K, 2004, P IEEE INT C ROB AUT, P616
Hirai K., 1997, Proceedings of the 1997 IEEE/RSJ International Conference on
Intelligent Robot and Systems. Innovative Robotics for Real-World Applications.
IROS '97 (Cat. No.97CH36108), P500, DOI 10.1109/IROS.1997.655059
Inoue K, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P2259, DOI 10.1109/ROBOT.2002.1013568
Kagami S., 2000, P IEEE INT C HUM ROB
Kajita S, 2001, IEEE INT CONF ROBOT, P2299, DOI 10.1109/ROBOT.2001.932965
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kaneshima Y., 2001, P 19 ANN C ROB SOC J, P987
LOFFLER K, 2003, P IEEE INT C ROB AUT, P484
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
Nishiwaki K, 2001, IEEE INT CONF ROBOT, P4110, DOI 10.1109/ROBOT.2001.933260
NISHIWAKI K, 2003, P IEEE INT C ROB AUT, P911
Park I.-W., 2006, P IEEE INT C ROB AUT, P2667
SAKAGAMI Y, P IEEE RSJ INT C INT, P2478
Setiawan S, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P361, DOI 10.1109/ROBOT.1999.770005
Sugihara T., 2005, P 2005 IEEE INT C RO, P306
Takubo T, 2005, P 2005 IEEE INT C RO, P1718
TAKUBO T, 2004, P IEEE RSJ INT C INT, P1180
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Yamaguchi J, 1998, 1998 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS - PROCEEDINGS, VOLS 1-3, P96, DOI 10.1109/IROS.1998.724603
Yokoi K., 2001, P IEEE RAS INT C HUM, P259
NR 24
TC 35
Z9 35
U1 0
U2 7
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD JUN
PY 2009
VL 28
IS 6
BP 729
EP 742
DI 10.1177/0278364908097883
PG 14
WC Robotics
SC Robotics
GA 456FO
UT WOS:000266827400005
DA 2018-01-22
ER
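
Note: the pattern generator in the record above relies on the cart-table relation
between the center of mass (CoM) and the zero moment point (ZMP). The Python sketch
below evaluates p = x - (z_c/g) * x'' for a candidate CoM trajectory by finite
differences and checks whether the resulting ZMP stays inside a hypothetical support
interval; the paper's preview-control machinery itself is not reproduced, and all
numerical values are illustrative.

```python
"""Cart-table ZMP check for a candidate CoM trajectory (sketch only).

For the linear inverted-pendulum / cart-table model the sagittal ZMP is
    p(t) = x(t) - (z_c / g) * x''(t),
where x is the CoM position and z_c its constant height. Here x(t) is an
arbitrary smooth candidate (a cosine blend between two positions) and the
ZMP is obtained with central finite differences.
"""
import numpy as np

G, ZC = 9.81, 0.80                       # gravity [m/s^2], CoM height [m]
DT = 0.005                               # sample time [s]
T_STEP = 0.8                             # duration of the CoM transfer [s]

t = np.arange(0.0, T_STEP + DT, DT)
x0, x1 = 0.00, 0.20                      # CoM shifts 20 cm toward the next foothold
x = x0 + (x1 - x0) * 0.5 * (1.0 - np.cos(np.pi * t / T_STEP))  # smooth blend

xdd = np.gradient(np.gradient(x, DT), DT)     # numerical CoM acceleration
zmp = x - (ZC / G) * xdd                      # cart-table ZMP

support = (-0.15, 0.35)                       # hypothetical support interval [m]
ok = np.all((zmp >= support[0]) & (zmp <= support[1]))
print(f"ZMP range: [{zmp.min():+.3f}, {zmp.max():+.3f}] m  inside support: {ok}")
```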

PT J
AU Kubo, R
Ohnishi, K
AF Kubo, Ryogo
Ohnishi, Kouhei
TI Mechanical Recognition of Unknown Environment Using Active/Passive
Contact Motion
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article; Proceedings Paper
CT 9th International Workshop on Advanced Motion Control
CY MAR 27-29, 2006
CL Istanbul, TURKEY
SP IEEE Ind Elect Soc, IEEE Ind Applicat Soc, Soc Instrument & Control Engineers,
FESTO, KaleAltinay, Sabanci Univ, TUBITAK
DE Compliance control; discrete Fourier transform (DFT); disturbance
observer (DOB); environment recognition; environmental mode; haptics
ID HUMANOID ROBOT; LOCALIZATION; MANIPULATOR; INFORMATION; SYSTEMS; DESIGN
AB This paper presents a method to determine contact conditions between a planar
end-effector and the environment, i.e., face-to-face, face-to-line, or face-to-
point contact. First of all, two kinds of contact motion of a planar end-effector,
i.e., passive and active contact motions, are described. A compliance controller is
implemented to achieve the passive contact motion, and the "groping motion" is
generated as the active contact motion. Then, novel robot-friendly expressions of
the environment are proposed based on the concept of environmental modes. Discrete
Fourier transform matrices are utilized as matrices transforming environmental
information into environmental modes. In the experiments, a planar end-effector
contacts with the environment with the passive and active contact motions, and the
environmental data obtained from sensors are transformed into environmental modes.
The profiles of the extracted environmental modes are utilized to determine the
contact conditions. The validity of the proposed method is confirmed by the
experimental results.
C1 [Kubo, Ryogo; Ohnishi, Kouhei] Keio Univ, Dept Syst Design Engn, Yokohama,
Kanagawa 2238522, Japan.
RP Kubo, R (reprint author), Keio Univ, Dept Syst Design Engn, Yokohama, Kanagawa
2238522, Japan.
EM kubo@sum.sd.keio.ac.jp; ohnishi@sd.keio.ac.jp
RI Kubo, Ryogo/C-5021-2014; Ohnishi, Kouhei/A-6543-2011
CR Diolaiti N, 2005, IEEE T ROBOT, V21, P925, DOI 10.1109/TRO.2005.852261
Dollar AM, 2006, IEEE-ASME T MECH, V11, P154, DOI 10.1109/TMECH.2006.871090
Ferretti G, 2004, IEEE T ROBOTIC AUTOM, V20, P488, DOI 10.1109/TRA.2004.825472
Kageyama T., 2002, P 7 INT WORKSH ADV M, P74
Katsura S, 2004, IEEE T IND ELECTRON, V51, P221, DOI 10.1109/TIE.2003.821890
Katsura S., 2006, IEEJ T IND APPL, V126, P372
Katsura S, 2007, IEEE T IND ELECTRON, V54, P1537, DOI 10.1109/TIE.2007.894704
Katsura S, 2006, IEEE T IND ELECTRON, V53, P1688, DOI 10.1109/TIE.2006.881960
KUBO R, 2006, P 12 INT POW EL MOT, P367
KUBO R, 2006, P 9 IEEE INT WORKSH, P368
Lee D, 2006, IEEE T IND ELECTRON, V53, P1737, DOI 10.1109/TIE.2006.881949
Lefebvre T, 2005, IEEE T SYST MAN CY C, V35, P16, DOI 10.1109/TSMCC.2004.840053
Miyashita T., 2006, P IEEE RSJ INT C INT, P3468
Morita G, 2002, J ELECTROMYOGR KINES, V12, P37, DOI 10.1016/S1050-6411(01)00029-
3
Motoi N, 2007, IEEE T IND INFORM, V3, P154, DOI 10.1109/TII.2007.898469
MURAKAMI T, 1993, IEEE T IND ELECTRON, V40, P259, DOI 10.1109/41.222648
Ohashi E, 2007, IEEE T IND ELECTRON, V54, P1632, DOI 10.1109/TIE.2007.894728
Ohnishi K, 1996, IEEE-ASME T MECH, V1, P56, DOI 10.1109/3516.491410
Saeedi P, 2006, IEEE T ROBOT, V22, P119, DOI 10.1109/TRO.2005.858856
SATO H, 2000, P IEEE INT C IND EL, P2437
Senda K, 2001, 2001 IEEE INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE
IN ROBOTICS AND AUTOMATION, P444, DOI 10.1109/CIRA.2001.1013241
TAKAKURA S, 1989, P IEEE IECON NOV, V2, P421
Takeda T, 2007, IEEE T IND ELECTRON, V54, P699, DOI 10.1109/TIE.2007.891642
Tsuji T, 2007, IEEE T IND ELECTRON, V54, P3335, DOI 10.1109/TIE.2007.906175
VUKOBRATOVIC M, 1994, IEEE T IND ELECTRON, V41, P12, DOI 10.1109/41.281603
Wu CJ, 2001, J INTELL ROBOT SYST, V30, P267, DOI 10.1023/A:1008154910876
Zhu AM, 2007, IEEE T SYST MAN CY C, V37, P610, DOI 10.1109/TSMCC.2007.897499
NR 27
TC 16
Z9 16
U1 0
U2 4
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 0278-0046
EI 1557-9948
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD MAY
PY 2009
VL 56
IS 5
BP 1364
EP 1374
DI 10.1109/TIE.2008.2006936
PG 11
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 443CB
UT WOS:000265886700005
DA 2018-01-22
ER
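
Note: the record above maps contact information sampled around a planar end-effector
into "environmental modes" via discrete Fourier transform matrices. The Python sketch
below illustrates that modal view with synthetic force readings and hypothetical
contact patterns: the magnitude profile of the low-order DFT modes differs between
face, line, and point contact. It is not the paper's controller or sensor layout.

```python
"""DFT 'environmental modes' of contact forces sampled around an end-effector.

Synthetic normal-force readings at N points on the rim of a planar
end-effector are transformed with the DFT; the resulting mode magnitudes
serve as a crude signature of the contact condition (face-to-face,
face-to-line, face-to-point). Patterns here are hypothetical.
"""
import numpy as np

N = 8  # number of force-sensing points around the end-effector rim

contacts = {
    "face-to-face":  np.full(N, 1.0),                       # uniform support
    "face-to-line":  np.array([1, 1, 0, 0, 0, 0, 1, 1.0]),  # one edge loaded
    "face-to-point": np.eye(N)[0],                           # single corner loaded
}

for name, f in contacts.items():
    modes = np.fft.fft(f) / N                # DFT matrix applied to the readings
    mags = np.abs(modes)[: N // 2 + 1]       # keep the non-redundant modes 0..N/2
    print(f"{name:14s} mode magnitudes:", np.round(mags, 2))
```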

PT J
AU Sakamoto, D
Hayashi, K
Kanda, T
Shiomi, M
Koizumi, S
Ishiguro, H
Ogasawara, T
Hagita, N
AF Sakamoto, Daisuke
Hayashi, Kotaro
Kanda, Takayuki
Shiomi, Masahiro
Koizumi, Satoshi
Ishiguro, Hiroshi
Ogasawara, Tsukasa
Hagita, Norihiro
TI Humanoid Robots as a Broadcasting Communication Medium in Open Public
Spaces
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Humanoid robot; Field experiment; Broadcasting information service
AB This paper reports a method that uses humanoid robots as a communication medium.
Even though many interactive robots are being developed, their interactivity
remains much poorer than that of humans due to their limited perception abilities.
In our approach, the role of interactive robots is limited to that of a broadcasting
medium, in order to explore the best way of attracting people's interest in
information provided by robots. We propose using robots as a passive social medium,
in which they behave as
if they are talking together. We conducted an eight-day field experiment at a train
station to investigate the effects of such a passive social medium.
C1 [Sakamoto, Daisuke] Univ Tokyo, Grad Sch Informat Sci & Technol, Dept Comp Sci,
Bunkyo Ku, Tokyo 1130033, Japan.
[Sakamoto, Daisuke; Hayashi, Kotaro; Kanda, Takayuki; Shiomi, Masahiro; Hagita,
Norihiro] ATR Intelligent Robot & Commun Labs, Keihanna Sci City, Kyoto 6190288,
Japan.
[Hayashi, Kotaro; Ogasawara, Tsukasa] Nara Inst Sci & Technol, Ikoma City, Nara
6300192, Japan.
[Koizumi, Satoshi; Ishiguro, Hiroshi] Osaka Univ, Suita, Osaka 5650871, Japan.
RP Sakamoto, D (reprint author), Univ Tokyo, Grad Sch Informat Sci & Technol, Dept
Comp Sci, Bunkyo Ku, 7-3-1 Hongo, Tokyo 1130033, Japan.
EM d.sakamoto@gmail.com; hayashi.koutarou01@is.naist.jp; kanda@atr.jp;
m-shiomi@atr.jp; satoshi@ams.eng.osaka-u.ac.jp;
ishiguro@ams.eng.osaka-u.ac.jp; ogasawar@is.naist.jp; hagita@atr.jp
RI Kanda, Takayuki/I-5843-2016; Hayashi, Kotaro/F-8824-2017
OI Kanda, Takayuki/0000-0002-9546-5825; Hayashi,
Kotaro/0000-0001-7410-1945; SHIOMI, Masahiro/0000-0003-4338-801X
FU Ministry of Internal Affairs and Communications of Japan
FX We wish to thank the staff of the Kinki Nippon Railway Co., Ltd.
('Kintetsu') for their kind cooperation. This research was supported by
the Ministry of Internal Affairs and Communications of Japan.
CR Cassell J., 1999, P SIGCHI C HUM FACT, V99, P520
Dautenhahn K, 2002, P IEEE RSJ INT C INT, V2, P1132
Gockley R., 2006, P 1 ACM SIGCHI SIGAR, P168, DOI [10.1145/1121241.1121274, DOI
10.1145/1121241.1121274]
Hayashi K., 2005, P 2005 5 IEEE RAS IN, P456
Hayashi K., 2007, P ACM IEEE INT C HUM, P137
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T, 2004, HUM-COMPUT INTERACT, V19, P61, DOI 10.1207/s15327051hci1901&2_4
Kidd C. D., 2004, P INT C INT ROB SYST, V4, P3559, DOI [DOI
10.1109/IR0S.2004.1389967, 10.1109/IROS.2004.1389967]
Kozima H, 2005, P IEEE INT WORKSH RO, P341
Nakanishi H., 2003, P 2 INT JOINT C AUT, P717
Sakamoto D, 2005, INT J HUM-COMPUT ST, V62, P247, DOI
10.1016/j.ijhcs.2004.11.001
Sakamoto D., 2007, P ACM IEEE INT C HUM, P193, DOI DOI 10.1145/1228716.1228743
Sakamoto D., 2006, P 1 ACM SIGCHI SIGAR, P355
Shinozawa K, 2005, INT J HUM-COMPUT ST, V62, P267, DOI
10.1016/j.ijhcs.2004.11.003
Shiomi M., 2006, P 1 ACM SIGCHI SIGAR, P305, DOI DOI 10.1145/1121241.1121293
Siegwart R, 2003, ROBOT AUTON SYST, V42, P203, DOI 10.1016/S0921-8890(02)00376-7
Suzuki SV, 2004, IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON INTELLIGENT AGENT
TECHNOLOGY, PROCEEDINGS, P225, DOI 10.1109/IAT.2004.1342948
Takeuchi Y, 2002, T JPN SOC ARTIF INTE, V17, P439
Tanaka F, 2007, P NATL ACAD SCI USA, V104, P17954, DOI 10.1073/pnas.0707769104
Wada K, 2004, P IEEE, V92, P1780, DOI 10.1109/JPROC.2004.835378
WALSTER E, 1962, J ABNORM PSYCHOL, V65, P395, DOI 10.1037/h0041172
NR 21
TC 4
Z9 4
U1 0
U2 1
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD APR
PY 2009
VL 1
IS 2
BP 157
EP 169
DI 10.1007/s12369-009-0015-5
PG 13
WC Robotics
SC Robotics
GA V31ON
UT WOS:000208893000003
DA 2018-01-22
ER

PT J
AU Kurihara, K
Sugiyama, D
Matsumoto, S
Nishiuchi, N
Masuda, K
AF Kurihara, Kenzo
Sugiyama, Daisuke
Matsumoto, Shigeru
Nishiuchi, Nobuyuki
Masuda, Kazuaki
TI Facial emotion and gesture reproduction method for substitute robot of
remote person
SO COMPUTERS & INDUSTRIAL ENGINEERING
LA English
DT Article; Proceedings Paper
CT 35th International Conference on Computers and Industrial Engineering
CY JUN 19-22, 2005-2006
CL Istanbul, TURKEY
DE Facial emotion expression; Gesture reproduction; Substitute robot
AB CEOs of big companies may travel frequently to deliver their philosophies and
policies to the employees who are working at worldwide branches. Video technology
makes it possible to give their lectures anywhere and anytime in the world very
easily. However, 2-dimensional video systems lack a sense of reality. If we can give
natural, realistic lectures through humanoid robots, CEOs do not need to meet the
employees in person; they can save the time and money spent on traveling.
We propose a substitute robot of a remote person. The substitute robot is a
humanoid robot that can reproduce the lecturers' facial expressions and body
movements, and that can send the lecturers anywhere in the world instantaneously with
the feeling of being at a live performance. There are two major tasks for the
development: the facial expression recognition/reproduction and the body language
reproduction.
For the former task, we proposed a facial expression recognition method based on a
neural network model. We recognized five emotions, i.e., surprise, anger, sadness,
happiness and no emotion, in real time. We also developed a facial robot to reproduce
the recognized emotion on the robot's face. Through experiments, we showed that the
robot could reproduce the speakers' emotions with its face.
For the latter task, we proposed a degradation control method to reproduce the
natural movement of the lecturer even when a robot rotary joint fails. As the
fundamental stage of our research on this sub-system, we proposed a control method for
the front-view movement model, i.e., a 2-dimensional model. (C) 2008 Elsevier Ltd.
All rights reserved.
C1 [Kurihara, Kenzo; Sugiyama, Daisuke; Matsumoto, Shigeru; Masuda, Kazuaki]
Kanagawa Univ, Dept Informat Syst Creat, Kanagawa Ku, Yokohama, Kanagawa 2218686,
Japan.
[Nishiuchi, Nobuyuki] Tokyo Metropolitan Univ, Fac Syst Design, Tokyo 1910065,
Japan.
RP Kurihara, K (reprint author), Kanagawa Univ, Dept Informat Syst Creat, Kanagawa
Ku, 3-27-1 Rokkakubashi, Yokohama, Kanagawa 2218686, Japan.
EM kurihara@is.kanagawa-u.ac.jp
CR Breazeal C., 1999, P 16 INT JOINT C ART, P1146
CHOI Y, 2004, INT C ADV MECH EV FU, P46
EJIRI Y, 2003, TECHNICAL REPORT P I, V103, P13
Fujita M, 1998, AUTON ROBOT, V5, P7, DOI 10.1023/A:1008856824126
Graupe D., 2007, ADV SERIES CIRCUITS
HIRAI K, 1998, P IEEE INT C ROB AUT, V2, P1321
Ishida T, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1079, DOI
10.1109/IROS.2001.976312
Kauff P., 2002, P 4 INT C COLL VIRT, P105
Keltner D, 2000, HDB EMOTIONS, V2, P236
KIM JH, 2002, INT J HUMAN FRIENDLY, V2, P5
KOBAYASHI H, 1993, J SOC INSTRUMENT CON, V29, P112
KURIHARA K, 2003, P INT C 32 C IE, P206
Pantic M, 2000, IEEE T PATTERN ANAL, V22, P1424, DOI 10.1109/34.895976
SHIMODA H, 1999, J HUMAN INTERFACE SO, V1
Zhou CJ, 2004, ADV ROBOTICS, V18, P717, DOI 10.1163/1568553041719492
Zio E, 2007, SER QUAL RELIAB ENG, V13, P1
NR 16
TC 3
Z9 3
U1 0
U2 8
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0360-8352
EI 1879-0550
J9 COMPUT IND ENG
JI Comput. Ind. Eng.
PD MAR
PY 2009
VL 56
IS 2
SI SI
BP 631
EP 647
DI 10.1016/j.cie.2008.10.004
PG 17
WC Computer Science, Interdisciplinary Applications; Engineering,
Industrial
SC Computer Science; Engineering
GA 416XD
UT WOS:000264037900012
DA 2018-01-22
ER

PT J
AU Solis, J
Taniguchi, K
Ninomiya, T
Yamamoto, T
Takanishi, A
AF Solis, Jorge
Taniguchi, Koichi
Ninomiya, Takeshi
Yamamoto, Tetsuro
Takanishi, Atsuo
TI Refining the flute sound production of the Waseda flutist robot the
mechanical design of the artificial organs involved during the flute
playing
SO MECHANISM AND MACHINE THEORY
LA English
DT Article
DE Humanoid robots; Human-robot interaction; Music
AB Up to now, different kinds of musical performance robots (MPRs) and robotic
musicians (RMs) have been developed. MPRs are designed to closely reproduce the
required motor skills displayed by humans (i.e. anthropomorphic robots) in order to
play musical instruments. MPRs are then usually conceived as benchmarks to study
the human motor control from an engineering point of view and to better understand
the human-robot interaction from a musical standpoint. In contrast, RMs are
conceived as automated mechanisms designed for the creation of new ways of musical
expression from the musical engineering perspective. Our research on the
anthropomorphic flutist robot, at Waseda University, has been focused on clarifying
the human motor control while playing the flute, proposing novel applications for
humanoid robots and enabling the communication with humans at the emotional level
of perception. In this paper, we present the details of the development of the Waseda
flutist robot No. 4 Refined IV (WF-4RIV). The WF-4RIV is composed of 41 DOFs that
reproduce the anatomy and physiology of the organs involved during flute playing. In
particular, we describe the new mechanical design of the artificial organs, such as
the lips, tonguing, vibrato and lungs, which are related to the production of a clear
sound. A set of experiments was proposed to verify the effectiveness of each of the
mechanisms in imitating human flute playing. The experimental results show that the
WF-4RIV is able to produce a clear sound with smoother transitions between notes. (c)
2008 Elsevier Ltd. All rights reserved.
C1 [Solis, Jorge; Takanishi, Atsuo] Waseda Univ, Dept Modern Mech Engn, Tokyo
1698555, Japan.
[Taniguchi, Koichi; Ninomiya, Takeshi; Yamamoto, Tetsuro] Waseda Univ, Grad Sch
Sci & Engn, Tokyo 1698555, Japan.
[Solis, Jorge; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Tokyo
1698555, Japan.
RP Solis, J (reprint author), Waseda Univ, Dept Modern Mech Engn 59 306, Shinjuku
Ku, 3-4-1 Ookubo, Tokyo 1698555, Japan.
EM solis@kurenai.waseda.jp
CR DOYON A, 1966, JACQUES VAUCANSON
GARTNER J, 1981, VIBRATO PARTICULAR C, P79
GOTO S, 2007, P 16 IEEE INT C ROB, P775
HAYASHI E, 2006, P IEEE RSJ INT C INT, P7
Kato I., 1973, P CISM IFTOMM S THEO, P12
KUWABARA H, 2006, P IEEE RSJ INT C INT, P18
Mukai S., 1992, P INT S MUS AC, P239
SHERWOOD L, HUMAN PHYSL CELLS SY, P417
Shibuya K., 2007, P 16 IEEE INT C ROB, P763
Singh HK, 2004, J ADV OXID TECHNOL, V7, P184
SOLIS J, 2005, IEEE INT C INT ROB S, P1929
SOLIS J, 2006, P IEEE RSJ INT C INT, P24
SOLIS J, 2006, P 16 CISM IFTOMM S R, P247
SOLIS J, 2005, P INT WORKSH ROB HUM, P450
SOLIS J, 2006, INT J HUM ROBOT, V30, P127
Solis J, 2007, IEEE INT CONF ROBOT, P2552, DOI 10.1109/ROBOT.2007.363849
Solis J, 2006, COMPUT MUSIC J, V30, P12, DOI 10.1162/comj.2006.30.4.12
Sugano S., 1987, Proceedings of the 1987 IEEE International Conference on
Robotics and Automation (Cat. No.87CH2413-3), P90
Takashima S., 2006, P IEEE RSJ INT C INT, P30
Weinberg G., 2007, P 16 IEEE INT C ROB, P769
NR 20
TC 14
Z9 14
U1 1
U2 10
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0094-114X
J9 MECH MACH THEORY
JI Mech. Mach. Theory
PD MAR
PY 2009
VL 44
IS 3
BP 527
EP 540
DI 10.1016/j.mechmachtheory.2008.09.002
PG 14
WC Engineering, Mechanical
SC Engineering
GA 406KJ
UT WOS:000263293600003
DA 2018-01-22
ER

PT J
AU Inamura, T
Okada, K
Tokutsu, S
Hatao, N
Inaba, M
Inoue, H
AF Inamura, Tetsunari
Okada, Kei
Tokutsu, Satoru
Hatao, Naotaka
Inaba, Masayuki
Inoue, Hirochika
TI HRP-2W: A humanoid platform for research on support behavior in daily
life environments
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article; Proceedings Paper
CT 9th International Conference on Intelligent Autonomous Systems (IAS-9)
CY MAR 07-09, 2006
CL Univ Tokyo, Tokyo, JAPAN
HO Univ Tokyo
DE Human-robot interaction; Daily life environment; Humanoid robots;
Behavior learning and acquisition
ID ROBOT
AB We introduce a concept of a real-world-oriented humanoid robot that can support
humans' activities in daily life. In such environments, robots have to watch
humans, understand their behavior, and support their daily life tasks. In
particular, these robots must be capable of such real-world behavior as handling
tableware and delivering daily commodities by hand. We developed a humanoid robot,
HRP-2W, which has an upper body of HRP-2 [K. Kaneko, F. Kanehiro, S. Kajita, H.
Hirukawa, T. Kawasaki, M. Hirata, K. Akachi, T. Isozumi, Humanoid Robot HRP-2, in:
Proceedings of the 2004 IEEE International Conference on Robotics & Automation,
2004, pp. 1083-1090] and a wheel module instead of legs, as a research platform to
fulfill this aim. We also developed a basic software configuration in order to
integrate our platform with other research groups. Through experiments, we
demonstrated the feasibility of the humanoid robot platform and the potential of
the software architecture. (C) 2008 Elsevier B.V. All rights reserved.
C1 [Inamura, Tetsunari] Grad Univ Adv Studies, Natl Inst Informat, Chiyoda Ku,
Tokyo, Japan.
[Inamura, Tetsunari; Okada, Kei; Tokutsu, Satoru; Hatao, Naotaka; Inaba,
Masayuki] Univ Tokyo, Tokyo 1138654, Japan.
[Inoue, Hirochika] Japan Soc Promot Sci, Tokyo, Japan.
RP Inamura, T (reprint author), Grad Univ Adv Studies, Natl Inst Informat, Chiyoda
Ku, 2-1-2 Hitotsubashi, Tokyo, Japan.
EM inamura@nii.ac.jp
CR Bischoff R, 2003, SPRINGER TRAC ADV RO, V5, P64
HASHIMOTO S, 2000, P IEEE RAS INT C HUM
INAMURA T, 2005, P IEEE RAS INT C HUM, P469
INAMURA T, 2003, INT C HUM ROB
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Iwata H., 2005, P 2005 IEEE RSJ INT, P1785
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
KOJO N, 2006, P INT C INT AUT SYST, P875
Okada K., 2005, P IEEE INT C MECH AU, P1772
Thrun S, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P1999, DOI 10.1109/ROBOT.1999.770401
NR 10
TC 10
Z9 10
U1 0
U2 10
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD FEB 28
PY 2009
VL 57
IS 2
SI SI
BP 145
EP 154
DI 10.1016/j.robot.2008.10.014
PG 10
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 411YU
UT WOS:000263691200006
DA 2018-01-22
ER

PT J
AU Hyon, SH
AF Hyon, Sang-Ho
TI Compliant Terrain Adaptation for Biped Humanoids Without Measuring
Ground Surface and Contact Forces
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE Balance; compliance; contact force; humanoid robots; passivity-based
control; redundancy; terrain adaptation
ID MANIPULATORS; NEUROSCIENCE; ROBOTS
AB This paper reports the applicability of our passivity-based contact force
control framework for biped humanoids. We experimentally demonstrate its adaptation
to unknown rough terrain. Adaptation to uneven ground is achieved by optimally
distributed antigravitational forces applied to preset contact points in a
feedforward manner, even without explicitly measuring the external forces or the
terrain shape. Adaptation to unknown inclination is also possible by combining an
active balancing controller based on the center-of-mass (CoM) measurements with
respect to the inertial frame. Furthermore, we show that a simple impedance
controller for supporting the feet or hands allows the robot to adapt to low-
friction ground without prior knowledge of the ground friction. This presentation
includes supplementary experimental videos that show a full-sized biped humanoid
robot balancing on uneven ground or time-varying inclination.
C1 [Hyon, Sang-Ho] Adv Telecommun Res Inst Int ATR, Computat Neurosci Lab, Kyoto
6190288, Japan.
[Hyon, Sang-Ho] Japan Sci & Technol Agcy, Int Cooperat Res Project ICORP,
Computat Brain Project, Kawaguchi, Saitama 3320012, Japan.
RP Hyon, SH (reprint author), Adv Telecommun Res Inst Int ATR, Computat Neurosci
Lab, Kyoto 6190288, Japan.
EM sangho@atr.jp
FU National Institute of Information and Communications Technology (NICT),
Japan
FX This work was supported by the National Institute of Information and
Communications Technology (NICT), Japan.
CR Arimoto S, 2005, ASIAN J CONTROL, V7, P112
Boone GN, 1997, AUTON ROBOT, V4, P259, DOI 10.1023/A:1008891909459
Borst C, 2007, IEEE INT CONF ROBOT, P2766, DOI 10.1109/ROBOT.2007.363886
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
DELUCA A, 1999, P 3 IFAC S ROB CONTR, P93
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirukawa H, 2007, IEEE INT CONF ROBOT, P2181, DOI 10.1109/ROBOT.2007.363644
HOGAN DP, 1985, CURRENT PERSPECTIVES, V0001
HOLLERBACH JM, 1987, IEEE T ROBOTIC AUTOM, V3, P308, DOI
10.1109/JRA.1987.1087111
HYON S, 2006, P IEEE RAS INT C HUM, P214
Hyon SH, 2007, IEEE T ROBOT, V23, P884, DOI 10.1109/TRO.2007.904896
Hyon SH, 2007, IEEE INT CONF ROBOT, P2668, DOI 10.1109/ROBOT.2007.363868
Kajita S, 2004, P IEEE RSJ INT C INT, P3546
Kawato M, 2008, PHILOS T R SOC B, V363, P2201, DOI 10.1098/rstb.2008.2272
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
KUROKI Y, 2003, P 2003 IEEE INT C RO, P471
Murray RM, 1994, MATH INTRO ROBOTIC M
Pratt J, 2001, INT J ROBOT RES, V20, P129, DOI 10.1177/02783640122067309
Sentis L., 2005, INT J HUM ROBOT, V2, P505, DOI DOI 10.1142/S0219843605000594
Yamaguchi J., 1996, P IEEE INT C ROB AUT, V1, P232
NR 20
TC 45
Z9 46
U1 3
U2 12
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD FEB
PY 2009
VL 25
IS 1
BP 171
EP 178
DI 10.1109/TRO.2008.2006870
PG 8
WC Robotics
SC Robotics
GA 405LJ
UT WOS:000263226000017
DA 2018-01-22
ER
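
Note: the record above distributes antigravitational forces over preset contact points
without measuring the terrain or the contact forces. The Python sketch below shows a
minimal planar version of that distribution step (minimum-norm vertical forces
satisfying force and moment balance, via the pseudoinverse); the mass, the contact
coordinates, and the omission of friction and non-negativity constraints are all
simplifying assumptions, not the paper's full controller.

```python
"""Minimum-norm distribution of vertical contact forces (planar sketch).

Given preset contact points x_i (sagittal coordinates) and the CoM ground
projection, find vertical forces f_i that (a) support the robot's weight and
(b) produce zero net moment about the CoM projection. The pseudoinverse
picks the minimum-norm solution; friction is ignored in this sketch.
"""
import numpy as np

M, G = 60.0, 9.81                       # robot mass [kg], gravity (hypothetical)
x_com = 0.05                            # CoM ground projection [m]
x_c = np.array([-0.10, 0.00, 0.15, 0.25])   # preset contact point coordinates [m]

# Constraints:  sum f_i = M*g   and   sum (x_i - x_com) * f_i = 0
A = np.vstack([np.ones_like(x_c), x_c - x_com])
b = np.array([M * G, 0.0])

f = np.linalg.pinv(A) @ b               # minimum-norm feedforward forces
print("forces [N]   :", np.round(f, 1))
print("sum check    :", round(float(np.sum(f)), 1), "=", round(M * G, 1))
print("moment check :", round(float(np.sum((x_c - x_com) * f)), 6))
```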

PT J
AU Chalodhorn, R
MacDorman, KF
Asada, M
AF Chalodhorn, Rawichote
MacDorman, Karl F.
Asada, Minoru
TI Humanoid Robot Motion Recognition and Reproduction
SO ADVANCED ROBOTICS
LA English
DT Article
DE Automatic segmentation; humanoid robot; motion learning; nonlinear
principal component analysis; pattern recognition
ID NEURAL-NETWORKS; DIMENSIONALITY
AB Humanoid robots have become appealing to the research community because of their
potential versatility. However, traditional programming approaches may not reveal
their full capabilities. Thus, an important goal is to develop a humanoid robot
that can learn to perform complex tasks by itself. This paper proposes a method to
recognize and regenerate motion in a humanoid robot. We demonstrate how a sequence
of high-dimensional motion data can be automatically segmented into abstract action
classes. The sequence from a 25-d.o.f. humanoid robot performing a ball tracking
task is reduced to its intrinsic dimensionality by nonlinear principal component
analysis (NLPCA). The motion data is then segmented automatically by incrementally
generating NLPCA networks with a circular constraint and assigning to these
networks data points according to their temporal order in a conquer-and-divide
fashion. Repeated motion patterns are removed based on their proximity to similar
motion patterns in the reduced sensorimotor space to derive a nonredundant set of
abstract actions. The networks abstracted five motion patterns without any prior
information about the number or type of motion patterns. We ensured the motion
reproduction by employing a motion optimization algorithm based on the learning of
the sensorimotor mapping in the low-dimensional space. (C) Koninklijke Brill NV,
Leiden and The Robotics Society of Japan, 2009
C1 [Chalodhorn, Rawichote] Univ Washington, Dept Comp Sci & Engn, Neural Syst Lab,
Seattle, WA 98195 USA.
[MacDorman, Karl F.] Indiana Univ, Sch Informat, Indianapolis, IN 46202 USA.
[Asada, Minoru] Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst, Suita, Osaka
5650871, Japan.
RP Chalodhorn, R (reprint author), Univ Washington, Dept Comp Sci & Engn, Neural
Syst Lab, Seattle, WA 98195 USA.
EM choppy@cs.washington.edu
CR Barbic J, 2004, PROC GRAPH INTERF, P185
Chalodhorn R, 2006, IEEE INT CONF ROBOT, P3693, DOI 10.1109/ROBOT.2006.1642266
Hinton GE, 2006, SCIENCE, V313, P504, DOI 10.1126/science.1127647
Ijspeert AJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P752, DOI
10.1109/IROS.2001.976259
Inamura T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1539, DOI 10.1109/ROBOT.2002.1014762
Jenkins O., 2003, P 2 INT JOINT C AUT, DOI [10.1145/860575.860612, DOI
10.1145/860575.860612]
Kirby MJ, 1996, NEURAL COMPUT, V8, P390, DOI 10.1162/neco.1996.8.2.390
KRAMER MA, 1991, AICHE J, V37, P233, DOI 10.1002/aic.690370209
LANG KJ, 1990, NEURAL NETWORKS, V3, P23, DOI 10.1016/0893-6080(90)90044-L
MacDorman KF, 1998, BEHAV BRAIN SCI, V21, P32, DOI 10.1017/S0140525X98420107
MacDorman K. F., 1999, J ROBOTICS SOC JAPAN, V17, P20
Malthouse EC, 1998, IEEE T NEURAL NETWOR, V9, P165, DOI 10.1109/72.655038
MICHEL O, 1998, LNCS LNAI, V1434, P254
Okada M, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1410, DOI 10.1109/ROBOT.2002.1014741
Saul L K, 2003, J MACHINE LEARNING R, V4, P119, DOI DOI
10.1162/153244304322972667
SMART WD, 2000, P 17 INT C MACH LEAR, P903
Sutton RS, 1998, REINFORCEMENT LEARNI
TAKAHASHI Y, 2000, IEEE RSJ INT C INT R, V1, P395
TAKAHASHI Y, 2003, P IEEE RSJ INT C INT, V1, P686
Tenenbaum JB, 2000, SCIENCE, V290, P2319, DOI 10.1126/science.290.5500.2319
Zatsiorsky V. M., 1997, KINEMATICS HUMAN MOT
NR 21
TC 8
Z9 8
U1 0
U2 2
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2009
VL 23
IS 3
BP 349
EP 366
DI 10.1163/156855308X397569
PG 18
WC Robotics
SC Robotics
GA 421XF
UT WOS:000264390900005
DA 2018-01-22
ER

PT J
AU Okada, K
Kojima, M
Tokutsu, S
Mori, Y
Maki, T
Inaba, M
AF Okada, Kei
Kojima, Mitsuharu
Tokutsu, Satoru
Mori, Yuto
Maki, Toshiaki
Inaba, Masayuki
TI Integrating Recognition and Action Through Task-Relevant Knowledge for
Daily Assistive Humanoids
SO ADVANCED ROBOTICS
LA English
DT Article; Proceedings Paper
CT 25th Annual Conference of the Robotics-Society-of-Japan
CY SEP 13-15, 2007
CL Chiba Inst Technol, Chiba, JAPAN
SP Robot Soc Japan
HO Chiba Inst Technol
DE Humanoid; robot system; visual recognition; task-relevant knowledge;
daily assistive tasks
AB This paper describes the design of a humanoid robot system that can perform
daily assistive tasks. The significant issue in realizing a daily assistive
humanoid robot system is to integrate an action subsystem and a recognition
subsystem. We have designed task-relevant knowledge that is referenced by both the
action and recognition subsystems. The knowledge-based subsystem contains not only
information on the environment and object shapes, but also knowledge related to
manipulation and navigation, as well as to object recognition and robot
localization. Since motion generation and the object recognition procedure
share the same knowledge base system, the robot is capable of planning motion while
recognizing objects and vice versa. This capability increases the effectiveness and
robustness of the system. Three vision-guided behavior controls are presented
during the execution of an action: (i) a visual self-localization to recognize the
position of the robot, (ii) a visual object localization to update the object
location in the model world to generate behaviors and (iii) a visual behavior
verification to confirm the success of the motion. Finally, we demonstrated a
kitchen service task by multiple humanoid robots to prove the broad capability and
applicability of the proposed representation. The knowledge descriptions required
for this demonstration consist of 13 behaviors, six objects with manipulation and
visual feature knowledge, three search areas for the recognition process and two
task-relevant visual behavior verification knowledge bases. We found that the
description required for a kitchen service task is rather simple compared to the
complexity of the demo scenario. (c) Koninklijke Brill NV, Leiden and The Robotics
Society of Japan, 2009
C1 [Okada, Kei; Kojima, Mitsuharu; Tokutsu, Satoru; Mori, Yuto; Maki, Toshiaki;
Inaba, Masayuki] Univ Tokyo, Grad Sch Informat Sci & Technol, Bunkyo Ku, Tokyo
1318656, Japan.
RP Okada, K (reprint author), Univ Tokyo, Grad Sch Informat Sci & Technol, Bunkyo
Ku, 7-3-1 Hongo, Tokyo 1318656, Japan.
EM k-okada@jsk.t.u-tokyo.ac.jp
CR Asfour T., 2006, P IEEE RAS INT C HUM, P169
A Edsinger, 2006, P IEEE RSJ INT C HUM, P102
Gravot F, 2006, IEEE INT CONF ROBOT, P462, DOI 10.1109/ROBOT.2006.1641754
INOUE H, 2000, P 1 IEEE RAS INT C H
Isard M, 1998, INT J COMPUT VISION, V29, P5, DOI 10.1023/A:1008078328650
Kemp CC, 2007, IEEE ROBOT AUTOM MAG, V14, P20, DOI 10.1109/MRA.2007.339604
KITAGAWA G, 1996, J COMPUTATIONAL GRAP, V5, P1, DOI DOI 10.2307/1390750
NEO ES, 2006, P INT C ROB AUT, P4437
Okada K., 2006, P 6 IEEE RAS INT C H, P7
OKADA K, 2007, P 2007 IEEE RSJ INT, P3217
Okada K., 2005, P IEEE INT C MECH AU, P1772
Perez P., 2002, P EUR C COMP VIS, P661
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
SATO T, 2006, P 9 INT C INT AUT SY, P19
Shi J., 1994, P IEEE C COMP VIS PA, P593, DOI DOI 10.1109/CVPR.1994.323794
Tomasi C., 1991, CMUCS91132
Vermaak J, 2003, NINTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS I
AND II, PROCEEDINGS, P1110
Zollner R., 2004, P IROS SEND JAP, P479
NR 18
TC 4
Z9 4
U1 0
U2 1
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2009
VL 23
IS 4
BP 459
EP 480
DI 10.1163/156855309X408835
PG 22
WC Robotics
SC Robotics
GA 421XG
UT WOS:000264391000006
DA 2018-01-22
ER

PT J
AU Kim, HD
Komatani, K
Ogata, T
Okuno, HG
AF Kim, Hyun-Don
Komatani, Kazunori
Ogata, Tetsuya
Okuno, Hiroshi G.
TI Human Tracking System Integrating Sound and Face Localization Using an
Expectation-Maximization Algorithm in Real Environments
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robot; human-robot interaction; sound source localization; face
localization; human tracking
AB We have developed a human tracking system for use by robots that integrate sound
and face localization. Conventional systems usually require many microphones and/or
prior information to localize several sound sources. Moreover, they are incapable
of coping with various types of background noise. In our system, cross-power
spectrum phase analysis of the sound signals obtained with only two microphones is
used to localize the sound source without having to use prior information such as
impulse response data. An expectation-maximization (EM) algorithm is used to help
the system cope with several moving sound sources. The problem of distinguishing
whether sounds are coming from the front or back is also solved with only two
microphones by rotating the robot's head. A developed method that uses facial skin
colors classified by another EM algorithm enables the system to detect faces in
various poses. It can compensate for the error in the sound localization for a
speaker and also identify noise signals entering from undesired directions by
detecting a human face. A developed probability-based method is used to integrate
the auditory and visual information in order to produce a reliable tracking path in
real time. Experiments using a robot showed that our system can localize two sounds
at the same time and track a communication partner while dealing with various types
of background noise. (c) Koninklijke Brill NV, Leiden and The Robotics Society of
Japan, 2009
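As an illustrative aside (not part of the record above): the cross-power spectrum phase (CSP)
analysis mentioned in the abstract is commonly implemented as below for two-microphone delay
estimation. The microphone spacing, sampling rate and test signal are assumptions, not the
robot's actual parameters.

import numpy as np

def csp_tdoa(left, right, fs):
    """Estimate the inter-microphone time delay with cross-power spectrum phase
    (CSP, also known as GCC-PHAT). Illustrative sketch, not the authors' code."""
    n = left.size + right.size
    L = np.fft.rfft(left, n=n)
    R = np.fft.rfft(right, n=n)
    cross = L * np.conj(R)
    cross /= np.abs(cross) + 1e-12            # phase transform: keep only the phase
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(cc) - max_shift) / fs   # delay in seconds

def azimuth_from_tdoa(tau, mic_distance=0.18, c=343.0):
    """Convert the delay into a bearing; the front/back ambiguity of two
    microphones remains, which is why the paper rotates the robot's head."""
    s = np.clip(c * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# hypothetical usage: a 1 kHz tone delayed by 3 samples between the microphones
fs = 16000
t = np.arange(2048) / fs
right = np.sin(2 * np.pi * 1000 * t)
left = np.roll(right, 3)
print(azimuth_from_tdoa(csp_tdoa(left, right, fs)))
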
C1 [Kim, Hyun-Don; Komatani, Kazunori; Ogata, Tetsuya; Okuno, Hiroshi G.] Kyoto
Univ, Grad Sch Informat, Dept Intelligence Sci & Technol, Sakyo Ku, Kyoto 6068501,
Japan.
RP Kim, HD (reprint author), Kyoto Univ, Grad Sch Informat, Dept Intelligence Sci &
Technol, Sakyo Ku, Yoshida Honmachi, Kyoto 6068501, Japan.
EM hyundon@kuis.kyoto-u.ac.jp
OI Ogata, Tetsuya/0000-0001-7015-0379; Okuno, Hiroshi/0000-0002-8704-4318
FU MEXT; Grant-in-Aid for Scientific Research; Global COE program of MEXT,
Japan
FX This research was partially supported by MEXT, Grant-in-Aid for
Scientific Research, and Global COE program of MEXT, Japan.
CR Blauert J, 1996, SPATIAL HEARING PSYC
Cheng CI, 2001, J AUDIO ENG SOC, V49, P231, DOI 10.1016/S0022-5096(00)00039-9
Freund Y, 1997, J COMPUT SYST SCI, V55, P119, DOI 10.1006/jcss.1997.1504
Hara I., 2004, P IEEE RSJ INT C INT, P2404
HUA C, 2006, P 18 INT C PATT REC, P739
HUANG J, 1998, P IEEE INSTR MEAS TE, P330
Hwang S., 2005, P INT C CONTR AUT SY, P751
KIM HD, 2007, P IEEE ROMAN AUG, P399
KIM HD, 2006, P IEEE INT C ROB AUT, P1305
Lax P., 1989, SCATTERING THEORY
Moon TK, 1996, IEEE SIGNAL PROC MAG, V13, P47, DOI 10.1109/79.543975
Nakadai K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1147
Nakadai K., 2001, P INT JOINT C ART IN, P1425
NISHIURA T, 2000, P ICASSP2000, P1053
Okuno HG, 2004, INTERNATIONAL CONFERENCE ON INFORMATICS RESEARCH FOR DEVELOPMENT
OF KNOWLEDGE SOCIETY INFRASTRUCTURE, PROCEEDINGS, P73, DOI
10.1109/ICKS.2004.1313411
SCHMIDT RO, 1986, IEEE T ANTENN PROPAG, V34, P276, DOI 10.1109/TAP.1986.1143830
Tasaki T, 2004, P INT WORKSH ROB HUM, P81
VEZHNEVETS V, 2003, P GRAPH 2003 MOSC RU, P85
NR 18
TC 6
Z9 6
U1 0
U2 3
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2009
VL 23
IS 6
BP 629
EP 653
DI 10.1163/156855309X431659
PG 25
WC Robotics
SC Robotics
GA 442DW
UT WOS:000265821200001
DA 2018-01-22
ER

PT J
AU Tsetserukou, D
Kawakami, N
Tachi, S
AF Tsetserukou, Dzmitry
Kawakami, Naoki
Tachi, Susumu
TI iSoRA: Humanoid Robot Arm for Intelligent Haptic Interaction with the
Environment
SO ADVANCED ROBOTICS
LA English
DT Article; Proceedings Paper
CT 1st International Conference Intelligent Robotics and Applications
CY OCT 15-17, 2008
CL Wuhan, PEOPLES R CHINA
SP Huazhong Univ, Sci & Technol, Natl Nat Sci Fdn China, HUST, Sch Mech Sci & Engn,
State Key Lab, Digital Mfg Equipment & Technol, State Key Lab Robot, State Key Lab,
Mech Syst & Vibrat
DE Humanoid robot arm; variable admittance control; human-robot
interaction; obstacle avoidance; optical torque sensor
ID IMPEDANCE CONTROL; MANIPULATORS
AB The paper concentrates on the development and control of the humanoid robot arm
iSoRA, intended for operation in a dynamic unstructured environment. Optical torque
sensors integrated into each joint enable measurement of contacting forces along
the entire manipulator surface. A variable admittance control strategy was
elaborated to increase the robot functionality and to achieve the human-like
dynamics of interaction. The experimental results show that the proposed approach
not only provides safe interaction of the robot arm with a person, but also
improves the effectiveness of contact task performance. The paper also presents a
novel concept of avoidance of an obstacle of unknown shape. The tactile sensory
ability of the developed manipulator allows robot links to follow the object
contour and to perform motion planning in the dynamic environment. The information
on the applied normal force vector, object shape and target point coordinates is
supplied to the motion planning system. The algorithms for contact point detection,
object geometry recognition, and estimation of contacting object stiffness are
detailed. The numerical simulation demonstrates the capability of the proposed method to
approximate various object shapes precisely. The experimental results showed that
the local admittance control and motion planner allowed the end-effector to follow
the object contour in a very smooth, consistent manner while reaching the target
point. (C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009
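As an illustrative aside (not part of the record above): an admittance controller maps a
measured contact force to a commanded velocity through a virtual mass-damper model. The
discrete-time sketch below, including the simple force-dependent damping rule, is a
hypothetical illustration of the variable-admittance idea, not the authors' control law; all
gains are assumptions.

import numpy as np

def admittance_step(v, f_ext, m, d, dt):
    """One integration step of the admittance model  m*dv/dt + d*v = f_ext,
    returning the commanded velocity (illustrative sketch)."""
    dv = (f_ext - d * v) / m
    return v + dv * dt

def variable_damping(f_ext, d_min=5.0, d_max=40.0, f_scale=10.0):
    """Hypothetical variable-admittance rule: stay compliant (low damping) for
    light contacts and stiffen as the contact force grows."""
    return d_min + (d_max - d_min) * np.tanh(abs(f_ext) / f_scale)

# hypothetical usage: a constant 8 N contact force pushing the arm
v, dt = 0.0, 0.001
for _ in range(1000):
    f = 8.0
    v = admittance_step(v, f, m=2.0, d=variable_damping(f), dt=dt)
print(v)   # settles near f / d, i.e. the arm yields smoothly to the contact
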
C1 [Tsetserukou, Dzmitry; Kawakami, Naoki; Tachi, Susumu] Univ Tokyo, Dept Informat
Phys & Comp, Bunkyo Ku, Tokyo 1138656, Japan.
RP Tsetserukou, D (reprint author), Univ Tokyo, Dept Informat Phys & Comp, Bunkyo
Ku, 7-3-1 Hongo, Tokyo 1138656, Japan.
EM dima_teterukov@ipc.i.u-tokyo.ac.jp
CR Bauer J, 1996, IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING
WITH DYNAMIC WORLDS, VOLS 1-3, P560, DOI 10.1109/IROS.1996.570851
Bicchi A, 2004, IEEE ROBOT AUTOM MAG, V11, P22, DOI 10.1109/MRA.2004.1310939
Caccavale F, 1999, IEEE T ROBOTIC AUTOM, V15, P289, DOI 10.1109/70.760350
CARELLI R, 1991, IEEE T AUTOMAT CONTR, V36, P967, DOI 10.1109/9.133190
Caselli S, 1996, IEEE INT CONF ROBOT, P3508, DOI 10.1109/ROBOT.1996.509247
Dubey RV, 1997, IEEE CONTR SYST MAG, V17, P37, DOI 10.1109/37.569713
FEDDEMA JT, 1994, IEEE INT CONF ROBOT, P3303, DOI 10.1109/ROBOT.1994.351062
Ferris DP, 1997, J APPL PHYSIOL, V82, P15
Hirzinger G, 2001, IEEE INT CONF ROBOT, P3356, DOI 10.1109/ROBOT.2001.933136
Hogan N., 1985, ASME, V107, P1, DOI DOI 10.1115/1.3140702
Iwata H., 2005, P 2005 IEEE RSJ INT, P1785
KANEKO M, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P1289, DOI 10.1109/ROBOT.1992.220071
KLATZKY RL, 1990, DEXTROUS ROBOT HANDS, P66
LOZANOPEREZ T, 1987, IEEE T ROBOTIC AUTOM, V3, P224, DOI
10.1109/JRA.1987.1087095
LUMELSKY V, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-3, P797, DOI 10.1109/ROBOT.1991.131684
Lumelsky VJ, 2001, IEEE SENS J, V1, P41, DOI 10.1109/JSEN.2001.923586
Mitsunaga N., 2006, P IEEE RSJ INT C INT, P5066
Morikawa S, 2007, IEEE INT CONF ROBOT, P794, DOI 10.1109/ROBOT.2007.363083
Okamura AM, 1997, IEEE INT CONF ROBOT, P2485, DOI 10.1109/ROBOT.1997.619334
RAHMAN MM, 1999, P IEEE INT C SYST MA, P676
RAO AS, 1994, INT J ROBOT RES, V13, P16, DOI 10.1177/027836499401300102
Tsetserukou D., 2008, SENSORS FOCUS TACTIL, P15, DOI DOI 10.5772/6614
TSETSERUKOU D, 2006, J ROBOTICS MECHATRON, V18, P121
Tsumugiwa T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P644, DOI 10.1109/ROBOT.2002.1013431
VOLPE R, 1993, INT J ROBOT RES, V12, P351, DOI 10.1177/027836499301200403
WALKER ID, 1994, IEEE T ROBOTIC AUTOM, V10, P670, DOI 10.1109/70.326571
Yamaguchi F., 1988, CURVES SURFACES COMP
YOSHIKAWA T, 2008, P IEEE INT C ROB AUT, P299
NR 28
TC 6
Z9 6
U1 0
U2 8
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2009
VL 23
IS 10
BP 1327
EP 1358
DI 10.1163/156855309X462619
PG 32
WC Robotics
SC Robotics
GA 495XG
UT WOS:000269928200007
DA 2018-01-22
ER

PT J
AU Kajita, S
Sugihara, T
AF Kajita, Shuuji
Sugihara, Tomomichi
TI Humanoid Robots in the Future
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robot; biped locomotion; specific resistance
C1 [Kajita, Shuuji] AIST, Tsukuba, Ibaraki 3058568, Japan.
[Sugihara, Tomomichi] Kyushu Univ, Nishi Ku, Fukuoka 8190395, Japan.
RP Kajita, S (reprint author), AIST, 1-1-1 Umezono, Tsukuba, Ibaraki 3058568,
Japan.
EM s.kajita@aist.go.jp
RI Kajita, Shuuji/M-5010-2016
OI Kajita, Shuuji/0000-0001-8188-2209
CR Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
MCNEILL R, 1992, EXPLORING BIOMECHANI
*MIN EC TRAD IND, 2008, AD ROADM ROB TECHN 2
*MIN EC TRAD IND, 2007, AC ROADM ROB TECHN 2
NR 4
TC 2
Z9 2
U1 0
U2 3
PU VSP BV
PI LEIDEN
PA BRILL ACADEMIC PUBLISHERS, PO BOX 9000, 2300 PA LEIDEN, NETHERLANDS
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2009
VL 23
IS 11
BP 1527
EP 1531
DI 10.1163/016918609X12469692711804
PG 5
WC Robotics
SC Robotics
GA 496KS
UT WOS:000269973400019
DA 2018-01-22
ER

PT J
AU Solis, J
Taniguchi, K
Ninomiya, T
Petersen, K
Yamamoto, T
Takanishi, A
AF Solis, Jorge
Taniguchi, Koichi
Ninomiya, Takeshi
Petersen, Klaus
Yamamoto, Tetsuro
Takanishi, Atsuo
TI Implementation of an Auditory Feedback Control System on an
Anthropomorphic Flutist Robot Inspired on the Performance of a
Professional Flutist
SO ADVANCED ROBOTICS
LA English
DT Article; Proceedings Paper
CT 17th IEEE/RSJ International Symposium on Robot and Human Interactive
Communication
CY 2008
CL Tech Univ Munchen, Munich, GERMANY
SP IEEE, RSJ
HO Tech Univ Munchen
DE Humanoid; air pressure control; neural networks; feedback error
learning; music
ID MUSIC; RULES; MODEL
AB Up to now, different kinds of musical performance robots (MPRs) have been
developed. MPRs are designed to closely reproduce the required motor skills
displayed by humans in order to play musical instruments. Our research at Waseda
University has been focused on developing an anthropomorphic flutist robot. As a
result of our research, the Waseda Flutist Robot No. 4 Refined IV (WF-4RIV) has
been designed to mechanically reproduce the human organs involved during a flute-
playing performance. Although the WF-4RIV is able to play the flute at a level
close to that of an intermediate player, further improvements in terms of
cognitive capabilities are still required (i.e., autonomously improving the quality
of the sound during a performance based on sound processing). For this purpose,
in this paper, we present the implementation of an Auditory Feedback Control System
(AFCS) designed to enable the flutist robot to analyze the flute sound during a
performance (in a way similar to how professional flutists practice before a
performance). The AFCS is composed of three subsystems (which are detailed in this
paper): WF-4RIV's Control System, Expressive Music Generator and Pitch Evaluation
System. A set of experiments was proposed to verify the effectiveness of the
proposed AFCS to control the air pressure and to detect/correct faulty notes during
a performance. From the experimental results, we confirm the improvements on the
flute sound produced by the robot. (C) Koninklijke Brill NV, Leiden and The
Robotics Society of Japan, 2009
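As an illustrative aside (not part of the record above): a pitch-evaluation step of the kind
described can be approximated with an autocorrelation pitch estimator plus a cent-deviation
check. The window length, frequency range and tolerance below are assumptions, not the
WF-4RIV's actual signal chain.

import numpy as np

def estimate_pitch(frame, fs, fmin=250.0, fmax=2500.0):
    """Autocorrelation-based fundamental-frequency estimate for one audio frame
    (illustrative; the robot's Pitch Evaluation System is not reproduced here)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return fs / lag

def is_faulty_note(measured_hz, target_hz, tol_cents=50.0):
    """Flag a note whose pitch deviates from the score by more than tol_cents."""
    cents = 1200.0 * np.log2(measured_hz / target_hz)
    return abs(cents) > tol_cents

# hypothetical usage: a slightly sharp A5 (883 Hz against the nominal 880 Hz)
fs = 44100
t = np.arange(int(0.05 * fs)) / fs
frame = np.sin(2 * np.pi * 883.0 * t)
f0 = estimate_pitch(frame, fs)
print(f0, is_faulty_note(f0, 880.0))
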
C1 [Solis, Jorge; Takanishi, Atsuo] Waseda Univ, Fac Sci & Engn, Shinjuku Ku, Tokyo
1896555, Japan.
[Solis, Jorge; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Shinjuku Ku,
Tokyo 1896555, Japan.
[Taniguchi, Koichi; Ninomiya, Takeshi; Petersen, Klaus; Yamamoto, Tetsuro]
Waseda Univ, Grad Sch Adv Sci & Engn, Shinjuku Ku, Tokyo 1896555, Japan.
RP Solis, J (reprint author), Waseda Univ, Fac Sci & Engn, Shinjuku Ku, 3-4-1
Ookubo, Tokyo 1896555, Japan.
EM solis@ieee.org
CR Ando Y., 1970, J ACOUST SOC JAPAN, V26, P297
Arcos JL, 2001, APPL INTELL, V14, P115, DOI 10.1023/A:1008311209823
Bishop CM, 2004, NEURAL NETWORKS PATT
BRESIN R, 1995, P 11 CIM C MUS INF, P163
DEVAUCANSON J, 1979, FLUTE LIB, V5, P1
FRIBERG A, 1991, COMPUT MUSIC J, V15, P49, DOI 10.2307/3680916
Friberg A., 1995, THESIS ROYAL I TECHN
GADE S, 1982, BRUEL KJAER TECHNICA, V3
GERHARD David, 2003, PITCH EXTRACTION FUN
GOTO S, 2007, P 16 IEEE INT C ROB, P775
ISHIKAWA O, 2000, P 2000 INT COMP MUS, P348
ISODA S, 2003, P IEEE INT C ROB AUT, P3582
KATAYOSE H, 1993, COMPUT HUMANITIES, V27, P31, DOI 10.1007/BF01830715
Kato I., 1973, P CISM IFTOMM S THEO, P12
KAWATO M, 1987, BIOL CYBERN, V57, P169, DOI 10.1007/BF00364149
KAWATO M, 1992, BIOL CYBERN, V68, P95, DOI 10.1007/BF00201431
KUWABARA H, 2006, P IEEE RSJ INT C INT, P18
PETERSEN K, 2008, P 18 IEEE INT C ROB, P780
Shibuya K., 2007, P 16 IEEE INT C ROB, P763
SINGER E, 2004, P C NEW INT MUS EXPR, P183
Solis J., 2004, P IEEE RAS INT C ROB, P146
SOLIS J, 2007, P INT C COMP MUS, P423
SOLIS J, 2007, P 16 IEEE INT C ROB, P780
SOLIS J, 2006, P 1 IEEE RAS EMBS IN, P1024
SOLIS J, 2008, P 17 CISM IFTOMM S R, P217
Solis J., 2008, P 2 IEEE RAS EMBS IN, P428
SOLIS J, 2007, P IEEE RSJ INT C INT, P2041
SUZUKI T, 1999, P 16 INT JOINT C ART, P642
Takashima S., 2006, P IEEE RSJ INT C INT, P30
Weinberg G., 2007, P 16 IEEE INT C ROB, P769
NR 30
TC 12
Z9 12
U1 0
U2 6
PU VSP BV
PI LEIDEN
PA BRILL ACADEMIC PUBLISHERS, PO BOX 9000, 2300 PA LEIDEN, NETHERLANDS
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2009
VL 23
IS 14
BP 1849
EP 1871
DI 10.1163/016918609X12518783330207
PG 23
WC Robotics
SC Robotics
GA 514SB
UT WOS:000271414800003
DA 2018-01-22
ER

PT J
AU Nishide, S
Ogata, T
Tani, J
Komatani, K
Okuno, HG
AF Nishide, Shun
Ogata, Tetsuya
Tani, Jun
Komatani, Kazunori
Okuno, Hiroshi G.
TI Self-organization of Dynamic Object Features Based on Bidirectional
Training
SO ADVANCED ROBOTICS
LA English
DT Article
DE Bidirectional training; neural networks; affordance; active sensing;
humanoid robots
ID ROBOT; MANIPULATION; EXPERIENCE; IMITATION; VISION
AB This paper presents a method to self-organize object features that describe
object dynamics using bidirectional training. The model is composed of a dynamics
learning module and a feature extraction module. Recurrent Neural Network with
Parametric Bias (RNNPB) is utilized for the dynamics learning module, learning and
self-organizing the sequences of robot and object motions. A hierarchical neural
network is linked to the input of RNNPB as the feature extraction module for self-
organizing object features that describe the object motions. The two modules are
simultaneously trained through bidirectional training using image and motion
sequences acquired from the robot's active sensing with objects. Experiments are
performed with the robot's pushing motion with a variety of objects to generate
sliding, falling over, bouncing and rolling motions. The results have shown that
the model is capable of self-organizing object dynamics based on the self-organized
features. (C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009
C1 [Nishide, Shun; Ogata, Tetsuya; Komatani, Kazunori; Okuno, Hiroshi G.] Kyoto
Univ, Grad Sch Informat, Dept Intelligence Sci & Technol, Kyoto 6068501, Japan.
[Tani, Jun] RIKEN, Brain Sci Inst, Wako, Saitama 3510198, Japan.
RP Nishide, S (reprint author), Kyoto Univ, Grad Sch Informat, Dept Intelligence
Sci & Technol, Yoshida Honmachi, Kyoto 6068501, Japan.
EM nishide@kuis.kyoto-u.ac.jp
OI Ogata, Tetsuya/0000-0001-7015-0379; Okuno, Hiroshi/0000-0002-8704-4318
FU Global COE; Ministry of Education, Science, Sports and Culture;
Scientific Research; Young Scientists; Exploratory Research; JSPS
Fellows; RIKEN; Kayamori Foundation of Informational Science Advancement
FX This research was partially supported by Global COE, the Ministry of
Education, Science, Sports and Culture, Grant-in-Aid for Scientific
Research (S), Grant-in-Aid for Young Scientists (A), Grant-in-Aid for
Exploratory Research, Grant-in-Aid for JSPS Fellows, RIKEN, and Kayamori
Foundation of Informational Science Advancement.
CR BAJCSY R, 1988, P IEEE, V76, P996
BORGA M, 2001, P 9 EUR S ART NEUR N, P309
Buessler JL, 2002, ADV ROBOTICS, V16, P297, DOI 10.1163/156855302760121954
BUESSLER JL, 2003, BIOL INSPIRED ROBOT, P261
Fitzpatrick P, 2003, PHILOS T R SOC A, V361, P2165, DOI 10.1098/rsta.2003.1251
Gibson J. J., 1979, ECOLOGICAL APPROACH
Hermann G, 2005, LECT NOTES ARTIF INT, V3575, P333
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Jordan M., 1986, P 8 ANN C COGN SCI S, P513
Kim D., 2006, P IEEE INT C ROB AUT, P518
LIAO W, 2008, P INT C IM SIGN PROC, P763
Metta G, 2003, ADAPT BEHAV, V11, P109, DOI 10.1177/10597123030112004
Miyamoto H, 1998, NEURAL NETWORKS, V11, P1331, DOI 10.1016/S0893-6080(98)00062-8
Miyamoto H, 1996, NEURAL NETWORKS, V9, P1281, DOI 10.1016/S0893-6080(96)00043-3
Montesano L, 2008, IEEE T ROBOT, V24, P15, DOI 10.1109/TRO.2007.914848
NISHIDE S, 2008, P IEEE INT C ROB AUT, P1608
Nishide S, 2008, ADV ROBOTICS, V22, P527, DOI 10.1163/156855308X294879
PETTIGREW JD, 1974, J PHYSIOL-LONDON, V237, P49, DOI
10.1113/jphysiol.1974.sp010469
RUMELHART DE, 1986, PARALLEL DISTRIBUTED, P318
Shaw R. E., 1974, STUDIES PERCEPTION E, P276
STOYTCHEV A, 2005, P IEEE INT C ROB AUT, P3060
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Ugur E, 2007, IEEE INT CONF ROBOT, P1721, DOI 10.1109/ROBOT.2007.363571
Yokoya R, 2007, ADV ROBOTICS, V21, P1351, DOI 10.1163/156855307781746106
ZENG X, 2002, P IEEE INT C PATT RE, P228
NR 25
TC 3
Z9 3
U1 0
U2 1
PU VSP BV
PI LEIDEN
PA BRILL ACADEMIC PUBLISHERS, PO BOX 9000, 2300 PA LEIDEN, NETHERLANDS
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2009
VL 23
IS 15
BP 2035
EP 2057
DI 10.1163/016918609X12529289797027
PG 23
WC Robotics
SC Robotics
GA 533ES
UT WOS:000272807100004
DA 2018-01-22
ER

PT J
AU Kim, HD
Kim, J
Komatani, K
Ogata, T
Okuno, HG
AF Kim, Hyun-Don
Kim, Jinsung
Komatani, Kazunori
Ogata, Tetsuya
Okuno, Hiroshi G.
TI Target Speech Detection and Separation for Communication with Humanoid
Robots in Noisy Home Environments
SO ADVANCED ROBOTICS
LA English
DT Article
DE Robot audition; sound source localization; sound source separation;
voice activity detection; human-robot interaction
ID REDUCTION; SYSTEM
AB People usually talk face to face when they communicate with their partner.
Therefore, in robot audition, the recognition of the front talker is critical for
smooth interactions. This paper presents an enhanced speech detection method for a
humanoid robot that can separate and recognize speech signals originating from the
front even in noisy home environments. The robot audition system consists of a new
type of voice activity detection (VAD) based on the complex spectrum circle
centroid (CSCC) method and a maximum signal-to-noise ratio (SNR) beamformer. This
VAD based on CSCC can classify speech signals that are retrieved at the frontal
region of two microphones embedded on the robot. The system works in real time
without needing filter coefficients trained in advance, even in a noisy
environment (SNR > 0 dB). It can cope with speech noise generated from televisions
and audio devices that does not originate from the center. Experiments using a
humanoid robot, SIG2, with two microphones showed that our system enhanced
extracted target speech signals more than 12 dB (SNR) and the success rate of
automatic speech recognition for Japanese words was increased by about 17 points.
(C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009
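As an illustrative aside (not part of the record above): the SNR figures quoted in the
abstract are decibel ratios of signal power to noise power, which can be computed from
time-aligned speech-only and noise-only segments as below. The signals and the mock 10x
noise reduction are invented for illustration.

import numpy as np

def snr_db(speech, noise):
    """SNR in decibels from speech-only and noise-only segments (illustrative;
    the paper reports a > 12 dB enhancement of the target speech)."""
    return 10.0 * np.log10(np.mean(speech ** 2) / np.mean(noise ** 2))

# hypothetical usage: a target talker against a TV playing in the background
fs = 16000
t = np.arange(fs) / fs
target = 0.7 * np.sin(2 * np.pi * 220 * t)
tv_noise = 0.5 * np.random.default_rng(1).standard_normal(fs)
print(snr_db(target, tv_noise))         # roughly 0 dB before separation
print(snr_db(target, 0.1 * tv_noise))   # about 20 dB after a (mock) 10x noise reduction
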
C1 [Kim, Hyun-Don; Komatani, Kazunori; Ogata, Tetsuya; Okuno, Hiroshi G.] Kyoto
Univ, Dept Intelligence Sci & Technol, Grad Sch Informat, Sakyo Ku, Kyoto 6068501,
Japan.
[Kim, Jinsung] Korea Inst Sci & Technol, Ctr Cognit Robot Res, Seoul 136650,
South Korea.
RP Kim, HD (reprint author), Kyoto Univ, Dept Intelligence Sci & Technol, Grad Sch
Informat, Sakyo Ku, Yoshida Honmachi, Kyoto 6068501, Japan.
EM hyundon@kuis.kyoto-u.ac.jp
OI Okuno, Hiroshi/0000-0002-8704-4318; Ogata, Tetsuya/0000-0001-7015-0379
FU MEXT; Global COE program of MEXT, Japan
FX This research was partially supported by MEXT, Grant-in-Aid for
Scientific Research, and Global COE program of MEXT, Japan.
CR Araki S., 2007, P IEEE INT C AC SPEE, P41
BAHOURA M, 2004, P 26 INT C IEEE EMBS, P9
Brandstein M., 2001, MICROPHONE ARRAYS
Cichocki A, 1997, ELECTRON LETT, V33, P64, DOI 10.1049/el:19970060
Hara I., 2004, P IEEE RSJ INT C INT, P2404
Hoffman MW, 2001, IEEE T SPEECH AUDI P, V9, P175, DOI 10.1109/89.902284
HUANG J, 1997, P IEEE RSJ INT C INT, P683
*ITUT, 1996, G792 ITUT
KIM HD, 2008, P IEEE INT C ROB AUT, P3495
LEBOUQUINJEANNES R, 1995, SPEECH COMMUN, V16, P245, DOI 10.1016/0167-
6393(94)00056-G
Lu L, 2002, IEEE T SPEECH AUDI P, V10, P504, DOI 10.1109/TSA.2002.804546
Nakadai K., 2001, P INT JOINT C ART IN, P1425
Ohkubo T, 2007, J VLSI SIG PROC SYST, V46, P123, DOI 10.1007/s11265-006-0032-7
TAKEDA R, 2006, P IEEE RSJ INT C INT, P878
Valin J.-M., 2004, P IEEE RSJ INT C INT, P2123
VANTREES HL, 2002, OPTIMUM ARRAYS PROCE
NR 16
TC 1
Z9 1
U1 1
U2 6
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2009
VL 23
IS 15
BP 2093
EP 2111
DI 10.1163/016918609X12529300552105
PG 19
WC Robotics
SC Robotics
GA 533ES
UT WOS:000272807100007
DA 2018-01-22
ER

PT J
AU Motoi, N
Suzuki, T
Ohnishi, K
AF Motoi, Naoki
Suzuki, Tomoyuki
Ohnishi, Kouhei
TI A Bipedal Locomotion Planning Based on Virtual Linear Inverted Pendulum
Mode
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article; Proceedings Paper
CT IEEE International Conference on Industrial Technology
CY DEC 15-17, 2006
CL Bombay, INDIA
SP IEEE, IEEE Ind Elect Soc
DE Biped robot; humanoid robot; legged locomotion; motion control; walking
motion
ID MOTION CONTROL; ELECTRIC VEHICLE; HUMANOID ROBOT; SYSTEM; DESIGN; FORCE;
GAIT
AB In this paper, a bipedal locomotion planning based on virtual linear inverted
pendulum mode (VLIPM) is proposed. In conventional methods, the desired center of
gravity (COG) position and velocity are achieved by modifying the foot placement.
In this research, the desired COG position and velocity are achieved while the
desired foot placement is also realized. In the proposed method, the virtual
modified foot placement and trajectory planning are calculated separately. VLIPM is
applied to the calculation of the virtual modified foot placement. By using a virtual
supporting point (VSP), the difference between the virtual modified and the desired
foot placements is compensated. As a result, the desired foot placement is
achieved as if the foot were placed at the virtual modified foot placement.
Trajectory planning is applied to the LIPM with the VSP and a fifth-order polynomial.
The boundary conditions of the polynomial are set to the desired COG position and
velocity. As a result, the desired COG position and velocity are also obtained.
Differences in the motions produced by the different models are compensated by
matching their boundary conditions. By applying different models in the calculations of the foot
placement and trajectory planning, the desired robot motion is realized. The
walking stability of the proposed method is equivalent to that of the conventional
method. The effectiveness of the proposed method is confirmed by a simulation and
an experiment.
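As an illustrative aside (not part of the record above): the linear inverted pendulum mode
underlying the planner has a standard closed-form solution. The LaTeX relations below are the
textbook LIPM equations, with p the support-point position, z_c the constant COG height and g
gravity; they are given as background, not as the paper's exact VLIPM derivation.

\ddot{x}(t) = \frac{g}{z_c}\,\bigl(x(t) - p\bigr), \qquad T_c = \sqrt{z_c / g}

x(t) = p + \bigl(x_0 - p\bigr)\cosh\!\left(\frac{t}{T_c}\right) + T_c\,\dot{x}_0\,\sinh\!\left(\frac{t}{T_c}\right)

\dot{x}(t) = \frac{x_0 - p}{T_c}\,\sinh\!\left(\frac{t}{T_c}\right) + \dot{x}_0\,\cosh\!\left(\frac{t}{T_c}\right)

A fifth-order trajectory polynomial x_d(t) = \sum_{i=0}^{5} a_i t^i has six coefficients, which
can be fixed by boundary conditions such as the desired COG position and velocity at the
beginning and end of a step.
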
C1 [Motoi, Naoki] Toyota Motor Co, Toyota 4718571, Japan.
[Motoi, Naoki; Suzuki, Tomoyuki; Ohnishi, Kouhei] Keio Univ, Dept Syst Design
Engn, Yokohama, Kanagawa 2238522, Japan.
RP Motoi, N (reprint author), Toyota Motor Co, Toyota 4718571, Japan.
EM motoi@sum.sd.keio.ac.jp; tomoyuki@sum.sd.keio.ac.jp;
ohnishi@sd.keio.ac.jp
RI Ohnishi, Kouhei/A-6543-2011
CR Fu CL, 2008, IEEE T IND ELECTRON, V55, P2111, DOI 10.1109/TIE.2008.921205
Harada K, 2007, IEEE-ASME T MECH, V12, P53, DOI 10.1109/TMECH.2006.886254
Horng RH, 2006, IEEE T IND ELECTRON, V53, P1770, DOI 10.1109/TIE.2006.885119
Kadowaki S, 2007, IEEE T IND ELECTRON, V54, P2001, DOI 10.1109/TIE.2007.895135
KAJITA S, 1992, IEEE T ROBOTIC AUTOM, V8, P431, DOI 10.1109/70.149940
Kajita S., 2001, P IEEE RSJ INT C INT, V1, P239
Katsura S, 2007, IEEE T IND ELECTRON, V54, P1537, DOI 10.1109/TIE.2007.894704
Lin FJ, 2006, IEEE T IND ELECTRON, V53, P1209, DOI 10.1109/TIE.2006.878312
Loffler K, 2004, IEEE T IND ELECTRON, V51, P972, DOI 10.1109/TIE.2004.834948
Minakata H, 2008, IEEE T IND ELECTRON, V55, P1271, DOI 10.1109/TIE.2007.911919
Moreno J, 2006, IEEE T IND ELECTRON, V53, P614, DOI 10.1109/TIE.2006.870880
MORISAWA M, 2000, P 6 INT WORKSH ADV M, P537
Mutoh N, 2006, IEEE T IND ELECTRON, V53, P803, DOI 10.1109/TIE.2006.874271
NAGASAKI T, 2003, J ROBOTICS SOC JAPAN, V21, P902
Ohashi E, 2007, IEEE T IND ELECTRON, V54, P1632, DOI 10.1109/TIE.2007.894728
Ohnishi K, 1996, IEEE-ASME T MECH, V1, P56, DOI 10.1109/3516.491410
Tsuji T., 2002, P 7 INT WORKSH ADV M, P478
Tsuji T, 2007, IEEE T IND ELECTRON, V54, P3335, DOI 10.1109/TIE.2007.906175
NR 18
TC 31
Z9 32
U1 2
U2 11
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855 USA
SN 0278-0046
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD JAN
PY 2009
VL 56
IS 1
BP 54
EP 61
DI 10.1109/TIE.2008.2004663
PG 8
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 392UA
UT WOS:000262326500008
DA 2018-01-22
ER

PT J
AU Ogure, T
Nakabo, Y
Jeong, S
Yamada, Y
AF Ogure, Takuya
Nakabo, Yoshihiro
Jeong, SeongHee
Yamada, Yoji
TI Hazard analysis of an industrial upper-body humanoid
SO INDUSTRIAL ROBOT-AN INTERNATIONAL JOURNAL
LA English
DT Article
DE Robotics; Plant safety; Hazards
ID SAFETY ASSESSMENT; ROBOT SYSTEMS; RISK ANALYSIS
AB Purpose - The purpose of this paper is to clarify the underlying hazards of
human-mimic human-collaborative industrial robots.
Design/methodology/approach - Preliminary hazard analysis is applied to a new
industrial upper-body-humanoid under development. The result of the analysis is
summarized by Fishbone diagram analysis.
Findings - Six hazard categories involving a four-class physical human-robot
interaction hazard classification are derived from the analysis.
Originality/value - The method of analyzing hazards presented here and the
hazard theory derived from the analysis can be used in other developmental
projects.
C1 [Ogure, Takuya; Nakabo, Yoshihiro; Jeong, SeongHee; Yamada, Yoji] Natl Inst Adv
Ind Sci & Technol, Intelligent Syst Res Inst, Tsukuba, Ibaraki, Japan.
RP Ogure, T (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res
Inst, Tsukuba, Ibaraki, Japan.
EM ogure.takuya@aist.go.jp
RI Nakabo, Yoshihiro/N-6147-2016; Ogure, Takuya/N-5304-2016
OI Nakabo, Yoshihiro/0000-0002-6450-2099; Ogure, Takuya/0000-0002-0722-0263
FU NEDO of Japan
FX This study was supported by Project for Strategic Development of
Advanced Robotics Elemental Technologies in 2008 from NEDO of Japan.
CR ALAMI R, 2006, P WORKSH PHYS HUM RO
ALBUSCHAEFFER A, 2005, P 4 IARP IEEE RAS EU
CAVALLARO JR, 1994, P AIAA NASA C INT RO, P282
Chang JI, 2006, J LOSS PREVENT PROC, V19, P51, DOI 10.1016/j.jlp.2005.05.015
Ericson CA, 2005, HAZARD ANALYSIS TECHNIQUES FOR SYSTEM SAFETY, P1, DOI
10.1002/0471739421
Goldberg B., 1994, NASA REFERENCE PUBLI, V1358
HADDADIN S, 2007, P INT S ROB RES ISRR
*INT ORG STAND, 2007, ISO1412112007
*INT ORG STAND, 2006, ISO1021812006
*INT ORG STAND, 2003, ISO1210022003
JIANG BC, 1990, INT J IND ERGONOM, V6, P95
Khodabandehloo K, 1996, RELIAB ENG SYST SAFE, V53, P247, DOI 10.1016/S0951-
8320(96)00052-X
Kochan A, 2006, IND ROBOT, V33, P422, DOI 10.1108/01439910610705572
Korayem MH, 2008, ROBOT CIM-INT MANUF, V24, P472, DOI 10.1016/j.rcim.2007.05.003
Korb W, 2005, MINIM INVASIV THER, V14, P23, DOI 10.1080/13645700510010827
Korb W, 2003, INT CONGR SER, V1256, P766, DOI 10.1016/S0531-5131(03)00402-3
Pervez A, 2008, J MECH SCI TECHNOL, V22, P469, DOI 10.1007/s12206-007-1109-3
RAHIMI M, 1986, J OCCUP ACCID, V8, P127, DOI 10.1016/0376-6349(86)90035-0
SATO Y, 1987, JSME INT J, V30, P350, DOI 10.1299/jsme1987.30.350
Stephans RA, 2004, SYSTEM SAFETY 21ST C
*US DEP DEF, 2000, MILSTD882D DEP DEF H
Walker ID, 1996, RELIAB ENG SYST SAFE, V53, P277, DOI 10.1016/S0951-
8320(96)00055-5
WARD GR, 1995, IND ROBOT, V22, P10, DOI 10.1108/EUM0000000004173
NR 23
TC 4
Z9 4
U1 0
U2 2
PU EMERALD GROUP PUBLISHING LIMITED
PI BINGLEY
PA HOWARD HOUSE, WAGON LANE, BINGLEY BD16 1WA, W YORKSHIRE, ENGLAND
SN 0143-991X
EI 1758-5791
J9 IND ROBOT
JI Ind. Robot
PY 2009
VL 36
IS 5
BP 469
EP 476
DI 10.1108/01439910910980196
PG 8
WC Engineering, Industrial; Robotics
SC Engineering; Robotics
GA 490PW
UT WOS:000269514500008
DA 2018-01-22
ER

PT J
AU Sakaguchi, T
Yokoi, K
Ujiie, T
Tsunoo, S
Wada, K
AF Sakaguchi, T.
Yokoi, K.
Ujiie, T.
Tsunoo, S.
Wada, K.
TI DESIGN OF COMMON ENVIRONMENTAL INFORMATION FOR DOOR-CLOSING TASKS WITH
VARIOUS ROBOTS
SO INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION
LA English
DT Article
DE Ambient intelligence; common environmental information; task
information; door-closing task; environmental structuralization
AB A large amount of information involving human beings is available on the
Internet; however, very little information involving robots is available. We
believe that ambient intelligence, supported by ubiquitous computing technologies,
can provide helpful information for robots. The purpose of our research is to
describe common environmental information that enables a robotic system to execute
target tasks in a variety of ambient conditions. The system can accommodate
various robots and various objects because the task information is designed so that
it does not depend on the types and structures of these robots. In this paper, we
focus on the door-closing task performed by robots, among various other tasks, and
propose a robot control method that integrates the ambient intelligence. The
effectiveness of the proposed system is experimentally confirmed using a
refrigerator with two types of doors, a door status sensor that we developed, a
humanoid robot, and a mobile manipulator.
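As an illustrative aside (not part of the record above): robot-independent "common
environmental information" for a door-closing task could be organized roughly as in the
hypothetical record below. The field names and values are invented for illustration and are
not the paper's actual schema.

# Hypothetical, robot-independent task description for closing a refrigerator door.
# Field names and values are invented for illustration; they are not the paper's schema.
door_closing_task = {
    "object": {
        "id": "refrigerator_door_upper",
        "type": "hinged_door",
        "hinge_axis_in_world": [0.0, 0.0, 1.0],
        "handle_pose_in_world": {"xyz": [1.20, 0.35, 0.95], "rpy": [0.0, 0.0, 1.57]},
        "open_angle_rad": 1.1,
    },
    "task": {
        "name": "close_door",
        "contact_point": "handle_pose_in_world",
        "motion": {"type": "arc_about_hinge", "target_angle_rad": 0.0},
        "max_push_force_N": 20.0,
        "success_check": {"sensor": "door_status_sensor", "expected_state": "closed"},
    },
    # No joint names, link lengths or kinematic structure appear above, so a humanoid
    # and a mobile manipulator can both plan their own motions from the same record.
}

for key, value in door_closing_task["task"].items():
    print(key, value)
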
C1 [Sakaguchi, T.; Yokoi, K.] AIST, JRL, Tsukuba, Ibaraki 3058568, Japan.
[Ujiie, T.; Tsunoo, S.; Wada, K.] Tokyo Metropolitan Univ, Grad Sch Syst Design,
Tokyo 1910065, Japan.
RP Sakaguchi, T (reprint author), AIST, JRL, Cent 2,1-1-1 Umezono, Tsukuba, Ibaraki
3058568, Japan.
EM sakaguchi.t@aist.go.jp; kazuhito.yokoi@aist.go.jp;
takeshi-ujiie@sd.tmu.ac.jp; tsunoo-shinichi@sd.tmu.ac.jp;
k_wada@sd.tmu.ac.jp
RI Yokoi, Kazuhito/K-2046-2012; Sakaguchi, Takeshi/M-4024-2016
OI Yokoi, Kazuhito/0000-0003-3942-2027; Sakaguchi,
Takeshi/0000-0002-2726-7448
CR ALANKUS G, 2005, P IEEE RSJ INT C INT, P1146
Alankus G, 2007, IEEE INT CONF ROBOT, P3645, DOI 10.1109/ROBOT.2007.364037
ASAMA H, 2005, P INT C CONTR AUT SY, P21
Chong N. Y., 2004, P IEEE RSJ INT C INT, P187
Fukuda T., 2005, J ROBOTICS MECHATRON, V17, P717
Fukuda Tsukasa, 2005, P IEEE INT C ROB AUT, P2908
HA YG, 2005, P IEEE RSJ INT C INT, P413
HADA Y, 2005, J ROBOTICS SOC JAPAN, V23, P683
HASEGAWA T, 2007, P 4 INT C UB ROB AMB, P369
HASHIMOTO H, 2001, P 32 INT S ROB SEOUL, P808
HIRATSUKA S, 2004, P IEEE RSJ INT C INT, P559
IRIE K, 2004, P IEEE RSJ INT C INT, P193
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Katsuki R, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P130
KODAKA K, 2007, P 8 SICE INT DIV ANN, P891
Koide Y., 2004, P 2004 IEEE RSJ INT, P2500
Miyauchi T, 2007, 2007 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND
AUTOMATION, VOLS I-V, CONFERENCE PROCEEDINGS, P1411, DOI 10.1109/ICMA.2007.4303756
Moon Tae-Kyung, 2004, P IEEE RSJ INT C INT, P565
Morioka K., 2006, P 2006 IEEE RSJ INT, P2644
Morioka K., 2004, P 2004 IEEE RSJ INT, P199
Nagaraj P, 2000, INFECT MED, V17, P208
NGUYEN DT, 2006, P IEEE RSJ INT C INT, P4798
Ota J, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-4, PROCEEDINGS, P2976, DOI 10.1109/ROBOT.1999.774049
SAKAGUCHI T, 2007, P 4 INT C UB ROB AMB, P383
SASAKI T, 2007, P 8 SICE SYST INT DI, P121
Sato T, 2004, IEEE-ASME T MECH, V9, P529, DOI 10.1109/TMECH.2004.834650
SHENOY S, 2005, P IEEE RSJ INT C INT, P1518
SHIBATA M, 2006, P JSME C ROB MECH TO
SUGAWARA T, 2007, P INT C CONTR AUT SY, P2473
TANIE K, 2005, P 2 INT C UB ROB AMB, P5
NR 30
TC 1
Z9 1
U1 0
U2 3
PU ACTA PRESS
PI CALGARY
PA 2509 DIEPPE AVE SW, BLDG B6, STE 101, CALGARY, AB T3E 7J9, CANADA
SN 0826-8185
EI 1925-7090
J9 INT J ROBOT AUTOM
JI Int. J. Robot. Autom.
PY 2009
VL 24
IS 3
BP 203
EP 213
PG 11
WC Automation & Control Systems; Robotics
SC Automation & Control Systems; Robotics
GA 479FE
UT WOS:000268644300006
DA 2018-01-22
ER

PT J
AU Osawa, H
Ohmura, R
Imai, M
AF Osawa, Hirotaka
Ohmura, Ren
Imai, Michita
TI Using Attachable Humanoid Parts for Realizing Imaginary Intention and
Body Image
SO INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS
LA English
DT Article
DE Human-robot interaction; Human-agent interaction; Human interface;
Anthropomorphization
AB We propose a new approach to human-robot interaction (HRI) in which a common
target object is anthropomorphized using attachable humanoid parts. With this
approach, the user perceives that the target has an intention and senses what the
imaginary body of the target looks like through the attached body parts. We
conducted experiments on how users accepted the intentions and imaginary body image
of a common target object anthropomorphized with the humanoid parts. We also
compared the resulting HRI with
that in the case of the general communication robot Robovie to demonstrate the
possibilities our proposed method offers. The results indicated that an
anthropomorphized target object can interact with users through this approach.
Furthermore, in comparison to interaction with an independent communication robot,
with this approach, users remembered the functions of the anthropomorphized target
to a greater extent and were more intimate with it.
C1 [Osawa, Hirotaka] Keio Univ, Grad Sch Sci & Technol, Yokohama, Kanagawa 223,
Japan.
[Ohmura, Ren; Imai, Michita] Keio Univ, Fac Sci & Technol, Yokohama, Kanagawa
223, Japan.
RP Osawa, H (reprint author), Keio Univ, Grad Sch Sci & Technol, Keio Yagami Campus
25-412,Hiyoshi 3-14-1, Yokohama, Kanagawa 223, Japan.
EM osawa@ayu.ics.keio.ac.jp
FU JSPS Research Fellowships for Young Scientists
FX The first author was supported in part by the JSPS Research Fellowships
for Young Scientists.
CR BREAZEAL C, 1998, P AUT AG, P14
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kobayashi H, 2001, J HUM EVOL, V40, P419, DOI 10.1006/jhev.2001.0468
KOZIMA H, 2007, MOBILE ROBOTS NEW AP, P269
Miyashita Z, 2008, P INT 2008
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Murakawa Y, 2006, TECHNICAL REPORT IEI, V131, P31
Narumi M, 2004, LECT NOTES COMPUTER, DOI [10.1007/b99563, DOI 10.1007/B99563]
Nishida Y, 2003, P IEEE RSJ INT C INT, V1, P785
Norman D. A, 1998, DESIGN EVERYDAY THIN
Shibata T, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P2868, DOI 10.1109/ROBOT.1999.774032
Shinozawa K, 2005, INT J HUM-COMPUT ST, V62, P267, DOI
10.1016/j.ijhcs.2004.11.003
Sugiyama O, 2006, P INT C INT ROB SYST, P5843
Tanaka F, 2007, P NATL ACAD SCI USA, V104, P17954, DOI 10.1073/pnas.0707769104
Wada K, 2006, PROCEEDINGS OF THE 3RD INTERNATIONAL SYMPOSIUM ON AUTONOMOUS
MINIROBOTS FOR RESEARCH AND EDUTAINMENT (AMIRE 2005), P325, DOI 10.1007/3-540-
29344-2_48
NR 15
TC 6
Z9 6
U1 0
U2 1
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 1875-4791
EI 1875-4805
J9 INT J SOC ROBOT
JI Int. J. Soc. Robot.
PD JAN
PY 2009
VL 1
IS 1
BP 109
EP 123
DI 10.1007/s12369-008-0004-0
PG 15
WC Robotics
SC Robotics
GA V31OM
UT WOS:000208892900010
DA 2018-01-22
ER

PT J
AU Kawasaki, H
Furukawa, T
Ueki, S
Mouri, T
AF Kawasaki, Haruhisa
Furukawa, Tomohiro
Ueki, Satoshi
Mouri, Tetsuya
TI Virtual Robot Teaching Based on Motion Analysis and Hand Manipulability
for Multi-Fingered Robot
SO JOURNAL OF ADVANCED MECHANICAL DESIGN SYSTEMS AND MANUFACTURING
LA English
DT Article
DE Robot; Teaching; Virtual Reality; Robot Hand
ID INSTRUCTION; PERCEPTION; TASKS
AB A virtual robot teaching system, consisting of human demonstration and motion-
intention analysis in a virtual reality environment, is an advanced form of
automatic programming for multi-fingered robots. We propose a new segmentation
method to analyze human motion data and a virtual teaching system based on hand
manipulability, in which the position and orientation of the robot hand are
determined so as to maximize the robot hand's manipulability. In the segmentation
method, human motion data consisting of contact points, grasp force, hand and
object position, and the like are segmented into plural primitive motions and the
type of task is analyzed based on the sequence of the primitive motions. A trial
assembly task using a humanoid robot hand named Gifu Hand III is shown to
demonstrate the effectiveness of the proposed method.
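As an illustrative aside (not part of the record above): the hand-manipulability measure
usually maximized in such teaching systems is Yoshikawa's w(q) = sqrt(det(J(q) J(q)^T)). The
two-link planar toy below (link lengths are assumptions) only shows how candidate hand
configurations can be ranked by this measure; it is not the Gifu Hand III model.

import numpy as np

def jacobian_2link(q, l1=0.30, l2=0.25):
    """Planar 2-link position Jacobian (toy stand-in for a multi-fingered hand)."""
    q1, q2 = q
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T))."""
    return np.sqrt(np.linalg.det(J @ J.T))

# pick, among sampled candidate configurations, the one with the largest manipulability
candidates = [np.array([0.2, q2]) for q2 in np.linspace(0.1, np.pi - 0.1, 50)]
best = max(candidates, key=lambda q: manipulability(jacobian_2link(q)))
print(best, manipulability(jacobian_2link(best)))
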
C1 [Kawasaki, Haruhisa; Furukawa, Tomohiro; Mouri, Tetsuya] Gifu Univ, Dept Human
Informat Engn, Gifu, Japan.
[Ueki, Satoshi] Gifu Univ, Virtual Syst Lab, Gifu, Japan.
RP Kawasaki, H (reprint author), Gifu Univ, Dept Human Informat Engn, 1-1 Yanagito,
Gifu, Japan.
CR Asada H., 1988, Proceedings of the 1988 IEEE International Conference on
Robotics and Automation (Cat. No.88CH2555-1), P1269, DOI 10.1109/ROBOT.1988.12235
CUTKOSKY MR, 1989, IEEE T ROBOTIC AUTOM, V5, P269, DOI 10.1109/70.34763
IKEUCHI K, 1994, IEEE T ROBOTIC AUTOM, V10, P368, DOI 10.1109/70.294211
Kaiser M, 1996, IEEE INT CONF ROBOT, P2700, DOI 10.1109/ROBOT.1996.506570
KANG SB, 1995, IEEE T ROBOTIC AUTOM, V11, P670, DOI 10.1109/70.466599
Kang SB, 1997, IEEE T ROBOTIC AUTOM, V13, P81, DOI 10.1109/70.554349
Kawasaki H, 2000, VSMM 2000: 6TH INTERNATIONAL CONFERENCE ON VIRTUAL SYSTEMS AND
MULTIMEDIA, P456
Kawasaki H., 2000, TVRSJ, V5, P899
KAWASAKI H, 2000, P IECON 2000, P427
KUNIYOSHI T, 1989, P 20 INT S IND ROB, P119
Mouri T., 2002, P INT C CONTR AUT SY, P1288
Mouri T., 2005, P 2005 IEEE RSJ INT, P3474
Rohling R. N., 1993, Proceedings IEEE International Conference on Robotics and
Automation (Cat. No.93CH3247-4), P769, DOI 10.1109/ROBOT.1993.292238
SATO T, 1995, T JRSJ, V13, P545
TAKAHASHI T, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P1083, DOI 10.1109/ROBOT.1992.220204
Tsuda M, 1998, IEEE INT CONF ROBOT, P530, DOI 10.1109/ROBOT.1998.677028
*VIRT TECHN INC, 1997, CYBERGLOVE US MAN
VOYLES RM, 1999, IEEE INTELLIGENT NOV, P22
YOSHIKAWA T, 1999, J TVRSJ, V4, P313
NR 19
TC 5
Z9 5
U1 0
U2 6
PU JAPAN SOC MECHANICAL ENGINEERS
PI TOKYO
PA SHINANOMACHI-RENGAKAN BLDG., SHINANOMACHI 35, SHINJUKU-KU, TOKYO,
160-0016, JAPAN
SN 1881-3054
J9 J ADV MECH DES SYST
JI J. Adv. Mech. Des. Syst. Manuf.
PY 2009
VL 3
IS 1
BP 1
EP 12
DI 10.1299/jamdsm.3.1
PG 12
WC Engineering, Manufacturing; Engineering, Mechanical
SC Engineering
GA 505CU
UT WOS:000270666900001
DA 2018-01-22
ER

PT J
AU Hyon, SH
Osu, R
Otaka, Y
Morimoto, J
AF Hyon, Sang-Ho
Osu, Rieko
Otaka, Yohei
Morimoto, Jun
TI Push-recovery strategies implemented on a compliant humanoid robot
SO NEUROSCIENCE RESEARCH
LA English
DT Meeting Abstract
C1 [Hyon, Sang-Ho] NICT, Kyoto, Japan.
[Hyon, Sang-Ho; Osu, Rieko; Morimoto, Jun] ATR, Kyoto, Japan.
[Hyon, Sang-Ho; Morimoto, Jun] ICORP, JST, Tokyo, Japan.
[Otaka, Yohei] Tokyo Bay Rehabil Hosp, Tokyo, Japan.
NR 0
TC 0
Z9 0
U1 0
U2 1
PU ELSEVIER IRELAND LTD
PI CLARE
PA ELSEVIER HOUSE, BROOKVALE PLAZA, EAST PARK SHANNON, CO, CLARE, 00000,
IRELAND
SN 0168-0102
J9 NEUROSCI RES
JI Neurosci. Res.
PY 2009
VL 65
SU 1
BP S183
EP S183
DI 10.1016/j.neures.2009.09.980
PG 1
WC Neurosciences
SC Neurosciences & Neurology
GA 528DK
UT WOS:000272421101344
DA 2018-01-22
ER

PT J
AU Takumi, T
AF Takumi, Toru
TI A humanoid mouse model for autism by a chromosome engineering
SO NEUROSCIENCE RESEARCH
LA English
DT Meeting Abstract
C1 [Takumi, Toru] Hiroshima Univ, Grad Sch Biomed Sci, Hiroshima, Japan.
NR 0
TC 1
Z9 1
U1 0
U2 0
PU ELSEVIER IRELAND LTD
PI CLARE
PA ELSEVIER HOUSE, BROOKVALE PLAZA, EAST PARK SHANNON, CO, CLARE, 00000,
IRELAND
SN 0168-0102
J9 NEUROSCI RES
JI Neurosci. Res.
PY 2009
VL 65
SU 1
BP S27
EP S27
DI 10.1016/j.neures.2009.09.1643
PG 1
WC Neurosciences
SC Neurosciences & Neurology
GA 528DK
UT WOS:000272421100176
DA 2018-01-22
ER

PT J
AU Shiomi, M
Kanda, T
Koizumi, S
Ishiguro, H
Hagita, N
AF Shiomi, Masahiro
Kanda, Takayuki
Koizumi, Satoshi
Ishiguro, Hiroshi
Hagita, Norihiro
TI GROUP ATTENTION CONTROL FOR COMMUNICATION ROBOTS
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Communication robot; group attention control; field trial; human-robot
interaction
ID INTERACTIVE ROBOTS
AB A humanoid robot can support people in a real environment by interacting with
them through human-like body movements, such as shaking hands, greeting, and
pointing. In real environments, a robot often interacts with groups of people to
provide services, but one-to-many interaction is quite different from one-to-one
interaction. For example, a robot cannot satisfy different demands of people
simultaneously when it interacts with many people. To solve such problems of
interaction, we focused on the purpose of attention, which explains why people
direct their attention in a particular direction. This paper describes a group attention
control (GAC) system that enables a communication robot to simultaneously interact
with many people. The system is designed to coordinate people's purpose of
attention through behavior that is based on two design policies: controlling
cooperative situations and indicating explicit control. We implemented a semi-
autonomous GAC system in a communication robot that guides visitors to exhibits in
a science museum and engages in free-play interactions with them. We developed the
semi-autonomous system to concentrate on the behavior-generating mechanism in the
GAC system, and to avoid the difficulties of sensing in real environments. We
investigate the effectiveness of the GAC system through a two-week field trial in
the museum. Experimental results revealed that visitors evaluated the robot with
the GAC system more highly than without it. We believe these results will allow us
to develop interactive humanoid robots that can interact effectively with groups of
people.
C1 [Shiomi, Masahiro; Kanda, Takayuki; Koizumi, Satoshi; Ishiguro, Hiroshi; Hagita,
Norihiro] ATR Intelligent Robot & Commun Labs, Kyoto, Japan.
[Koizumi, Satoshi; Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn, Osaka, Japan.
RP Shiomi, M (reprint author), ATR Intelligent Robot & Commun Labs, 2-2-2 Keihanna
Sci City, Kyoto, Japan.
EM m-shiomi@atr.jp; kanda@atr.jp; satoshi@ams.eng.osaka-u.ac.jp;
ishiguro@ams.eng.osaka-u.ac.jp; hagita@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825; SHIOMI,
Masahiro/0000-0003-4338-801X
FU Ministry of Internal Affairs and Communications of Japan
FX We wish to thank the staff at the Osaka Science Museum for their kind
cooperation. This research was supported by the Ministry of Internal
Affairs and Communications of Japan.
CR ASOH H, 1997, INT JOINT C ART INT
Breazeal C., 1999, IEEE RSJ INT C INT R
Burgard W., 1998, P NAT C ART INT, P11
DAHLBACK N, 1993, KNOWL-BASED SYST, V6, P258, DOI 10.1016/0950-7051(93)90017-N
DOW S, 2005, PERVASIVE COMPUT, V4
Fujita M, 2001, INT J ROBOT RES, V20, P781, DOI 10.1177/02783640122068092
GREEN A, 2004, 13 IEEE INT WORKSH R
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Kanda T, 2004, HUM-COMPUT INTERACT, V19, P61, DOI 10.1207/s15327051hci1901&2_4
KANDA T, 2007, AUTONOM ROBOTS
KOOIJMANS T, 2006, 1 ANN C HUM ROB INT
MORELAND RL, 1982, ADV EXP SOC PSYCHOL, V15, P137, DOI 10.1016/S0065-
2601(08)60297-X
NABE S, 2006, 1 ANN C HUM ROB INT
Nourbakhsh IR, 1999, ARTIF INTELL, V114, P95, DOI 10.1016/S0004-3702(99)00027-2
Shibata T, 2004, P IEEE, V92, P1749, DOI 10.1109/JPROC.2004.835383
SHIOMI M, 2006, ACM 1 ANN C HUM ROB, P305
Siegwart R, 2003, ROBOT AUTON SYST, V42, P203, DOI 10.1016/S0921-8890(02)00376-7
TASAKI T, 2005, P 18 INT C IND ENG A, P111
WOODS S, 2006, AMC 06 9 INT WORKSH
NR 19
TC 3
Z9 3
U1 0
U2 1
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD DEC
PY 2008
VL 5
IS 4
BP 587
EP 608
DI 10.1142/S021984360800156X
PG 22
WC Robotics
SC Robotics
GA 397FJ
UT WOS:000262648000002
DA 2018-01-22
ER

PT J
AU Namikawa, J
Tani, J
AF Namikawa, Jun
Tani, Jun
TI A model for learning to segment temporal sequences, utilizing a mixture
of RNN experts together with adaptive variance
SO NEURAL NETWORKS
LA English
DT Article
DE Recurrent neural network; Mixture of experts; Maximum likelihood
estimation; Self-organization; Segmentation of temporal sequences
ID FUNCTIONAL DYNAMICS; NETWORKS; SYSTEMS
AB This paper proposes a novel learning method for a mixture of recurrent neural
network (RNN) experts model, which can acquire the ability to generate desired
sequences by dynamically switching between experts. Our method is based on maximum
likelihood estimation, using a gradient descent algorithm. This approach is similar
to that used in conventional methods; however, we modify the likelihood function by
adding a mechanism to alter the variance for each expert. The proposed method is
demonstrated to successfully learn Markov chain switching among a set of 9
Lissajous curves, for which the conventional method fails. The learning
performance, analyzed in terms of the generalization capability, of the proposed
method is also shown to be superior to that of the conventional method. With the
addition of a gating network, the proposed method is successfully applied to the
learning of sensory-motor flows for a small humanoid robot as a realistic problem
of time series prediction and generation. (C) 2008 Elsevier Ltd. All rights
reserved.
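As an illustrative aside (not part of the record above): in generic mixture-of-experts notation
(not necessarily the paper's exact formulation), the adaptive-variance likelihood can be
written with a learned output variance \sigma_i^2 for each expert i:

p(x_{t+1} \mid x_{1:t}) = \sum_{i=1}^{M} g_{i,t}\,\mathcal{N}\!\bigl(x_{t+1};\,\hat{x}_{i,t+1},\,\sigma_i^{2} I\bigr)

\mathcal{L} = \sum_{t} \log p(x_{t+1} \mid x_{1:t})

Here \hat{x}_{i,t+1} is expert i's prediction, the gate activations g_{i,t} sum to one, and
gradient ascent on \mathcal{L} updates both the RNN weights and each \sigma_i, so an expert that
fits the current regime well can sharpen its variance and take over that segment, which is the
switching behavior described in the abstract.
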
C1 [Namikawa, Jun; Tani, Jun] RIKEN, Brain Sci Inst, Wako, Saitama 3510198, Japan.
RP Namikawa, J (reprint author), RIKEN, Brain Sci Inst, 2-1 Hirosawa, Wako, Saitama
3510198, Japan.
EM jnamika@bdc.brain.riken.go.jp; tani@brain.riken.go.jp
CR BENGIO Y, 1994, IEEE T NEURAL NETWOR, V5, P157, DOI 10.1109/72.279181
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Evans G., 1981, WITTGENSTEIN FOLLOW, P118
Hammer B, 2003, NEURAL COMPUT, V15, P1897, DOI 10.1162/08997660360675080
Hochreiter S., 1997, NEURAL COMPUT, V9, P1735
IGARI I, 2008, NEUROCOMPUT IN PRESS
Ikeda K., 1989, PROG THEOR PHYS SUPP, V99, P295
Jacobs RA, 1991, NEURAL COMPUT, V3, P79, DOI 10.1162/neco.1991.3.1.79
Jaeger H, 2004, SCIENCE, V304, P78, DOI 10.1126/science.1091277
Jaeger H., 2001, GMD REPORT, V152, P1
JORDAN MI, 1994, NEURAL COMPUT, V6, P181, DOI 10.1162/neco.1994.6.2.181
Jordan M. I., 1988, ATTENTION PERFORMANC, VXIII
Jordan M. I., 1986, P 8 ANN C COGN SCI S, P531
Kaneko K, 2003, CHAOS, V13, P926, DOI 10.1063/1.1607783
Kataoka N, 2000, PHYSICA D, V138, P225, DOI 10.1016/S0167-2789(99)00230-4
Kataoka N, 2003, PHYSICA D, V181, P235, DOI 10.1016/S0167-2789(03)00100-3
Kataoka N, 2001, PHYSICA D, V149, P174, DOI 10.1016/S0167-2789(00)00203-7
Maass W., 2002, FRESH LOOK REAL TIME
Namikawa J, 2004, NONLINEARITY, V17, P1317, DOI 10.1088/0951-7715/17/4/009
NAMIKAWA J, NEURAL NETWORK UNPUB
POLLACK JB, 1991, MACH LEARN, V7, P227, DOI 10.1023/A:1022651113306
Rumelhart D.E., 1986, PARALLEL DISTRIBUTED, V1, P318
SATO Y, 2000, J UNIVERS COMPUT SCI, V6, P881
Schmidhuber J, 2002, NEURAL COMPUT, V14, P2039, DOI 10.1162/089976602320263980
Tani J, 1999, NEURAL NETWORKS, V12, P1131, DOI 10.1016/S0893-6080(99)00060-X
Tani J, 2008, IEEE T SYST MAN CY B, V38, P43, DOI 10.1109/TSMCB.2007.907738
Williams RJ, 1989, NEURAL COMPUT, V1, P270, DOI 10.1162/neco.1989.1.2.270
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
NR 28
TC 12
Z9 13
U1 0
U2 1
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0893-6080
J9 NEURAL NETWORKS
JI Neural Netw.
PD DEC
PY 2008
VL 21
IS 10
BP 1466
EP 1475
DI 10.1016/j.neunet.2008.09.005
PG 10
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA 387BG
UT WOS:000261926100008
PM 18938059
DA 2018-01-22
ER

PT J
AU Hauser, K
Bretl, T
Latombe, JC
Harada, K
Wilcox, B
AF Hauser, Kris
Bretl, Timothy
Latombe, Jean-Claude
Harada, Kensuke
Wilcox, Brian
TI Motion Planning for Legged Robots on Varied Terrain
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article; Proceedings Paper
CT 7th International Workshop on Algorithmic Foundations of Robotics
CY JUL 16-18, 2006
CL New York, NY
DE Motion planning; legged robots; humanoids; probabilistic sample-based
planning; motion primitives
ID INVERSE KINEMATICS; CONTACT; OPTIMIZATION; STYLE
AB In this paper we study the quasi-static motion of large legged robots that have
many degrees of freedom. While gaited walking may suffice on easy ground, rough and
steep terrain requires unique sequences of footsteps and postural adjustments
specifically adapted to the terrain's local geometric and physical properties. In
this paper we present a planner that computes these motions by combining graph
searching to generate a sequence of candidate footfalls with probabilistic sample-
based planning to generate continuous motions that reach these footfalls. To
improve motion quality, the probabilistic planner derives its sampling strategy
from a small set of motion primitives that have been generated offline. The
viability of this approach is demonstrated in simulation for the six-legged Lunar
vehicle ATHLETE and the humanoid HRP-2 on several example terrains, including one
that requires both hand and foot contacts and another that requires rappelling.
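As an illustrative aside (not part of the record above): the continuous-motion stage of such
planners is built on probabilistic sample-based search. The fragment below is only a generic
RRT-style skeleton in a 2-D configuration space with a placeholder feasibility test; the paper's
planner additionally biases sampling with offline motion primitives and plans in much higher
dimension.

import numpy as np

def rrt(start, goal, feasible, n_iters=2000, step=0.05, goal_tol=0.05, seed=0):
    """Generic RRT skeleton: sample, extend toward the sample, keep feasible nodes.
    Sampling is uniform here for brevity; the cited planner biases it with primitives."""
    rng = np.random.default_rng(seed)
    nodes = [np.asarray(start, dtype=float)]
    parents = [-1]
    for _ in range(n_iters):
        target = goal if rng.random() < 0.1 else rng.uniform(0.0, 1.0, size=2)
        i_near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - target))
        direction = target - nodes[i_near]
        dist = np.linalg.norm(direction)
        if dist < 1e-9:
            continue
        new = nodes[i_near] + step * direction / dist
        if not feasible(new):
            continue
        nodes.append(new)
        parents.append(i_near)
        if np.linalg.norm(new - goal) < goal_tol:
            path, i = [], len(nodes) - 1
            while i != -1:
                path.append(nodes[i]); i = parents[i]
            return path[::-1]
    return None

# placeholder feasibility test: stay inside the unit square, outside a disc obstacle
feasible = lambda q: (0.0 <= q[0] <= 1.0 and 0.0 <= q[1] <= 1.0
                      and np.linalg.norm(q - np.array([0.5, 0.5])) > 0.2)
path = rrt(np.array([0.1, 0.1]), np.array([0.9, 0.9]), feasible)
print(len(path) if path else "no path found")
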
C1 [Hauser, Kris; Latombe, Jean-Claude] Stanford Univ, Dept Comp Sci, Stanford, CA
94305 USA.
[Bretl, Timothy] Univ Illinois, Urbana, IL 61801 USA.
[Harada, Kensuke] Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst,
Humanoid Res Grp, Tsukuba, Ibaraki 3058568, Japan.
[Wilcox, Brian] CALTECH, Jet Prop Lab, Pasadena, CA 91109 USA.
RP Hauser, K (reprint author), Stanford Univ, Dept Comp Sci, Stanford, CA 94305
USA.
EM khauser@cs.stanford.edu; tbretl@illinois.edu; latombe@cs.stanford.edu;
kensuke.harada@aist.go.jp; Brian.H.Wilcox@jpl.nasa.gov
CR AKINC M, 2003, P INT S ROB RES SIEN
ALAMI R, 1995, ALGORITHMIC FOUNDATIONS OF ROBOTICS, P109
ARUN KS, 1987, IEEE T PATTERN ANAL, V9, P699, DOI 10.1109/TPAMI.1987.4767965
Bares JE, 1999, INT J ROBOT RES, V18, P621, DOI 10.1177/02783649922066475
Bevly D. M., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P4009, DOI 10.1109/ROBOT.2000.845356
Bicchi A., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P348, DOI 10.1109/ROBOT.2000.844081
Bobrow JE, 2001, J ROBOTIC SYST, V18, P785, DOI 10.1002/rob.8116
Boissonnat JD, 2000, SIAM J COMPUT, V30, P218, DOI 10.1137/S0097539797326289
Boyd S., 2004, CONVEX OPTIMIZATION
Bretl T, 2006, INT J ROBOT RES, V25, P317, DOI 10.1177/0278364906063979
BRETL T, 2008, IEEE T ROBO IN PRESS
BRETL T, 2006, P IEEE INT C ROB AUT
BRETL T, P INT S ROB RES SIEN
Burridge RR, 1999, INT J ROBOT RES, V18, P534, DOI 10.1177/02783649922066385
Byrd RH, 2002, 20024 OTC NW U
Chestnutt J., 2003, P IEEE INT C HUM ROB
Choset H. M., 2005, PRINCIPLES ROBOT MOT
CORTES J, 2002, P IEEE INT C ROB AUT
ELDERSHAW C, P IEEE INT C ROB AUT
ESTIER T, 2000, P SPAC ROB ALB NM
FRAZZOLI E, 2002, IEEE T ROBOT, V25, P116
Frazzoli E., 2002, AIAA J GUIDANCE CONT, V25, P116
Gavrilets V, 2001, INT J ROBOT RES, V20, P795, DOI 10.1177/02783640122068100
GERAERTS R, 2004, P IEEE INT C ROB AUT
Gleicher M., 1998, P 25 ANN C COMP GRAP, V98, P33
Gottschalk S., 1996, Computer Graphics Proceedings. SIGGRAPH '96, P171
Greenfield A, 2005, INT J ROBOT RES, V24, P911, DOI 10.1177/0278364905059056
Grochow K, 2004, ACM T GRAPHIC, V23, P522, DOI 10.1145/1015706.1015755
HARADA K, 2004, P INT S EXP ROB SING
HARADA K, 2006, P IEEE RSJ INT C INT
HAUSER K, 2005, P HUM TSUK JAP
Heiken G. H., 1991, LUNAR SOURCEBOOK USE
HIROSE S, INT J ROBOTICS RES, V10, P3
HIROSE S, P IEEE INT C ROB AUT, P494
HSU D, 2005, P INT S ROB RES SAN
Iagnemma K., 1999, P IEEE INT C ROB AUT
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
KOGA Y, 1994, IEEE INT CONF ROBOT, P945, DOI 10.1109/ROBOT.1994.351231
Kovar L, 2002, P 29 ANN C COMP GRAP, P473
KRON T, 2005, P EUR ACM SIGGRAPH S, P29
Krotkov E, 1996, INT J ROBOT RES, V15, P155, DOI 10.1177/027836499601500204
KUFFNER J, 2003, P INT S ROB RES SIEN
KUFFNER JJ, 1999, THESIS STANFORD U
LAUMOND JP, 1994, IEEE T ROBOTIC AUTOM, V10, P577, DOI 10.1109/70.326564
Laumond J.P, 1987, P INT JOINT C ART IN, P1120
LAURIA M, 2002, P CLAWAR
Lawrence Craig, 1997, TR9416R1 U MAR I SYS
LEE H, 2006, P IEEE INT C ROB AUT
LIAO L, 2005, ADV NEURAL INFORM PR
Liu CK, 2005, ACM T GRAPHIC, V24, P1071, DOI 10.1145/1073204.1073314
Meredith M., 2005, COMPUT ENTERTAIN, V3, P5
Miller T. G., 2006, P IFAC S MECH SYST H
MISSIURO P, 2006, P IEEE INT C ROB AUT
Mumm E, 2004, AUTON ROBOT, V16, P259, DOI 10.1023/B:AURO.0000025790.37415.83
NG A, 2004, NEURAL INFORM PROCES, V16
Nielsen CL, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P1716, DOI 10.1109/IROS.2000.895219
Okamura A. M., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P255, DOI 10.1109/ROBOT.2000.844067
Pai D. K., 1995, Autonomous Robots, V2, P191, DOI 10.1007/BF00710856
Pang JS, 2000, Z ANGEW MATH MECH, V80, P643, DOI 10.1002/1521-
4001(200010)80:10<643::AID-ZAMM643>3.0.CO;2-E
PETTRE J, 2003, P EUR SIGGRAPH S COM
Popovic MB, 2005, INT J ROBOT RES, V24, P1013, DOI 10.1177/0278364905058363
Popovit Z., 1999, P 26 ANN C COMP GRAP, P11
Ren L, 2005, ACM T GRAPHIC, V24, P1090, DOI 10.1145/1073204.1073316
Rimon E, 2006, IEEE T ROBOT, V22, P240, DOI 10.1109/TRO.2005.862478
Sahbani A, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P1560, DOI 10.1109/IRDS.2002.1043977
Sanchez G, 2002, INT J ROBOT RES, V21, P5, DOI 10.1177/027836402320556458
SCHWARZER F, 2002, P WAFR NIC FRANC
SENTIS L, INT J HUMANOID ROBOT, V2, P505
SHAPIRO A, 2003, P IEEE INT C ROB AUT, P2966
Shin HJ, 2001, ACM T GRAPHIC, V20, P67, DOI 10.1145/502122.502123
SONG G, 2001, P IEEE INT C ROB AUT, P1500
Song SM, 1989, MACHINES WALK ADAPTI
STILMAN M, P WAFR NEW YORK NY
VOUGIOUKAS SG, 2005, P IEEE INT C ROB AUT
WANG LCT, 1991, IEEE T ROBOTIC AUTOM, V7, P489, DOI 10.1109/70.86079
Wettergreen D., 1993, Robotics and Autonomous Systems, V11, P171, DOI
10.1016/0921-8890(93)90022-5
Wilcox BH, 2007, J FIELD ROBOT, V24, P421, DOI 10.1002/rob.20193
Witkin A., 1995, P 22 ANN C COMP GRAP, P105, DOI DOI 10.1145/218380.218422
Yakey JH, 2001, IEEE T ROBOTIC AUTOM, V17, P951, DOI 10.1109/70.976030
Yamane K, 2004, ACM T GRAPHIC, V23, P532, DOI 10.1145/1015706.1015756
Yoneda K., 1999, P IEEE RSJ INT C INT, P1897
ZHENG YF, 1990, IEEE T ROBOTIC AUTOM, V6, P86, DOI 10.1109/70.88120
NR 83
TC 61
Z9 61
U1 3
U2 15
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD NOV
PY 2008
VL 27
IS 11-12
BP 1325
EP 1349
DI 10.1177/0278364908098447
PG 25
WC Robotics
SC Robotics
GA 375EA
UT WOS:000261095700011
DA 2018-01-22
ER

PT J
AU Yamashita, Y
Tani, J
AF Yamashita, Yuichi
Tani, Jun
TI Emergence of Functional Hierarchy in a Multiple Timescale Neural Network
Model: A Humanoid Robot Experiment
SO PLOS COMPUTATIONAL BIOLOGY
LA English
DT Article
ID TIME-SCALES; SELF-ORGANIZATION; PREMOTOR CORTEX; DYNAMICAL-SYSTEMS;
INSPIRED ROBOTICS; CEREBRAL-CORTEX; MOTOR; MONKEY; AREA; REINFORCEMENT
AB It is generally thought that skilled behavior in human beings results from a
functional hierarchy of the motor control system, within which reusable motor
primitives are flexibly integrated into various sensori-motor sequence patterns.
The underlying neural mechanisms governing the way in which continuous sensori-
motor flows are segmented into primitives and the way in which series of primitives
are integrated into various behavior sequences have, however, not yet been
clarified. In earlier studies, this functional hierarchy has been realized through
the use of explicit hierarchical structure, with local modules representing motor
primitives in the lower level and a higher module representing sequences of
primitives switched via additional mechanisms such as gate-selecting. When
sequences contain similarities and overlap, however, a conflict arises in such
earlier models between generalization and segmentation, induced by this separated
modular structure. To address this issue, we propose a different type of neural
network model. The current model neither makes use of separate local modules to
represent primitives nor introduces explicit hierarchical structure. Rather than
forcing architectural hierarchy onto the system, functional hierarchy emerges
through a form of self-organization that is based on two distinct types of neurons,
each with different time properties ("multiple timescales"). Through the
introduction of multiple timescales, continuous sequences of behavior are segmented
into reusable primitives, and the primitives, in turn, are flexibly integrated into
novel sequences. In experiments, the proposed network model, coordinating the
physical body of a humanoid robot through high-dimensional sensori-motor control,
also successfully situated itself within a physical environment. Our results
suggest that it is not only the spatial connections between neurons but also the
timescales of neural activity that act as important mechanisms leading to
functional hierarchy in neural systems.
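A toy sketch, assuming leaky-integrator units with arbitrary sizes, time constants, and untrained random weights (not the authors' trained network), of how one recurrent network can mix "fast" and "slow" context units through different time constants:

"""Toy continuous-time RNN with two groups of units having different time constants."""
import numpy as np

rng = np.random.default_rng(0)
n_fast, n_slow = 8, 4                 # hypothetical group sizes
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 20.0)])  # fast vs slow units
n = n_fast + n_slow
W = rng.normal(scale=0.5, size=(n, n))  # random recurrent weights (untrained)

u = np.zeros(n)                        # internal states (membrane potentials)
for t in range(100):
    y = np.tanh(u)                     # firing rates
    # leaky integration: slow units (large tau) change more gradually than fast units
    u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y)

print("fast-unit activity:", np.round(np.tanh(u[:n_fast]), 3))
print("slow-unit activity:", np.round(np.tanh(u[n_fast:]), 3))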
C1 [Yamashita, Yuichi; Tani, Jun] RIKEN, Brain Sci Inst, Lab Behav & Dynam Cognit,
Wako, Saitama, Japan.
RP Yamashita, Y (reprint author), RIKEN, Brain Sci Inst, Lab Behav & Dynam Cognit,
Wako, Saitama, Japan.
EM yamay@brain.riken.jp
FU Japanese Ministry of Education, Culture, Sports, Science and Technology
FX This study was conducted through a collaboration with Sony Corporation.
The study was partially supported by a Grant-in-Aid for Scientific
Research on Priority Areas "Emergence of Adaptive Motor Function through
Interaction between Body, Brain and Environment" from the Japanese
Ministry of Education, Culture, Sports, Science and Technology.
CR Arbib M. A., 1998, NEURAL ORG STRUCTURE
ASANUMA H, 1972, EXP BRAIN RES, V14, P243, DOI 10.1007/BF00816161
Boemio A, 2005, NAT NEUROSCI, V8, P389, DOI 10.1038/nn1409
Cisek P, 2005, NEURON, V45, P801, DOI 10.1016/j.neuron.2005.01.027
DOYA K, 1989, NEURAL NETWORKS, V2, P375, DOI 10.1016/0893-6080(89)90022-1
DOYA K, 2001, NEUROSCI RES S, V24, pS16
Dum RP, 2002, PHYSIOL BEHAV, V77, P677, DOI 10.1016/S0031-9384(02)00929-0
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Felleman DJ, 1991, CEREB CORTEX, V1, P1, DOI 10.1093/cercor/1.1.1
Fetz EE, 2002, HDB BRAIN THEORY NEU
Fiete IR, 2007, J NEUROPHYSIOL, V98, P2038, DOI 10.1152/jn.01311.2006
Friston KJ, 2005, PHILOS T R SOC B, V360, P815, DOI 10.1098/rstb.2005.1622
Fuster JM, 2001, NEURON, V30, P319, DOI 10.1016/S0896-6273(01)00285-9
GISZTER SF, 1993, J NEUROSCI, V13, P467
Graziano MSA, 2007, NEURON, V56, P239, DOI 10.1016/j.neuron.2007.09.013
Graziano MSA, 2002, NEURON, V36, P349, DOI 10.1016/S0896-6273(02)01003-6
GRIMES DB, 2006, ADV NEURAL INF PROCE, V19, P521
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
Hilgetag CC, 2000, PHILOS T ROY SOC B, V355, P71, DOI 10.1098/rstb.2000.0550
Honey CJ, 2008, HUM BRAIN MAPP, V29, P802, DOI 10.1002/hbm.20579
Honey CJ, 2007, P NATL ACAD SCI USA, V104, P10240, DOI 10.1073/pnas.0701519104
Hoshi E, 2007, CURR OPIN NEUROBIOL, V17, P234, DOI 10.1016/j.conb.2007.02.003
Hubener M, 1997, J NEUROSCI, V17, P9270
Huys R, 2004, MOTOR CONTROL, V8, P188, DOI 10.1123/mcj.8.2.188
JEANNEROD M, 1994, BEHAV BRAIN SCI, V17, P187, DOI 10.1017/S0140525X00034026
JORDAN MI, 1992, COGNITIVE SCI, V16, P307, DOI 10.1016/0364-0213(92)90036-T
Kakei S, 1999, SCIENCE, V285, P2136, DOI 10.1126/science.285.5436.2136
Kang S, 2008, PLOS COMPUT BIOL, V4, DOI 10.1371/journal.pcbi.1000022
KEMERE C, J NEUROPHYS IN PRESS
KOHONEN T, 1996, SELF ORG MAPS
Kording KP, 2007, NAT NEUROSCI, V10, P779, DOI 10.1038/nn1901
Kuniyoshi Y, 2006, BIOL CYBERN, V95, P589, DOI 10.1007/s00422-006-0127-z
LI M, 1996, PHYS REV E, V74
Mulliken GH, 2008, P NATL ACAD SCI USA, V105, P8170, DOI 10.1073/pnas.0802602105
MUSHIAKE H, 1990, EXP BRAIN RES, V82, P208
Mussa-Ivaldi FA, 2000, PHILOS T R SOC B, V355, P1755, DOI 10.1098/rstb.2000.0733
Newell KM, 2001, PSYCHOL REV, V108, P57, DOI 10.1037//0033-295X.108.1.57
Nishimoto R, 2004, NEURAL NETWORKS, V17, P925, DOI 10.1016/j.neunet.2004.02.003
Nishimoto R, 2008, ADAPT BEHAV, V16, P166, DOI 10.1177/1059712308089185
Nolfi S, 2002, CONNECT SCI, V14, P231, DOI 10.1080/0954009021000055703
Paine RW, 2005, ADAPT BEHAV, V13, P211, DOI 10.1177/105971230501300303
Pfeifer R, 2007, SCIENCE, V318, P1088, DOI 10.1126/science.1145803
Poeppel D, 2008, PHILOS T R SOC B, V363, P1071, DOI 10.1098/rstb.2007.2160
Rizzolatti G, 2001, NEURON, V31, P889, DOI 10.1016/S0896-6273(01)00423-8
RIZZOLATTI G, 1988, EXP BRAIN RES, V71, P491, DOI 10.1007/BF00248742
Rumelhart D. E., 1986, PARALLEL DISTRIBUTED
Sakai K, 2003, EXP BRAIN RES, V152, P229, DOI 10.1007/s00221-003-1548-8
SCHILLER PH, 1990, TRENDS NEUROSCI, V13, P392, DOI 10.1016/0166-2236(90)90117-S
Seung HS, 2003, NEURON, V40, P1063, DOI 10.1016/S0896-6273(03)00761-X
Shibata T, 2005, NEURAL NETWORKS, V18, P213, DOI 10.1016/j.neunet.2005.01.001
Smith MA, 2006, PLOS BIOL, V4, P1035, DOI 10.1371/journal.pbio.0040179
Sporns O, 2007, PLOS ONE, V2, DOI 10.1371/journal.pone.0001049
Tani J, 1996, IEEE T SYST MAN CY B, V26, P421, DOI 10.1109/3477.499793
Tani J, 1999, NEURAL NETWORKS, V12, P1131, DOI 10.1016/S0893-6080(99)00060-X
Tani J, 2008, NEURAL NETWORKS, V21, P584, DOI 10.1016/j.neunet.2008.03.008
Tani J, 2008, IEEE T SYST MAN CY B, V38, P43, DOI 10.1109/TSMCB.2007.907738
Thelen E, 2001, BEHAV BRAIN SCI, V24, P1, DOI 10.1017/S0140525X01003910
Thoroughman KA, 2000, NATURE, V407, P742
TOOTELL RB, 1981, SCIENCE, V214, P813, DOI 10.1126/science.7292014
Varela F, 2001, NAT REV NEUROSCI, V2, P229, DOI 10.1038/35067550
VARELA F., 1991, EMBODIED MIND
Vuilleumier P, 2003, NAT NEUROSCI, V6, P624, DOI 10.1038/nn1057
WEINRICH M, 1984, BRAIN, V107, P385, DOI 10.1093/brain/107.2.385
WEINRICH M, 1982, J NEUROSCI, V2, P1329
WOLPERT DM, 1995, SCIENCE, V269, P1880, DOI 10.1126/science.7569931
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Xie XH, 2004, PHYS REV E, V69, DOI 10.1103/PhysRevE.69.041909
NR 67
TC 134
Z9 135
U1 1
U2 16
PU PUBLIC LIBRARY SCIENCE
PI SAN FRANCISCO
PA 185 BERRY ST, STE 1300, SAN FRANCISCO, CA 94107 USA
SN 1553-734X
J9 PLOS COMPUT BIOL
JI PLoS Comput. Biol.
PD NOV
PY 2008
VL 4
IS 11
AR e1000220
DI 10.1371/journal.pcbi.1000220
PG 18
WC Biochemical Research Methods; Mathematical & Computational Biology
SC Biochemistry & Molecular Biology; Mathematical & Computational Biology
GA 380QR
UT WOS:000261480800012
PM 18989398
OA gold
DA 2018-01-22
ER

PT J
AU Yoshida, E
Esteves, C
Belousov, I
Laumond, JP
Sakaguchi, T
Yokoi, K
AF Yoshida, Eiichi
Esteves, Claudia
Belousov, Igor
Laumond, Jean-Paul
Sakaguchi, Takeshi
Yokoi, Kazuhito
TI Planning 3-D Collision-Free Dynamic Robotic Motion Through Iterative
Reshaping
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article; Proceedings Paper
CT IEEE/RSJ International Conference on Intelligent Robots and Systems
CY 2006
CL Beijing, PEOPLES R CHINA
SP IEEE, RSJ
DE Collision avoidance; dynamics; humanoid; motion planning; space
manipulator
ID CONFIGURATION-SPACES; MANIPULATORS; CONSTRAINT; ANIMATION
AB We propose a general and practical planning framework for generating 3-D
collision-free motions that take complex robot dynamics into account. The framework
consists of two stages that are applied iteratively. In the first stage, a
collision-free path is obtained through efficient geometric and kinematic sampling-
based motion planning. In the second stage, the path is transformed into
dynamically executable robot trajectories by dedicated dynamic motion generators.
In the proposed iterative method, those dynamic trajectories are sent back again to
the first stage to check for collisions. Depending on the application, temporal or
spatial reshaping methods are used to treat detected collisions. Temporal reshaping
adjusts the velocity, whereas spatial reshaping deforms the path itself. We
demonstrate the effectiveness of the proposed method through examples of a space
manipulator with highly non-linear dynamics and a humanoid robot executing dynamic
manipulation and locomotion at the same time.
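A schematic sketch of the iterate-until-collision-free loop described above; the geometric planner, dynamics stage, and collision test are hypothetical placeholders, temporal reshaping is reduced to a uniform slow-down, and spatial reshaping is omitted.

"""Schematic of the two-stage iterative reshaping loop (illustrative toy model)."""

def geometric_path(start, goal, n=10):
    """Stage 1 stand-in: straight-line interpolation between two scalar 'configurations'."""
    return [start + (goal - start) * i / (n - 1) for i in range(n)]

def dynamic_trajectory(path, speed):
    """Stage 2 stand-in: attach timestamps; a real system would run a dynamics generator."""
    return [(i / speed, q) for i, q in enumerate(path)]

def in_collision(traj):
    """Hypothetical check: the region around q=0.5 may only be crossed after 3 s."""
    return any(0.4 < q < 0.6 and t < 3.0 for t, q in traj)

def plan_with_reshaping(start, goal):
    path, speed = geometric_path(start, goal), 2.0
    for _ in range(10):                      # iterate between the two stages
        traj = dynamic_trajectory(path, speed)
        if not in_collision(traj):
            return traj
        speed *= 0.5                         # temporal reshaping: slow down and retry
    return None

print(plan_with_reshaping(0.0, 1.0) is not None)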
C1 [Yoshida, Eiichi; Sakaguchi, Takeshi; Yokoi, Kazuhito] Natl Inst Adv Ind Sci &
Technol, Joint French Japanese Robot Lab JRL, CNRS, ST21,AIST IS, Tsukuba, Ibaraki
3058568, Japan.
[Esteves, Claudia] Univ Guanajuato, Fac Matemat, Guanajuato 36000, Gto, Mexico.
[Belousov, Igor] HP Labs, Moscow 115054, Russia.
[Laumond, Jean-Paul] Univ Toulouse, LAAS, CNRS, Joint French Japanese Robot Lab
JRL, F-31077 Toulouse, France.
RP Yoshida, E (reprint author), Natl Inst Adv Ind Sci & Technol, Joint French
Japanese Robot Lab JRL, CNRS, ST21,AIST IS, 1-1-1 Umezono, Tsukuba, Ibaraki
3058568, Japan.
EM e.yoshida@aist.go.jp
RI Yokoi, Kazuhito/K-2046-2012; Yoshida, Eiichi/M-3756-2016; Sakaguchi,
Takeshi/M-4024-2016
OI Yokoi, Kazuhito/0000-0003-3942-2027; Yoshida,
Eiichi/0000-0002-3077-6964; Sakaguchi, Takeshi/0000-0002-2726-7448
CR Baerlocher P, 2004, VISUAL COMPUT, V20, P402, DOI 10.1007/s00371-004-0244-4
Baginski B., 1997, Proceedings of the 1997 IEEE/RSJ International Conference on
Intelligent Robot and Systems. Innovative Robotics for Real-World Applications.
IROS '97 (Cat. No.97CH36108), P1714, DOI 10.1109/IROS.1997.656591
BELOUSOV I, 1995, P 7 INT C ADV ROB IC, P195
BOBROW JE, 1985, INT J ROBOT RES, V4, P3, DOI 10.1177/027836498500400301
Brock O, 2002, INT J ROBOT RES, V21, P1031, DOI 10.1177/0278364902021012002
Choset H., 2006, PRINCIPLES ROBOT MOT
DONALD B, 1993, J ACM, V40, P1048, DOI 10.1145/174147.174150
DUBINS LE, 1957, AM J MATH, V79, P497, DOI 10.2307/2372560
Esteves C, 2006, ACM T GRAPHIC, V25, P319, DOI 10.1145/1138450.1138457
FERRE E, 2004, P IEEE INT C ROB AUT, P3149
Gleicher M, 2001, GRAPH MODELS, V63, P107, DOI 10.1006/gmod.2001.0549
HARADA H, 2005, P 2005 IEEE INT C RO, P1712
Hirzinger G., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P898, DOI 10.1109/ROBOT.2000.844163
Hsu D, 1999, INT J COMPUT GEOM AP, V9, P495, DOI 10.1142/S0218195999000285
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
Khatib M, 1997, IEEE INT CONF ROBOT, P2920, DOI 10.1109/ROBOT.1997.606730
KUFFNER J, 1998, PHYSIQUAL MAR
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
Lamiraux F, 2004, IEEE T ROBOTIC AUTOM, V20, P967, DOI 10.1109/TRO.2004.829459
Laumond JP, 2006, IEEE ROBOT AUTOM MAG, V13, P90, DOI 10.1109/MRA.2006.1638020
LAVALLE S, 2002, ALGORITHMIC COMPUTAT, P293
LaValle SM, 2001, INT J ROBOT RES, V20, P378, DOI 10.1177/02783640122067453
LaValle S. M., 2006, PLANNING ALGORITHM
MORISAWA M, 1994, P IEEE INT C ROB AUT, P3989
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
NISHIWAKI K, 1972, P IEEE INT C ROB AUT, P2667
Oda M., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE International
Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065),
P914, DOI 10.1109/ROBOT.2000.844165
Parent R., 2002, COMPUTER ANIMATION A
Quinlan S., 1993, Proceedings IEEE International Conference on Robotics and
Automation (Cat. No.93CH3247-4), P802, DOI 10.1109/ROBOT.1993.291936
Sentis L, 2006, IEEE INT CONF ROBOT, P2641, DOI 10.1109/ROBOT.2006.1642100
SHIN KG, 1985, IEEE T AUTOMAT CONTR, V30, P531
TAKUBO T, 2008, P IEEE RSJ INT C INT, P1180
Yamane K, 2003, IEEE T VIS COMPUT GR, V9, P352, DOI 10.1109/TVCG.2003.1207443
YANG Y, 2006, P ROB SCI SYST PHIL
YOSHIDA E, 2006, P IEEE RSJ INT C INT, P827
Yoshida E., 2005, P 2005 IEEE RAS INT, P1
YOSHIDA E, 2006, P IEEE RAS INT C HUM, P208
YOSHIDA E, 2006, J APPL BIONICS BIOME, V3, P227
NR 41
TC 47
Z9 48
U1 0
U2 9
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD OCT
PY 2008
VL 24
IS 5
SI SI
BP 1186
EP 1198
DI 10.1109/TRO.2008.2002312
PG 13
WC Robotics
SC Robotics
GA 371XG
UT WOS:000260865400023
DA 2018-01-22
ER

PT J
AU Gutmann, JS
Fukuchi, M
Fujita, M
AF Gutmann, Jens-Steffen
Fukuchi, Masaki
Fujita, Masahiro
TI 3D perception and environment map generation for humanoid robot
navigation
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE humanoid robot navigation; 3D environment perception; range image
segmentation; stereo vision
ID FAST SEGMENTATION; RANGE IMAGES; ALGORITHM; VISION; INDOOR
AB A humanoid robot that can go up and down stairs, crawl underneath obstacles or
simply walk around requires reliable perceptual capabilities for obtaining accurate
and useful information about its surroundings. In this work we present a system for
generating three-dimensional (3D) environment maps from data taken by stereo
vision. At the core is a method for precise segmentation of range data into planar
segments based on the algorithm of scan-line grouping extended to cope with the
noise dynamics of stereo vision. In off-line experiments we demonstrate that our
extensions achieve a more precise segmentation. When compared to a previously
developed patchlet method, we obtain a richer segmentation with a higher accuracy
while also requiring far less computation. From the obtained segmentation we then
build a 3D environment map using occupancy grid and floor height maps. The
resulting representation classifies areas into one of six different types while
also providing object height information. We apply our perception method for the
navigation of the humanoid robot QRIO and present experiments of the robot stepping
through narrow space, walking up and down stairs and crawling underneath a table.
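An illustrative sketch (not the QRIO system) of the final mapping step: assigning navigation-relevant labels to grid cells from plane-segment height statistics. The thresholds, label names, and the reduced label set below are assumptions.

"""Toy cell classification for a floor-height map (illustrative thresholds)."""

def classify_cell(height, ceiling=None, robot_height=0.58, step_height=0.04):
    """Map a cell's surface height (and optional overhang) to a navigation label."""
    if ceiling is not None and ceiling - height < robot_height:
        return "tunnel"          # space underneath an object: crawl or avoid
    if abs(height) <= step_height:
        return "floor"
    if height <= 3 * step_height:
        return "stair"           # reachable by stepping up
    return "obstacle"

cells = {"flat": 0.0, "step": 0.10, "table_top": 0.70, "under_table": (0.0, 0.45)}
for name, h in cells.items():
    if isinstance(h, tuple):
        print(name, "->", classify_cell(h[0], ceiling=h[1]))
    else:
        print(name, "->", classify_cell(h))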
C1 [Gutmann, Jens-Steffen; Fukuchi, Masaki; Fujita, Masahiro] Sony Corp,
Intelligent Syst Res Lab, Tokyo, Japan.
RP Gutmann, JS (reprint author), Sony Corp, Intelligent Syst Res Lab, Tokyo, Japan.
EM masahirof@jp.sony.com
CR Checchin P, 1997, INTERNATIONAL CONFERENCE ON RECENT ADVANCES IN 3-D DIGITAL
IMAGING AND MODELING, PROCEEDINGS, P156, DOI 10.1109/IM.1997.603861
CHESTNUTT J, 2003, INT C HUM ROB HUM KA
DAVISON A, 2003, INT C COMP VIS ICCV, P173
Duda R., 1973, PATTERN CLASSIFICATI
FITZGIBBON AW, 1994, EUR C COMP VIS STOCK, P173
FUJITA M, 2003, INT C ADV INT MECH A, P938
GEE A, 2007, BRIT MACH VIS C BMVC
GUTMANN J, 2005, INT JOINT C ART INT, P1232
GUTMANN JS, 2005, INT C ROB AUT ICRA B
GUTMANN JS, 2004, S INT AUT VEH IAV LI
GUTMANN JS, 2005, INT C HUM ROB HUM TS, P26
GUTMANN JS, 2004, INT C INT ROB SYST I
Hahnel D, 2003, ROBOT AUTON SYST, V44, P15, DOI 10.1016/S0921-8890(03)00007-1
Haindl M, 1998, INT C PATT RECOG, P985, DOI 10.1109/ICPR.1998.711853
Hoover A, 1996, IEEE T PATTERN ANAL, V18, P673, DOI 10.1109/34.506791
IOCCHI L, 2000, INT S EXP ROB ISER H
JIANG XY, 1994, MACH VISION APPL, V7, P115, DOI 10.1007/BF01215806
Kagami S, 2003, IEEE INT CONF ROBOT, P2141, DOI 10.1109/ROBOT.2003.1241910
KANEHIRO F, 2005, INT C ROB AUT ICRA B, P1084
Karlsson N, 2005, IEEE INT CONF ROBOT, P24
Lakaemper R, 2006, INT C PATT RECOG, P1077
Li TY, 2003, IEEE INT CONF ROBOT, P3421
LORCH O, 2002, INT C INT ROB SYST I, P2484
LU E, 1995, THESIS U TORONTO
Michel P., 2007, IEEE RSJ INT C INT R, P463
MURRAY D, 2004, INT C INT ROB SYST I, P3116
MURRAY D, 2003, THESIS U BRIT COLUMB
NUCHTER A, 2003, INT C REC ADV 3D DIG, P394
Okada K., 2001, IEEE INT C ROB AUT, P2120
Sabe K, 2004, IEEE INT CONF ROBOT, P592, DOI 10.1109/ROBOT.2004.1307213
Shiller Z, 2001, IEEE INT CONF ROBOT, P1, DOI 10.1109/ROBOT.2001.932521
Thrun S, 2004, IEEE T ROBOTIC AUTOM, V20, P433, DOI 10.1109/TRA.2004.825520
Triebel R., 2006, INT C INT ROB SYST I
Vidal R, 2005, IEEE T PATTERN ANAL, V27, P1945, DOI 10.1109/TPAMI.2005.244
Weingarten JW, 2004, IEEE INT CONF ROBOT, P927, DOI 10.1109/ROBOT.2004.1307268
NR 35
TC 56
Z9 58
U1 2
U2 17
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD OCT
PY 2008
VL 27
IS 10
BP 1117
EP 1134
DI 10.1177/0278364908096316
PG 18
WC Robotics
SC Robotics
GA 350ZF
UT WOS:000259391800002
DA 2018-01-22
ER

PT J
AU Zhao, HL
Sugisaka, M
AF Zhao, Huailin
Sugisaka, Masanori
TI Simulation study of CMAC control for the robot joint actuated by
McKibben muscles
SO APPLIED MATHEMATICS AND COMPUTATION
LA English
DT Article
DE CMAC; McKibben muscle; robot joint; simulation
AB The paper briefly introduces the CMAC model and explains how to train the CMAC
off-line using data derived from fuzzy control theory. Two control systems based on
the CMAC are then designed for controlling a humanoid robot joint actuated by a pair
of McKibben muscles: one uses the CMAC alone, and the other combines the CMAC with a
PID controller. Simulations with input functions of different frequencies show that
both control systems are stable. The paper also presents the on-line learning
algorithms of the two controllers and compares them with each other. (C) 2008
Elsevier Inc. All rights reserved.
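A minimal generic CMAC sketch, assuming a tiling-based table with LMS updates; the sizes, input range, learning rate, and the sine "teacher" signal used for off-line training are assumptions, not the authors' fuzzy-control data or joint model.

"""Minimal CMAC: overlapping tilings map an input to weights whose sum is the output."""
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, n_bins=32, x_min=0.0, x_max=1.0, lr=0.1):
        self.n_tilings, self.n_bins, self.lr = n_tilings, n_bins, lr
        self.x_min, self.x_max = x_min, x_max
        self.w = np.zeros((n_tilings, n_bins + 1))   # +1 column absorbs the tiling offset

    def _cells(self, x):
        """Active cell index in each tiling (each tiling is offset by a fraction of a bin)."""
        u = (x - self.x_min) / (self.x_max - self.x_min) * self.n_bins
        return [(t, int(u + t / self.n_tilings)) for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, c] for t, c in self._cells(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, c in self._cells(x):           # LMS update spread over the active cells
            self.w[t, c] += self.lr * err / self.n_tilings

# Off-line training against a stand-in "teacher" signal (a sine, not fuzzy-control data).
cmac = CMAC()
for _ in range(2000):
    x = np.random.rand()
    cmac.train(x, np.sin(2 * np.pi * x))
print(round(cmac.predict(0.25), 2), "~", round(np.sin(2 * np.pi * 0.25), 2))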
C1 [Zhao, Huailin] Shanghai Inst Technol, Sch Mech & Automat Engn, Shanghai,
Peoples R China.
[Zhao, Huailin; Sugisaka, Masanori] Oita Univ, Dept Elect & Elect Engn, Oita,
Japan.
RP Zhao, HL (reprint author), Shanghai Inst Technol, Sch Mech & Automat Engn,
Shanghai, Peoples R China.
EM zhao_Huailin@yahoo.com; msugi@cc.oita-u.ac.jp
FU Department of Science and Technology, Henan Province, China
[072102240022]
FX This study is supported by the project from Department of Science and
Technology, Henan Province, China (the Project No. 072102240022)
CR ALBUS JS, 1975, J DYNAMIC SYSTEMS ME, V7, P220
Dou Z., 1995, FUZZY LOGIC CONTROL
SABOURIN C, 2005, ROBOTICS AUTONOMOUS, V38, P81
Yan Pingfan, 1998, NEURAL NETWORK FUZZY
ZHAO H, 2006, P 11 INT S ART LIF R, P248
NR 5
TC 8
Z9 9
U1 0
U2 6
PU ELSEVIER SCIENCE INC
PI NEW YORK
PA 360 PARK AVE SOUTH, NEW YORK, NY 10010-1710 USA
SN 0096-3003
J9 APPL MATH COMPUT
JI Appl. Math. Comput.
PD SEP 1
PY 2008
VL 203
IS 1
BP 457
EP 462
DI 10.1016/j.amc.2008.05.021
PG 6
WC Mathematics, Applied
SC Mathematics
GA 343DL
UT WOS:000258832700054
DA 2018-01-22
ER

PT J
AU Mayer, NM
Asada, M
AF Mayer, N. Michael
Asada, Minoru
TI ROBOCUP HUMANOID CHALLENGE
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article; Proceedings Paper
CT 1st Workshop on Humanoid Soccer Robots
CY DEC 04, 2006
CL Genoa, ITALY
DE History of RoboCup Soccer Humanoid League; current challenges of RoboCup
Soccer Humanoid League
ID SOCCER WORLD CHAMPIONSHIPS
AB We describe here the humanoid challenge that is part of the RoboCup robot soccer
competitions. We focus on how relevant research issues of humanoid robotics - for
example, biped walking and human-like sensors and actuators - can be addressed, and
we investigate how teams proceed to solve the given tasks. Thus, new technologies
like artificial muscles and artificial skin might find their way into the
competition very soon. We detail examples of these technologies and discuss in
which way they may contribute to RoboCup, and in return how RoboCup may serve as a
benchmark for achievements within these technologies. Further, we describe how
RoboCup works as an open, worldwide cooperative project in robotics and AI.
C1 [Mayer, N. Michael] Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst, Asada
Synergist Intelligence Project,ERATO JST, Suita, Osaka 5650871, Japan.
Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst, Emergent Robot Area, Suita,
Osaka 5650871, Japan.
RP Mayer, NM (reprint author), Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst,
Asada Synergist Intelligence Project,ERATO JST, 2-2 Yamada Oka, Suita, Osaka
5650871, Japan.
EM michael@jeap.org; asada@jeap.org
RI Mayer, Norbert Michael/F-7958-2010
CR Asada M, 1999, ARTIF INTELL, V110, P193, DOI 10.1016/S0004-3702(99)00024-7
Asada M., 2003, AI Magazine, V24, P21
Asada M, 2000, AI MAG, V21, P9
ASADA M, 2006, 5 INT C DEV LEARN IC
ASADA M, 1999, LECT NOTE ARTIFICIAL, V1604
BIRK A, 2002, LECT NOTE ARTIFICIAL, V2377
BOEDECKER J, 2005, P 3 INT S AUT MIN RE
BONARINI BBA, 2004, LECT NOTE ARTIFICIAL, V3020
CORADESCHI S, 2000, AI MAG, V21, P11
FABER F, 2007, P IEEE RAS 7 INT C H
HASEGAWA Y, 2007, ISEF 2007
Hosoda K, 2006, ROBOT AUTON SYST, V54, P104, DOI 10.1016/j.robot.2005.09.019
HOSODA K, 2007, P IEEE RSJ INT C INT
KAMINKA G, 2003, LECT NOTE ARTIFICIAL, V2752
Kitano H, 2000, ADV ROBOTICS, V13, P723
Kitano H, 1997, AI MAG, V18, P73
KITANO H, 1998, LECT NOTE ARTIFICIAL, V1395
KRATZ R, 2006, P 4 IFAC S MACH SYST
MATSUMURA R, 2007, ROBOCUP 2007 S PAP T
MAYER NM, 2006, ROBOCUP 2006 S PAP T
MAYER NM, 2007, ROBOCUP S P CD ROM
MAYER NM, 2006, ROBOCUP 2006 ROBOT S, V10, P25
MISHIMA M, 2007, P 6 INT S LIN DRIV I
Noda I., 1998, AI Magazine, V19, P49
Ohmura Y, 2006, IEEE INT CONF ROBOT, P1348, DOI 10.1109/ROBOT.2006.1641896
SAMMUT C, 2005, LECT NOTE ARTIFICIAL, V3276
Stone P, 2001, AI MAG, V22, P11
STONE P, 2001, LECT NOTE ARTIFICIAL, V2019
Veloso M, 2002, AI MAG, V23, P55
VELOSO M, 2000, LECT NOTE ARTIFICIAL, V1856
Yoshikawa Y, 2003, CONNECT SCI, V15, P245, DOI 10.1080/09540090310001655075
Zhou CJ, 2004, ADV ROBOTICS, V18, P721, DOI 10.1163/1568553041719474
NR 32
TC 3
Z9 3
U1 0
U2 3
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD SEP
PY 2008
VL 5
IS 3
BP 335
EP 351
DI 10.1142/S0219843608001455
PG 17
WC Robotics
SC Robotics
GA 392FW
UT WOS:000262289300002
DA 2018-01-22
ER

PT J
AU Matsumura, R
Ishiguro, H
AF Matsumura, Reo
Ishiguro, Hiroshi
TI DEVELOPMENT OF A HIGH-PERFORMANCE HUMANOID SOCCER ROBOT
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
CT 1st Workshop on Humanoid Soccer Robots
CY DEC 04, 2006
CL Genoa, ITALY
DE Humanoid soccer robot; RoboCup; Team Osaka; design policy
AB The RoboCup, which is a worldwide robot soccer competition, has set an ambitious
goal for itself: to have a humanoid robot team win against human teams in World Cup
Soccer by 2050. In order to achieve this goal, the robots require highly
sophisticated sensory-data processing and decision-making functions. The
development of robots for the RoboCup Humanoid League also has significant meaning
for the development of robotics. However, this development is not easy and there
are few papers covering it and its design policy. This paper reports the design
policy for humanoids developed by Team Osaka, whose robots have been selected as
the best humanoid robots four times in the last four years. In addition to the
design policy, this paper also reports on the developmental process and comparisons
among humanoid versions developed by Team Osaka. We believe that this paper will
offer much information to other researchers who are developing humanoids for the
RoboCup.
C1 [Matsumura, Reo; Ishiguro, Hiroshi] Osaka Univ, Team Osaka, Dept Adapt Machine
Syst, Suita, Osaka 5650871, Japan.
RP Matsumura, R (reprint author), Osaka Univ, Team Osaka, Dept Adapt Machine Syst,
2-2 Yamadaoka, Suita, Osaka 5650871, Japan.
EM reo.matsumura@ed.ams.eng.osaka-u.ac.jp;
ishiguro@ed.ams.eng.osaka-u.ac.jp
CR FRIEDMANN M, 2006, P WORKSH HUM SOCC RO, P9
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
HIRUKAWA H, 2003, P IEEE RAS INT C HUM
MAYER NM, 2007, ROBOCUP S 2007 CD RO
Sakagami Y., 2002, IEEE RSJ INT C INT R, P2478, DOI DOI
10.1109/IRDS.2002.1041641
NR 5
TC 4
Z9 4
U1 0
U2 3
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD SEP
PY 2008
VL 5
IS 3
BP 353
EP 373
DI 10.1142/S0219843608001467
PG 21
WC Robotics
SC Robotics
GA 392FW
UT WOS:000262289300003
DA 2018-01-22
ER

PT J
AU Friedmann, M
Kiener, J
Petters, S
Thomas, D
Von Stryk, O
Sakamoto, H
AF Friedmann, Martin
Kiener, Jutta
Petters, Sebastian
Thomas, Dirk
Von Stryk, Oskar
Sakamoto, Hajime
TI VERSATILE, HIGH-QUALITY MOTIONS AND BEHAVIOR CONTROL OF A HUMANOID
SOCCER ROBOT
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article; Proceedings Paper
CT 1st Workshop on Humanoid Soccer Robots
CY DEC 04, 2006
CL Genoa, ITALY
DE Humanoid soccer robots; humanoid walking optimization; behavior control.
AB Autonomous soccer games represent an extraordinary challenge for autonomous
humanoid robots, which must act quickly and remain stable while carrying all of the
needed onboard computers, sensors, and batteries. In this paper, the development and
system integration of the hardware and software modules of the 55-cm tall,
autonomous humanoid soccer robot Bruno are described to cope with this challenge.
Although the robot is based on a "minimalistic" design that uses only gyroscopes in
the hip and no foot-ground contact sensors for balance control, versatile and
high-quality walking motions have been developed. Fast forward walking of about
1.5 km/h has been obtained using
an efficient sequential surrogate optimization method and walking through uneven
terrain with a newly designed passively compliant foot sole. Further modules of the
software and control architecture which are needed for an adaptive selection of
different motions and autonomous robot behavior are briefly described. Experimental
results are reported, which have been obtained under the conditions of a live
competition. The robot's hardware is mainly based on standard components which can
therefore be easily adapted by new designers, as no comparable, standard humanoid
robot platforms are available.
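As a hedged illustration of the sequential surrogate idea mentioned above (not the authors' optimizer), the loop below fits a cheap quadratic model to a few expensive evaluations of a hypothetical walking-cost function and re-evaluates the true objective only at the surrogate's minimizer.

"""Toy sequential surrogate optimization of one walking parameter (illustrative only)."""
import numpy as np

def walking_cost(step_len):
    """Hypothetical expensive objective (e.g., inverse of measured speed plus a penalty)."""
    return (step_len - 0.3) ** 2 + 0.05 * np.sin(20 * step_len)

# Start from a few expensive evaluations, then repeatedly fit a quadratic surrogate
# and evaluate the true objective only at the surrogate's minimizer.
xs = list(np.linspace(0.1, 0.5, 4))
ys = [walking_cost(x) for x in xs]
for _ in range(8):
    a, b, c = np.polyfit(xs, ys, 2)            # cheap quadratic model of the data so far
    x_new = np.clip(-b / (2 * a), 0.1, 0.5) if a > 0 else np.random.uniform(0.1, 0.5)
    xs.append(float(x_new))
    ys.append(walking_cost(xs[-1]))
best = xs[int(np.argmin(ys))]
print("best step length found:", round(best, 3))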
C1 [Friedmann, Martin; Kiener, Jutta; Petters, Sebastian; Thomas, Dirk; Von Stryk,
Oskar] Tech Univ Darmstadt, Syst Optimizat & Robot Grp, D-64289 Darmstadt, Germany.
[Sakamoto, Hajime] Hajime Res Inst Ltd, Nishiyodogawa Ku, Osaka 5550043, Japan.
RP Friedmann, M (reprint author), Tech Univ Darmstadt, Syst Optimizat & Robot Grp,
Hochschulstr 10, D-64289 Darmstadt, Germany.
EM friedmann@sim.tu-darmstadt.de; kiener@sim.tu-darmstadt.de;
petters@sim.tu-darmstadt.de; dthomas@sim.tu-darmstadt.de;
stryk@sim.tu-darmstadt.de; sakamoto@hajimerobot.co.jp
CR Adams B, 2000, IEEE INTELL SYST APP, V15, P25, DOI 10.1109/5254.867909
Chevallereau C, 2001, ROBOTICA, V19, P557
FABER F, 2007, P 7 IEEE RAS INT C H
Fox D, 1999, SIXTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-
99)/ELEVENTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE (IAAI-99), P343
FRIEDMANN M, 2006, P IFAC S MULT SYST S, P51
Hardt M, 2003, Z ANGEW MATH MECH, V83, P648, DOI 10.1002/zamm.200310068
Hemker T., 2006, CLAWAR 2006, P614
HIROSE M, 2001, IEEE RSJ INT C INT R, V2
HU L, 2006, P IEEE INT C INT ROB, P362
Kanehiro F, 2003, IEEE INT CONF ROBOT, P1633, DOI 10.1109/ROBOT.2003.1241828
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Khatib O., 2004, INT J HUM ROBOT, V1, P29, DOI [10.1142/S0219843604000058, DOI
10.1142/S0219843604000058]
KIM JY, 2005, IEEE INT C ROB AUT I, P1443
KRATZ R, 2005, COMMUNICATION
LOEFFLER K, 2003, ICRA 2003, P484
LOETZSCH M, 2006, IEEE RSJ INT C INT R, P5124, DOI DOI 10.1109/IROS.2006.282605
NAGASAKA K, 2004, IEEE INT C ROB AUT I, V4, P3189
NAKAOKA S, 2003, IEEE INT C ROB AUT I, V3, P3905
NISHIWAKI K, 2007, P 7 IEEE RAS INT C H
NISHIWAKI K, 2004, IEEE RAS INT C HUM R, V2, P672
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
WOLLHERR D, 2002, IEEE INT C INT ROB S, V3, P2491
NR 22
TC 8
Z9 9
U1 0
U2 1
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD SEP
PY 2008
VL 5
IS 3
BP 417
EP 436
DI 10.1142/S0219843608001509
PG 20
WC Robotics
SC Robotics
GA 392FW
UT WOS:000262289300006
DA 2018-01-22
ER

PT J
AU Nenchev, DN
Nishio, A
AF Nenchev, Dragomir N.
Nishio, Akinori
TI Ankle and hip strategies for balance recovery of a biped subjected to an
impact
SO ROBOTICA
LA English
DT Article
DE biped robot; external disturbance; balance recovery; reaction null-space
method
ID POSTURAL MOVEMENTS
AB A humanoid robot should be able to keep balance even in the presence of
disturbing forces. Studies of human body reaction patterns to sudden external
forces (impacts) are useful for developing balance control strategies. In this
paper, we show how to implement two such reaction patterns, called ankle and hip
strategy, using a small humanoid robot. Simple dynamical models in the sagittal
plane are employed. The decision for invoking one of the reaction patterns is based
on acceleration data measured during the impact. The experiments confirm that the
robot is able to react swiftly, similar to a human.
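A toy decision rule illustrating the idea of selecting between the ankle and hip strategies from the acceleration measured during the impact; the thresholds and units are assumptions, not the values used on the robot.

"""Toy selector between 'ankle' and 'hip' balance strategies from measured peak acceleration."""

def select_strategy(peak_accel, small=0.8, large=2.5):
    """Hypothetical thresholds in m/s^2: small pushes -> ankle torque only,
    larger pushes -> bend at the hip, beyond that a step would be required."""
    if peak_accel < small:
        return "ankle strategy"
    if peak_accel < large:
        return "hip strategy"
    return "stepping (outside the scope of these two strategies)"

for a in (0.3, 1.5, 4.0):
    print(a, "->", select_strategy(a))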
C1 [Nenchev, Dragomir N.; Nishio, Akinori] Musashi Inst Technol, Dept Mech Syst
Engn, Tokyo 1588557, Japan.
RP Nenchev, DN (reprint author), Musashi Inst Technol, Dept Mech Syst Engn, Tokyo
1588557, Japan.
EM nenchev@sc.musashi-tech.ac.jp
CR Abdallah M., 2005, P 2005 IEEE INT C RO
Azevedo C, 2004, ROBOT AUTON SYST, V47, P203, DOI 10.1016/j.robot.2004.03.013
*FUJ AUT CO LTD, 2004, MIN HUM ROB HOAP 2
Fujiwara K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2521, DOI 10.1109/IRDS.2002.1041648
Gorce P, 1999, IEEE T SYST MAN CY A, V29, P616, DOI 10.1109/3468.798065
Guihard M, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2587, DOI 10.1109/IRDS.2002.1041660
Gutmann J.-S., 2004, P 2004 IEEE INT C IN, P1407
HARADA K, 2005, P IEEE INT C ROB AUT, P1712
Harada K., 2004, P IEEE RSJ INT C INT, P1167
HORAK FB, 1986, J NEUROPHYSIOL, V55, P1369
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kajita S., 2006, P IEEE RSJ INT C INT, P2993
KANEKO K, 2006, P 2006 IEEE INT C IN, P5496
KUO AD, 1995, IEEE T BIO-MED ENG, V42, P87, DOI 10.1109/10.362914
MORISAWA M, 2006, P 2006 IEEE INT C IN, P2986
NASHNER LM, 1985, BEHAV BRAIN SCI, V8, P135, DOI 10.1017/S0140525X00020008
NENCHEV D, 1992, IEEE T ROBOTIC AUTOM, V8, P1, DOI 10.1109/70.127234
Nenchev DN, 1999, IEEE T ROBOTIC AUTOM, V15, P548, DOI 10.1109/70.768186
Nenchev DN, 1999, IEEE T ROBOTIC AUTOM, V15, P1011, DOI 10.1109/70.817666
NISHIO A, 2006, P 2006 IEEE INT C IN, P1996
SHUMWAY-COOK A, 1989, Seminars in Hearing, V10, P196
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
Tanaka T., 2006, P 2006 IEEE INT C IN, P3970
TORRES MA, 1993, PROCEEDINGS : IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-3, P812, DOI 10.1109/ROBOT.1993.292077
VAFA Z, 1987, THESIS MIT
NR 25
TC 36
Z9 36
U1 1
U2 9
PU CAMBRIDGE UNIV PRESS
PI NEW YORK
PA 32 AVENUE OF THE AMERICAS, NEW YORK, NY 10013-2473 USA
SN 0263-5747
EI 1469-8668
J9 ROBOTICA
JI Robotica
PD SEP-OCT
PY 2008
VL 26
BP 643
EP 653
DI 10.1017/S0263574708004268
PN 5
PG 11
WC Robotics
SC Robotics
GA 354EI
UT WOS:000259621300008
DA 2018-01-22
ER

PT J
AU Mitsunaga, N
Smith, C
Kanda, T
Ishiguro, H
Hagita, N
AF Mitsunaga, Noriaki
Smith, Christian
Kanda, Takayuki
Ishiguro, Hiroshi
Hagita, Norihiro
TI Adapting robot behavior for human-robot interaction
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE behavior adaptation; human-robot interactions; policy gradient
reinforcement learning (PGRL); proxemics
AB Human beings subconsciously adapt their behaviors to a communication partner in
order to make interactions run smoothly. In human-robot interactions, not only the
human but also the robot is expected to adapt to its partner. Thus, to facilitate
human-robot interactions, a robot should be able to read subconscious comfort and
discomfort signals from humans and adjust its behavior accordingly, just like a
human would. However, most previous research works expected the human to
consciously give feedback, which might interfere with the aim of interaction. We
propose an adaptation mechanism based on reinforcement learning that reads
subconscious body signals from a human partner, and uses this information to adjust
interaction distances, gaze meeting, and motion speed and timing in human-robot
interactions. The mechanism uses gazing at the robot's face and human movement
distance as subconscious body signals that indicate a human's comfort and
discomfort. A pilot study with a humanoid robot that has ten interaction behaviors
has been conducted. The results from 12 subjects suggest that the proposed
mechanism enables autonomous adaptation to individual preferences. Detailed
discussion and conclusions are also presented.
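A generic finite-difference policy-gradient sketch in the spirit of the PGRL approach described above; the reward stand-in, step sizes, and "preferred" values are hypothetical and do not reproduce the paper's behavior set or body-signal reader.

"""Finite-difference policy-gradient adaptation of interaction parameters (toy example)."""
import numpy as np

rng = np.random.default_rng(1)
preferred = np.array([0.8, 0.5])           # hidden "comfortable" distance (m) and speed

def reward(params):
    """Stand-in for reading subconscious signals: higher when closer to the preference."""
    return -np.sum((params - preferred) ** 2) + rng.normal(scale=0.001)

theta = np.array([1.5, 1.0])               # initial interaction distance and motion speed
step, eps = 0.1, 0.05
for _ in range(200):
    delta = rng.choice([-eps, eps], size=theta.shape)   # perturb each parameter
    grad = (reward(theta + delta) - reward(theta - delta)) / (2 * delta)
    theta += step * grad                                 # gradient ascent on the reward
print("adapted parameters:", np.round(theta, 2))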
C1 [Mitsunaga, Noriaki] Kanazawa Inst Technol, Dept Robot, Kanazawa, Ishikawa
9218501, Japan.
[Mitsunaga, Noriaki; Kanda, Takayuki; Ishiguro, Hiroshi; Hagita, Norihiro]
Intelligent Robot & Commun Lab, ATR, Kyoto 6190288, Japan.
[Smith, Christian] Royal Inst Technol, Sch Comp Sci & Commun, SE-10044
Stockholm, Sweden.
[Ishiguro, Hiroshi] Osaka Univ, Grad Sch Engn, Osaka 5650871, Japan.
RP Mitsunaga, N (reprint author), Kanazawa Inst Technol, Dept Robot, Kanazawa,
Ishikawa 9218501, Japan.
EM mitunaga@neptune.kanazawa-it.ac.jp; ccs@nada.kth.se; kanda@atr.jp;
ishiguro@ams.eng.osaka-u.ac.jp; hagita@atr.jp
RI Kanda, Takayuki/I-5843-2016; Smith, Christian/C-5228-2013
OI Kanda, Takayuki/0000-0002-9546-5825; Smith,
Christian/0000-0003-2078-8854
CR Baxter J, 2001, J ARTIF INTELL RES, V15, P319
Duncan S. D., 1977, FACE TO FACE INTERAC
HALL Edward T., 1966, HIDDEN DIMENSION
INAMURA T, P 1999 INT C ADV ROB, P523
Isbell C. L. Jr., 2001, Proceedings of the Fifth International Conference on
Autonomous Agents, P377, DOI 10.1145/375735.376334
Kanda T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1848, DOI 10.1109/ROBOT.2002.1014810
Kanda T., 2003, P INT JOINT C ART IN, P177
Kohl N, 2004, PROCEEDING OF THE NINETEENTH NATIONAL CONFERENCE ON ARTIFICIAL
INTELLIGENCE AND THE SIXTEENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL
INTELLIGENCE, P611
KOHL N, 2004, P IEEE INT C ROB AUT, V3, P2619
Nakamura Y, 1999, NEUROSCI RES, V35, P95, DOI 10.1016/S0168-0102(99)00071-1
Nakauchi Y, 2002, AUTON ROBOT, V12, P313, DOI 10.1023/A:1015273816637
Nass C., 1996, MEDIA EQUATION PEOPL
SUNDSTROM E, 1976, HUM ECOL, V4, P47, DOI 10.1007/BF01531456
Sutton RS, 2000, ADV NEUR IN, V12, P1057
TASAKI T, P INT WORKSH ROB HUM, P81
Tickle-Degnen L, 1990, PSYCHOL INQ, V1, P285, DOI DOI 10.1207/S15327965PLI0104_
WATKINS CJCH, 1992, MACH LEARN, V8, P279, DOI 10.1023/A:1022676722315
NR 17
TC 30
Z9 31
U1 0
U2 10
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2008
VL 24
IS 4
BP 911
EP 916
DI 10.1109/TRO.2008.926867
PG 6
WC Robotics
SC Robotics
GA 340AH
UT WOS:000258617900014
DA 2018-01-22
ER

PT J
AU Guan, Y
Yokoi, K
Zhang, X
AF Guan, Yisheng
Yokoi, Kazuhito
Zhang, Xianmin
TI Numerical methods for reachable space generation of humanoid robots
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE humanoid robot; humanoid manipulation; reachable boundary; workspace;
Monte Carlo method
ID PARALLEL MANIPULATORS; WORKSPACE GENERATION
AB In view of the importance of workspace to robotic design, motion planning and
control, we study humanoid reachable spaces. Due to the large number of degrees of
freedom, the complexity and special characteristics of humanoid robots that
conventional robots do not possess, it would be very difficult or impractical to
use analytical or geometric methods to analyze and obtain humanoid reachable
spaces. In this paper, we develop two numerical approaches - the optimization-based
method and the Monte Carlo method - to generate the reachable space of a humanoid
robot. We first formulate the basic constraints (including kinematic constraint and
balance constraint) that a humanoid robot must satisfy in manipulation tasks. We
then use optimization techniques to build mathematical models for boundary points
by which the reachable boundary is formed. This method gives rise to an
approximation of the reachable space with accurate boundary points. On the other
hand, the Monte Carlo method is relatively simple and more suitable for the
visualization of the robotic workspace. To utilize the numerical results obtained by the Monte
Carlo method, we propose an approach to build a database. We present the algorithms
with these two methods and provide illustrating examples conducted on the humanoid
HRP-2.
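A generic Monte Carlo workspace sketch for a planar two-link arm, only to illustrate sampling joint configurations and binning reachable end-effector points into a coarse database; the link lengths and joint limits are assumptions, and the humanoid balance constraint is omitted.

"""Monte Carlo approximation of a reachable space for a planar two-link arm (toy example)."""
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 0.30, 0.25                                  # hypothetical link lengths (m)
lim1, lim2 = (-np.pi / 2, np.pi / 2), (0.0, 2.5)     # hypothetical joint limits (rad)

def forward_kinematics(q1, q2):
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return x, y

# Sample random joint configurations and keep the resulting end-effector positions.
samples = 20000
q1 = rng.uniform(*lim1, samples)
q2 = rng.uniform(*lim2, samples)
points = np.column_stack(forward_kinematics(q1, q2))

# A coarse "database" in the spirit of the paper's proposal: bin points into reachable cells.
grid = set(map(tuple, np.floor(points / 0.05).astype(int)))
print(f"{len(grid)} reachable 5 cm cells from {samples} samples")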
C1 [Guan, Yisheng; Yokoi, Kazuhito] Natl Inst Adv Ind Sci & Technol, JRL,
Intelligent Syst Res Inst, Tsukuba, Ibaraki 3058568, Japan.
[Zhang, Xianmin] S China Univ Technol, Sch Mech Engn, Guangzhou 510640,
Guangdong, Peoples R China.
RP Guan, Y (reprint author), Natl Inst Adv Ind Sci & Technol, JRL, Intelligent Syst
Res Inst, AIST Cent 2,1-1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
EM ysguan@scut.edu.cn; kazuhito.yokoi@aist.go.jp; zhangxm@scut.edu.cn
RI Yokoi, Kazuhito/K-2046-2012
OI Yokoi, Kazuhito/0000-0003-3942-2027
FU Japan Society for the Promotion of Science (JSPS)
FX The main work of this paper was performed with JRL, Intelligent Systems
Research Institute, AIST at Tsukuba, Japan. It was supported in part by
a fellowship and a research grant of Japan Society for the Promotion of
Science (JSPS). Part of the work has been published in the Proceedings
of IEEE ICRA2006 and IEEE/RSJ IROS2006.
CR ALCIATORE D, 1994, ROBOTICS KINEMATICS
Chirikjian GS, 1998, IEEE T ROBOTIC AUTOM, V14, P123, DOI 10.1109/70.660856
GOSSELIN C, 1991, J MECH DESIGN, V113, P220, DOI 10.1115/1.2912772
GUAN Y, 2003, P IEEE RSJ INT C INT, P3705, DOI DOI 10.1109/IROS.2003.1249731
GUPTA KC, 1986, INT J ROBOT RES, V5, P112, DOI 10.1177/027836498600500212
Kanehiro F, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P24, DOI 10.1109/ROBOT.2002.1013334
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
KERR J, 1986, INT J ROBOT RES, V4, P3, DOI 10.1177/027836498600400401
Kim DI, 1997, IEEE INT CONF ROBOT, P2986, DOI 10.1109/ROBOT.1997.606741
KOHLI D, 1985, J MECH TRANSM-T ASME, V107, P209, DOI 10.1115/1.3258710
Kuffner J, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P2265, DOI 10.1109/ROBOT.2002.1013569
KUMAR V, 1992, J MECH DESIGN, V114, P368, DOI 10.1115/1.2926562
LAI ZC, 1988, IEEE T ROBOTIC AUTOM, V4, P99, DOI 10.1109/56.778
Merlet JP, 1998, MECH MACH THEORY, V33, P7, DOI 10.1016/S0094-114X(97)00025-6
Monsarrat B, 2003, IEEE T ROBOTIC AUTOM, V19, P954, DOI 10.1109/TRA.2003.819603
NEO ES, 2002, P 20 NAT C ROB SOC J
Pinter J.D., 1996, GLOBAL OPTIMIZATION
Rastegar J, 1988, ASME TRENDS DEV MECH, P299
VAFA Z, 1990, INT J ROBOT RES, V9, P3, DOI 10.1177/027836499000900401
Wang YF, 2004, IEEE T ROBOTIC AUTOM, V20, P399, DOI 10.1109/TRA.2004.825473
Yang CJ, 2006, J VISUAL-JAPAN, V9, P13, DOI 10.1007/BF03181564
Yang J, 2005, INT J ROBOT AUTOM, V20, P240
ZHANG H, 1993, IEEE T SYST MAN CYB, V23, P324, DOI 10.1109/21.214794
NR 23
TC 22
Z9 23
U1 1
U2 13
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD AUG
PY 2008
VL 27
IS 8
BP 935
EP 950
DI 10.1177/0278364908095142
PG 16
WC Robotics
SC Robotics
GA 338XE
UT WOS:000258541900004
DA 2018-01-22
ER

PT J
AU Kulic, D
Takano, W
Nakamura, Y
AF Kulic, Dana
Takano, Wataru
Nakamura, Yoshihiko
TI Incremental learning, clustering and hierarchy formation of whole body
motion patterns using adaptive hidden Markov chains
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE learning and adaptive systems; cognitive robotics; gesture; posture;
social spaces and facial expressions; human-centred and life-like
robotics; humanoid robots; recognition; sensing and perception; computer
vision
ID HUMAN-PERFORMANCE; ROBOT; MODELS; IMITATION; TASK; RECOGNITION;
ADAPTATION; SYSTEMS
AB This paper describes a novel approach for autonomous and incremental learning of
motion pattern primitives by observation of human motion. Human motion patterns are
abstracted into a dynamic stochastic model, which can be used for both subsequent
motion recognition and generation, analogous to the mirror neuron hypothesis in
primates. The model size is adaptable based on the discrimination requirements in
the associated region of the current knowledge base. A new algorithm for
sequentially training the Markov chains is developed, to reduce the computation
cost during model adaptation. As new motion patterns are observed, they are
incrementally grouped together using hierarchical agglomerative clustering based on
their relative distance in the model space. The clustering algorithm forms a tree
structure, with specialized motions at the tree leaves, and generalized motions
closer to the root. The generated tree structure will depend on the type of
training data provided, so that the most specialized motions will be those for
which the most training has been received. Tests with motion capture data for a
variety of motion primitives demonstrate the efficacy of the algorithm.
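A toy average-linkage agglomeration over a pairwise distance matrix, illustrating how observed motion patterns can be grouped into a tree; the hand-made distances and motion names below stand in for the HMM-based distances used in the paper.

"""Toy hierarchical agglomerative clustering over pairwise model distances."""
import numpy as np

names = ["walk_a", "walk_b", "kick", "punch_a", "punch_b"]          # hypothetical motions
D = np.array([[0.0, 0.2, 2.0, 2.5, 2.4],
              [0.2, 0.0, 2.1, 2.6, 2.5],
              [2.0, 2.1, 0.0, 1.8, 1.9],
              [2.5, 2.6, 1.8, 0.0, 0.3],
              [2.4, 2.5, 1.9, 0.3, 0.0]])

clusters = [[i] for i in range(len(names))]
tree = []
while len(clusters) > 1:
    # find the pair of clusters with the smallest average inter-cluster distance
    best = min(((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
               key=lambda ab: np.mean([D[i, j] for i in clusters[ab[0]] for j in clusters[ab[1]]]))
    a, b = best
    merged = clusters[a] + clusters[b]
    tree.append([names[i] for i in merged])          # record each merge, leaves to root
    clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [merged]

for node in tree:
    print("merged:", node)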
C1 [Kulic, Dana; Takano, Wataru; Nakamura, Yoshihiko] Univ Tokyo, Dept
Mechanoinformat, Bunkyo Ku, Tokyo, Japan.
RP Kulic, D (reprint author), Univ Tokyo, Dept Mechanoinformat, Bunkyo Ku, 7-3-1
Hongo, Tokyo, Japan.
EM dana@ynl.t.u-tokyo.ac.jp; takano@ynl.t.u-tokyo.ac.jp;
nakamura@ynl.t.u-tokyo.ac.jp
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102; Kulic, Dana/0000-0002-4169-2141
CR Bennewitz M, 2005, INT J ROBOT RES, V24, P31, DOI 10.1177/0278364904048962
Bentivegna D. C., 2006, P 2006 IEEE RSJ INT, P2677
Bernardin K, 2005, IEEE T ROBOT, V21, P47, DOI 10.1109/TRO.2004.833816
BETKOWSKA A, 2006, P S LARG SCAL KNOWL, P129
Billard AG, 2006, ROBOT AUTON SYST, V54, P370, DOI 10.1016/j.robot.2006.01.007
Breazeal C, 2002, TRENDS COGN SCI, V6, P481, DOI 10.1016/S1364-6613(02)02016-8
Calinon Sylvain, 2007, P ACM IEEE INT C HUM, P255, DOI DOI
10.1145/1228716.1228751
Calinon S., 2007, P IEEE INT C ROB HUM, P702
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Dillmann R, 2004, ROBOT AUTON SYST, V47, P109, DOI 10.1016/j.robot.2004.03.005
Dillmann R., 1999, P INT S ROB RES, P229
Dixon KR, 2004, INT J ROBOT RES, V23, P955, DOI 10.1177/0278364904044401
Ekvall S, 2006, IEEE T ROBOT, V22, P1029, DOI 10.1109/TRO.2006.878976
EZAKI H, 2000, THESIS U TOKYO
Ghahramani Z, 1997, MACH LEARN, V29, P245, DOI 10.1023/A:1007425814087
Ho MAT, 2005, IEEE T ROBOT, V21, P497, DOI 10.1109/TRO.2004.840912
Iba S, 2005, INT J ROBOT RES, V24, P83, DOI 10.1177/0278364904049250
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Jacobs RA, 2002, NEURAL COMPUT, V14, P2415, DOI 10.1162/08997660260293283
Jain AK, 1999, ACM COMPUT SURV, V31, P264, DOI 10.1145/331499.331504
Kadone H., 2006, P IEEE RAS INT C HUM, P1
Kadone H., 2005, P 2005 IEEE RSJ INT, P2900
Kragic D, 2005, INT J ROBOT RES, V24, P731, DOI 10.1177/0278364905057059
Kulic D, 2007, P IEEE INT C INT ROB, P2388
KULIC D, 2007, P IEEE INT S ROB HUM, P1016
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Kuniyoshi Y., 1989, P INT S IND ROB, P119
Lee D., 2008, P IEEE INT C ROB AUT, P1722
LEE D, 2005, P INT C INT ROB SYST, P1911
LIU S, 1992, J DYN SYST-T ASME, V114, P220, DOI 10.1115/1.2896518
MURPHY KP, 1999, ADV NEURAL INFROM PR
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
NICOLESCU MN, 2005, IMITATION SOCIAL LEA
Ogata T, 2005, ADV ROBOTICS, V19, P651, DOI 10.1163/1568553054255655
OHKAWA K, 2005, P JSME C ROB MECH
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Rizzolatti G, 2001, NAT REV NEUROSCI, V2, P661, DOI 10.1038/35090060
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
STARNER T, 1995, P INT WORKSH AUT FAC, P189
TAKANO W, 2006, THESIS U TOKYO
TAKANO W, 2006, P IEEE RAS INT C HUM, P425
Takano W, 2006, IEEE INT CONF ROBOT, P3602, DOI 10.1109/ROBOT.2006.1642252
Taylor G. W., 2006, P C NEUR INF PROC SY, P1345
Yang J, 1997, IEEE T SYST MAN CY A, V27, P34, DOI 10.1109/3468.553220
NR 45
TC 101
Z9 101
U1 3
U2 18
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD JUL
PY 2008
VL 27
IS 7
BP 761
EP 784
DI 10.1177/0278364908091153
PG 24
WC Robotics
SC Robotics
GA 320QP
UT WOS:000257250400001
DA 2018-01-22
ER

PT J
AU Kawato, M
AF Kawato, Mitsuo
TI From 'Understanding the Brain by Creating the Brain' towards
manipulative neuroscience
SO PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES
LA English
DT Article
DE computational neuroscience; creating the brain; humanoid robot;
cerebellar learning; long-term depression; manipulative neuroscience
ID OCULAR FOLLOWING RESPONSES; CEREBELLAR VENTRAL PARAFLOCCULUS; TEMPORAL
FIRING PATTERNS; LONG-TERM DEPRESSION; PURKINJE-CELLS; HUMANOID ROBOTS;
BIPED LOCOMOTION; CAUDATE-NUCLEUS; INTERNAL-MODELS; NEURAL-NETWORK
AB Ten years have passed since the Japanese 'Century of the Brain' was promoted,
and its most notable objective, the unique 'creating the brain' approach, has led
us to apply a humanoid robot as a neuroscience tool. Here, we aim to understand the
brain to the extent that we can make humanoid robots solve tasks typically solved
by the human brain by essentially the same principles. I postulate that this
'Understanding the Brain by Creating the Brain' approach is the only way to fully
understand neural mechanisms in a rigorous sense. Several humanoid robots and their
demonstrations are introduced. A theory of cerebellar internal models and a systems
biology model of cerebellar synaptic plasticity are discussed. Both models are
experimentally supported, but the latter is more easily verifiable while the former
is still controversial. I argue that the major reason for this difference is that
essential information can be experimentally manipulated in molecular and cellular
neuroscience while it cannot be manipulated at the system level. I propose a new
experimental paradigm, manipulative neuroscience, to overcome this difficulty and
allow us to prove cause-and-effect relationships even at the system level.
C1 [Kawato, Mitsuo] ATR Computat Neurosci Labs, Kyoto 6190288, Japan.
[Kawato, Mitsuo] JST, ICORP Computat Brain Project, Kyoto 6190288, Japan.
RP Kawato, M (reprint author), ATR Computat Neurosci Labs, Kyoto 6190288, Japan.
EM kawato@atr.jp
CR ALBUS J S, 1971, Mathematical Biosciences, V10, P25, DOI 10.1016/0025-
5564(71)90051-4
AMARI S, 2006, KAGAKU, V76, P433
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Bentivegna DC, 2004, ROBOT AUTON SYST, V47, P163, DOI
10.1016/j.robot.2004.03.010
BENTIVEGNA DC, 2004, INT J HUM ROBOT, V1, P585, DOI DOI
10.1142/S0219843604000307
Boyden ES, 2005, NAT NEUROSCI, V8, P1263, DOI 10.1038/nn1525
Chadderton P, 2004, NATURE, V428, P856, DOI 10.1038/nature02442
DAYAN P, 1999, THEORETICAL NEUROSCI
Dean P, 2002, P ROY SOC B-BIOL SCI, V269, P1895, DOI 10.1098/rspb.2002.2103
DESCHUTTER E, 1995, TRENDS NEUROSCI, V18, P291, DOI 10.1016/0166-2236(95)93916-L
Doi T, 2005, J NEUROSCI, V25, P950, DOI 10.1523/JNEUROSCI.2727-04.2005
ENDO G, 2005, 20 NAT C ART INT AAA, P1237
Gomi H, 1998, J NEUROPHYSIOL, V80, P818
Hale JG, 2005, IEEE T SYST MAN CY C, V35, P512, DOI 10.1109/TSMCC.2004.840063
Haruno M, 2006, J NEUROPHYSIOL, V95, P948, DOI 10.1152/jn.00382.2005
Haruno M, 2004, J NEUROSCI, V24, P1660, DOI 10.1523/JNEUROSCI.3417-03.2004
Haruno M, 2006, NEURAL NETWORKS, V19, P1242, DOI 10.1016/j.neunet.2006.06.007
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Imamizu H, 2000, NATURE, V403, P192, DOI 10.1038/35003194
Imamizu H, 2003, P NATL ACAD SCI USA, V100, P5461, DOI 10.1073/pnas.0835746100
ITO M, 1970, INT J NEUROL, V7, P162
JORDAN MI, 1992, COGNITIVE SCI, V16, P307, DOI 10.1016/0364-0213(92)90036-T
Kamitani Y, 2006, CURR BIOL, V16, P1096, DOI 10.1016/j.cub.2006.04.003
Kamitani Y, 2005, NAT NEUROSCI, V8, P679, DOI 10.1038/nn1444
Kawagoe A, 1999, IEEE T APPL SUPERCON, V9, P727, DOI 10.1109/77.783398
Kawano K, 1999, CURR OPIN NEUROBIOL, V9, P467, DOI 10.1016/S0959-4388(99)80070-1
KAWATO M, 1987, BIOL CYBERN, V57, P169, DOI 10.1007/BF00364149
KAWATO M, 1990, ADVANCED NEURAL COMPUTERS, P365
KAWATO M, 1992, BIOL CYBERN, V68, P95, DOI 10.1007/BF00201431
Kobayashi Y, 1998, J NEUROPHYSIOL, V80, P832
Kuroda S, 2001, J NEUROSCI, V21, P5693
Llinas R, 1997, LEARN MEMORY, V3, P445, DOI 10.1101/lm.3.6.445
MARR D, 1969, J PHYSIOL-LONDON, V202, P437, DOI 10.1113/jphysiol.1969.sp008820
Matsubara T, 2006, ROBOT AUTON SYST, V54, P911, DOI 10.1016/j.robot.2006.05.012
Mitsuo K, 2003, PROG BRAIN RES, V142, P171
MIYAMOTO H, 1988, NEURAL NETWORKS, V1, P251, DOI 10.1016/0893-6080(88)90030-5
Miyamoto H, 1996, NEURAL NETWORKS, V9, P1281, DOI 10.1016/S0893-6080(96)00043-3
Morimoto J, 2006, IEEE INT CONF ROBOT, P1579, DOI 10.1109/ROBOT.2006.1641932
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nakanishi J, 2004, NEURAL NETWORKS, V17, P1453, DOI 10.1016/j.neunet.2004.05.003
Ogasawara H, 2007, PLOS COMPUT BIOL, V3, P49, DOI 10.1371/journal.pcbi.0020179
Okubo Y, 2004, J NEUROSCI, V24, P9513, DOI 10.1523/JNEUROSCI.1829-04.2004
Pasalar S, 2006, NAT NEUROSCI, V9, P1404, DOI 10.1038/nn1783
POLLARD NS, 2002, P IEEE INT C ROB AUT, DOI DOI 10.1109/ROBOT.2002.1014737
RILEY M, 2000, P 2000 WORKSH INT RO, P35
Samejima K, 2005, SCIENCE, V310, P1337, DOI 10.1126/science.1115270
Sato M, 2004, NEUROIMAGE, V23, P806, DOI 10.1016/j.neuroimage.2004.06.037
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 1998, NEURAL COMPUT, V10, P2047, DOI 10.1162/089976698300016963
SCHAAL S, 2003, IEEE RSJ INT C INT R, P39
Shibata T, 2005, NEURAL NETWORKS, V18, P213, DOI 10.1016/j.neunet.2005.01.001
Shibata T, 2001, NEURAL NETWORKS, V14, P201, DOI 10.1016/S0893-6080(00)00084-8
SHIDARA M, 1993, NATURE, V365, P50, DOI 10.1038/365050a0
Takemura A, 2001, J NEUROPHYSIOL, V86, P1750
Tanaka K, 2007, NEURON, V54, P787, DOI 10.1016/j.neuron.2007.05.014
Tanaka SC, 2004, NAT NEUROSCI, V7, P887, DOI 10.1038/nn1279
TSUKAHARA N, 1982, COMPETITION COOPERAT, V45, P430
Ude A, 2004, ROBOT AUTON SYST, V47, P93, DOI 10.1016/j.robot.2004.03.004
Ude A, 2003, ADV ROBOTICS, V17, P165, DOI 10.1163/156855303321165114
Winkelman B, 2006, J NEUROPHYSIOL, V95, P2342, DOI 10.1152/jn.01191.2005
Yamamoto K, 2002, J NEUROPHYSIOL, V87, P1554, DOI 10.1152/jn.00166.2001
Yamamoto K, 2007, J NEUROPHYSIOL, V97, P1588, DOI 10.1152/jn.00206.2006
NR 62
TC 29
Z9 30
U1 0
U2 3
PU ROYAL SOC
PI LONDON
PA 6-9 CARLTON HOUSE TERRACE, LONDON SW1Y 5AG, ENGLAND
SN 0962-8436
EI 1471-2970
J9 PHILOS T R SOC B
JI Philos. Trans. R. Soc. B-Biol. Sci.
PD JUN 27
PY 2008
VL 363
IS 1500
BP 2201
EP 2214
DI 10.1098/rstb.2008.2272
PG 14
WC Biology
SC Life Sciences & Biomedicine - Other Topics
GA 304WV
UT WOS:000256140700015
PM 18375374
OA gold
DA 2018-01-22
ER

PT J
AU Kawato, M
AF Kawato, Mitsuo
TI Brain controlled robots
SO HFSP JOURNAL
LA English
DT Article
ID DIRECT CORTICAL CONTROL; HUMANOID ROBOTS; BIPED LOCOMOTION; ARM
MOVEMENTS; MOTOR CORTEX; NEUROSCIENCE; MODEL
AB In January 2008, Duke University and the Japan Science and Technology Agency
(JST) publicized their successful control of a brain-machine interface for a
humanoid robot by a monkey brain across the Pacific Ocean. The activities of a few
hundred neurons were recorded from a monkey's motor cortex in Miguel Nicolelis's
lab at Duke University, and the kinematic features of monkey locomotion on a
treadmill were decoded from neural firing rates in real time. The decoded
information was sent to a humanoid robot, CB-i, in ATR Computational Neuroscience
Laboratories located in Kyoto, Japan. This robot was developed by the JST
International Collaborative Research Project (ICORP) as the "Computational Brain
Project." CB-i's locomotion-like movement was video-recorded and projected on a
screen in front of the monkey. Although the bidirectional communication used a
conventional Internet connection, its delay was suppressed to a fraction of a
second, partly due to a video-streaming technique, and this encouraged the
monkey's voluntary locomotion and influenced its brain activity. This commentary
introduces the background and future directions of the brain-controlled robot.
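Editor's note: the decoding step described above (kinematics estimated from neural firing rates in real time) is commonly realized with a linear filter. The sketch below is a minimal, assumption-laden illustration of that general family of decoders, fitted by ridge-regularized least squares; the actual decoder, features, and dimensions used in the Duke-ATR experiment are not reproduced here.

# Hedged sketch: linear decoding of kinematics from firing rates (illustrative only).
import numpy as np

def fit_linear_decoder(firing_rates, kinematics, ridge=1e-3):
    # firing_rates: (t, n_neurons); kinematics: (t, n_dims), e.g., limb coordinates.
    X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])  # add bias term
    A = X.T @ X + ridge * np.eye(X.shape[1])                        # ridge-regularized normal equations
    return np.linalg.solve(A, X.T @ kinematics)

def decode(weights, rates):
    # One time step, suitable for real-time use once weights are fitted.
    return np.hstack([rates, 1.0]) @ weights

# Illustrative usage with synthetic data: 200 neurons, 2 decoded kinematic dimensions.
rng = np.random.default_rng(2)
R = rng.poisson(5.0, size=(1000, 200)).astype(float)
K = R @ rng.standard_normal((200, 2)) * 0.01 + rng.standard_normal((1000, 2))
W = fit_linear_decoder(R, K)
step_estimate = decode(W, R[0])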
C1 [Kawato, Mitsuo] Japan Sci & Technol Agcy ICORP, Computat Brain Project, Seika,
Kyoto 6190288, Japan.
[Kawato, Mitsuo] ATR Computat Neurosci Labs, Seika, Kyoto 6190288, Japan.
RP Kawato, M (reprint author), Japan Sci & Technol Agcy ICORP, Computat Brain
Project, Hikaridai 2-2-2, Seika, Kyoto 6190288, Japan.
EM kawato@atr.jp
RI Ramli, Roziana/E-7157-2010
CR Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
Bentivegna DC, 2004, ROBOT AUTON SYST, V47, P163, DOI
10.1016/j.robot.2004.03.010
BENTIVEGNA DC, 2004, INT J HUM ROBOT, V1, P585, DOI DOI
10.1142/S0219843604000307
Cheng G., 2007, SOC NEUR 37 ANN M SA
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
ENDO G, 2005, 20 NAT C ART INT AAA, P1237
FETZ EE, 1969, SCIENCE, V163, P955, DOI 10.1126/science.163.3870.955
GEORGOPOULOS AP, 1982, J NEUROSCI, V2, P1527
Hale JG, 2005, IEEE T SYST MAN CY C, V35, P512, DOI 10.1109/TSMCC.2004.840063
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Kamitani Y, 2006, CURR BIOL, V16, P1096, DOI 10.1016/j.cub.2006.04.003
Kamitani Y, 2005, NAT NEUROSCI, V8, P679, DOI 10.1038/nn1444
KAWATO M, 2008, PHILOS T B IN PRESS
Kawato M, 2007, CURR OPIN NEUROBIOL, V17, P205, DOI 10.1016/j.conb.2007.03.004
Koike Y, 2006, NEUROSCI RES, V55, P146, DOI 10.1016/j.neures.2006.02.012
Matsubara T, 2006, ROBOT AUTON SYST, V54, P911, DOI 10.1016/j.robot.2006.05.012
MIYAMOTO H, 1988, NEURAL NETWORKS, V1, P251, DOI 10.1016/0893-6080(88)90030-5
Miyamoto H, 1996, NEURAL NETWORKS, V9, P1281, DOI 10.1016/S0893-6080(96)00043-3
Morimoto J, 2006, IEEE INT CONF ROBOT, P1579, DOI 10.1109/ROBOT.2006.1641932
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nicolelis MAL, 2001, NATURE, V409, P403, DOI 10.1038/35053191
Pollard NS, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1390, DOI 10.1109/ROBOT.2002.1014737
RILEY M, 2000, P 2000 WORKSH INT RO, P35
Sato M, 2004, NEUROIMAGE, V23, P806, DOI 10.1016/j.neuroimage.2004.06.037
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 1998, NEURAL COMPUT, V10, P2047, DOI 10.1162/089976698300016963
SCHAAL S, 2003, IEEE RSJ INT C INT R, P39
Shibata T, 2005, NEURAL NETWORKS, V18, P213, DOI 10.1016/j.neunet.2005.01.001
Shibata T, 2001, NEURAL NETWORKS, V14, P201, DOI 10.1016/S0893-6080(00)00084-8
Taylor DM, 2002, SCIENCE, V296, P1829, DOI 10.1126/science.1070291
TODA A, 2007, P ICONIP 2007
Todorov E, 2000, NAT NEUROSCI, V3, P391, DOI 10.1038/73964
Ude A, 2004, ROBOT AUTON SYST, V47, P93, DOI 10.1016/j.robot.2004.03.004
Ude A, 2003, ADV ROBOTICS, V17, P165, DOI 10.1163/156855303321165114
NR 34
TC 5
Z9 6
U1 0
U2 8
PU HFSP PUBLISHING
PI STRASBOURG
PA 12 QUAI ST JEAN, STRASBOURG, 67000, FRANCE
SN 1955-2068
EI 1955-205X
J9 HFSP J
JI HFSP J.
PD JUN
PY 2008
VL 2
IS 3
BP 136
EP 142
DI 10.2976/1.2931144
PG 7
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 336MG
UT WOS:000258367400005
PM 19404467
OA gold
DA 2018-01-22
ER

PT J
AU Kanda, T
Miyashita, T
Osada, T
Haikawa, Y
Ishiguro, H
AF Kanda, Takayuki
Miyashita, Takahiro
Osada, Taku
Haikawa, Yuji
Ishiguro, Hiroshi
TI Analysis of Humanoid appearances in human-robot interaction
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE body movement analysis; humanoid robots; human-robot interaction (HRI);
robot appearance
ID FIELD TRIAL
AB Identifying the extent to which the appearance of a humanoid robot affects human
behavior toward it is important. We compared participant impressions of and
behaviors toward two real humanoid robots in simple human-robot interactions. These
two robots, which have different appearances, are controlled to perform the same
recorded utterances and motions, adjusted using a motion-capturing system. We
conducted an experiment with 48 human participants who individually interacted with
the two robots and also with a human for reference. The results revealed that
different appearances did not affect participant verbal behaviors, but they did
affect such nonverbal behaviors as distance and delay of response. These
differences are explained by two factors: impressions and attributions.
C1 [Kanda, Takayuki; Miyashita, Takahiro; Ishiguro, Hiroshi] Adv Telecommun Res
Inst Int, Intelligent Robot & Commun Labs, Kyoto 6190288, Japan.
[Osada, Taku; Haikawa, Yuji] Honda R&D Corp Ltd, Fundamental Technol Res Ctr,
Saitama 3510114, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Suita, Osaka 5650871, Japan.
RP Kanda, T (reprint author), Adv Telecommun Res Inst Int, Intelligent Robot &
Commun Labs, Kyoto 6190288, Japan.
EM kanda@atr.jp; miyasita@atr.jp; Taku_Osada@n.w.rd.honda.co.jp;
Yuji_Haikawa@n.w.rd.honda.co.jp; ishiguro@ams.eng.osaka-u.ac.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR BREAZEAL C, 2001, P INT JOINT C ART IN, P1146
CASSELL J, P C HUM FACT COMP SY, P520
Flyvbjerg B, 2006, QUAL INQ, V12, P219, DOI 10.1177/1077800405284363
FORLIZZI J, P ACM IEEE INT C HUM, P129
FRIEDMAN B, P SIGCHI C HUM FACT, P273
Glaser B.G., 1967, DISCOVERY GROUNDED T
GOETZ J, IEEE WORKSH ROB HUM
HAIR JF, 1998, MULTIVARIATE DATA AN
Hall E. T., 1990, HIDDEN DIMENSION
HIRAI K, P IEEE INT C ROB AUT, P1321
KAMASIMA M, P IEEE RSJ INT C INT, P2506
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T, 2004, HUM-COMPUT INTERACT, V19, P61, DOI 10.1207/s15327051hci1901&2_4
KANDA T, P IEEE INT C ROB AUT, P4166
Kanda T, 2007, IEEE T ROBOT, V23, P962, DOI 10.1109/TRO.2007.904904
Kaplan SR, 2004, HEART RHYTHM, V1, P3, DOI 10.1016/j.hrthm.2004.01.001
KIDD C, IEEE RSJ INT C INT R
NAKADAI K, 2001, P INT JOINT C ART IN, P1425
Osgood C., 1957, MEASUREMENT MEANING
Reeves B., 1996, MEDIA EQUATION
ROBINS B, P IEEE INT WORKSH RO, P557
ROBINS B, P IEEE INT WORKSH RO, P277
Robson C., 2002, REAL WORLD RES
SAKAGAMI Y, 2002, P IEEE RSJ INT C INT, P2478
Stake R., 1995, ART CASE STUDY RES
SUENAGA T, 2003, RES METHOD SOCIAL SY
*WIK, HOM APPL WIK
WOOD S, P IEEE INT WORKSH RO, P47
NR 28
TC 50
Z9 50
U1 0
U2 4
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855 USA
SN 1552-3098
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD JUN
PY 2008
VL 24
IS 3
BP 725
EP 735
DI 10.1109/TRO.2008.921566
PG 11
WC Robotics
SC Robotics
GA 316RH
UT WOS:000256966900018
DA 2018-01-22
ER

PT J
AU Ishi, CT
Matsuda, S
Kanda, T
Jitsuhiro, T
Ishiguro, H
Nakamura, S
Hagita, N
AF Ishi, Carlos Toshinori
Matsuda, Shigeki
Kanda, Takayuki
Jitsuhiro, Takatoshi
Ishiguro, Hiroshi
Nakamura, Satoshi
Hagita, Norihiro
TI A robust speech recognition system for communication robots in noisy
environments
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE acoustic noise; children speech; communication robots; robustness;
speech recognition
AB The application range of communication robots could be widely expanded by the
use of automatic speech recognition (ASR) systems with improved robustness for
noise and for speakers of different ages. In past research, several modules have
been proposed and evaluated for improving the robustness of ASR systems in noisy
environments. However, this performance might be degraded when applied to robots,
due to problems caused by distant speech and the robot's own noise. In this paper,
we implemented the individual modules in a humanoid robot, and evaluated the ASR
performance in a real-world noisy environment for adults' and children's speech.
The performance of each module was verified by adding different levels of real
environment noise recorded in a cafeteria. Experimental results indicated that our
ASR system could achieve over 80% word accuracy in 70-dBA noise. Further
evaluation of adult speech recorded in a real noisy environment resulted in 73%
word accuracy.
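Editor's note: the abstract reports word accuracy percentages but does not restate the scoring formula. The sketch below assumes the conventional definition, Acc = (N - S - D - I) / N, where N is the number of reference words and S, D, I are substitutions, deletions, and insertions from the alignment; the counts shown are made up for illustration.

# Hedged sketch: conventional word accuracy from alignment counts (assumed formula).
def word_accuracy(n_ref_words, substitutions, deletions, insertions):
    # Standard ASR word accuracy; 1 - WER when the formula above is used.
    return (n_ref_words - substitutions - deletions - insertions) / n_ref_words

# Example with hypothetical counts: 1000 reference words, 150 substitutions,
# 30 deletions, 20 insertions -> 0.80, comparable to the "over 80% in 70-dBA
# noise" figure quoted in the abstract.
if __name__ == "__main__":
    print(word_accuracy(1000, 150, 30, 20))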
C1 [Ishi, Carlos Toshinori; Kanda, Takayuki; Ishiguro, Hiroshi; Hagita, Norihiro]
Adv Telecommun Res Inst Int, Intelligent Robot & Commun Labs, Kyoto 6190288, Japan.
[Matsuda, Shigeki; Nakamura, Satoshi] Natl Inst Informat & Commun Technol,
Koganei, Tokyo 1848795, Japan.
[Matsuda, Shigeki; Jitsuhiro, Takatoshi; Nakamura, Satoshi] Adv Telecommun Res
Inst Int, Spoken Language Commun Res Labs, Kyoto 6190288, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Dept Adapt Machine Syst, Suita, Osaka 5650871,
Japan.
RP Ishi, CT (reprint author), Adv Telecommun Res Inst Int, Intelligent Robot &
Commun Labs, Kyoto 6190288, Japan.
EM carlos@atr.jp; shigeki.matsuda@atr.jp; kanda@atr.jp;
takatoshi.jitsuhiro@atr.jp; ishiguro@ams.eng.osaka-u.ac.jp;
satoshi.nakamura@atr.jp; hagita@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR ASOH H, 1997, P INT JOINT C ART IN, P880
BREAZEAL C, P INT JOINT C ART IN, P1146
HAYKIN S, 2000, BLIND SOURCE SEPARAT, V1
Herbordt W, 2005, P ASRU2005, P302
HERBORDT W, 2005, P IEEE EURASIP WORKS, P175
ISHI CT, P IEEE RSJ INT C INT, P374
Jitsuhiro T, 2004, IEICE T INF SYST, VE87D, P2121
Kanda T, 2004, HUM-COMPUT INTERACT, V19, P61, DOI 10.1207/s15327051hci1901&2_4
Li WF, 2005, IEICE T FUND ELECTR, VE88A, P1716, DOI 10.1093/ietfec/e88-a.7.1716
Matsuda S, 2006, IEICE T INF SYST, VE89D, P989, DOI 10.1093/ietisy/e89-d.3.989
NAKADAI K, P IEEE RSJ INT C INT, P1147
Nakadai K., 2001, P INT JOINT C ART IN, P1425
OHASHI Y, P IEEE RSJ INT C INT, P533
OMOLOGO M, 2001, MICROPHONE ARRAYS SI, P339
SHIOMI M, P 1 ANN C HUM ROB IN, P305
SOONG FK, P SWIM2004, P13
SUGIYAMA O, 2005, P IEEE RSJ INT C INT, P2140
TAKATANI T, P IEEE RSJ INT C INT, P215
TAKEZAWA T, 1998, P 1 INT WORKSH E AS, P148
YAMAMOTO S, P IEEE RSJ INT C INT, P897
NR 20
TC 13
Z9 13
U1 0
U2 4
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855 USA
SN 1552-3098
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD JUN
PY 2008
VL 24
IS 3
BP 759
EP 763
DI 10.1109/TRO.2008.919305
PG 5
WC Robotics
SC Robotics
GA 316RH
UT WOS:000256966900023
DA 2018-01-22
ER

PT J
AU Hoshino, K
AF Hoshino, Kiyoshi
TI Control of speed and power in a humanoid robot arm using pneumatic
actuators for human-robot coexisting environment
SO IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS
LA English
DT Article
DE pneumatic actuator; humanoid robot arm; simultaneous control of multiple
joints; I-PD control; damping control
ID HAND-II
AB A new type of humanoid robot arm that can coexist and interact with human beings
is sought. To implement smooth, fast, human-like movement on a pneumatic robot, the
author used a humanoid robot arm with pneumatic agonist-antagonist actuators as
endoskeletons, which provides a mechanism for controlling the stiffness of each
joint, and its controllability was examined experimentally. Using Kitamori's method
to determine the control gains experimentally and using an I-PD controller, three
joints of the humanoid robot arm were controlled. A damping control algorithm was
also applied to the wrist joint to modify the speed in accordance with the power.
The results showed that, for step-wise inputs, the error in following the target
angles was less than one degree and the time constant was less than one second.
Simultaneous input of commands to three joints brought about an overshoot of
roughly a ten percent increase in error. The humanoid robot arm can generate
calligraphic motions with high accuracy, moving quickly at some times but slowly at
others, and particularly softly on some occasions but stiffly on others.
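Editor's note: the abstract names an I-PD structure with added damping on the wrist. Below is a minimal discrete-time sketch of such a controller, in which only the integral term acts on the error while the proportional and derivative terms act on the measured joint angle; the gains, sampling period, and class interface are placeholders, not the values tuned with Kitamori's method in the paper.

# Hedged sketch of an I-PD joint controller (illustrative gains and interface).
class IPDController:
    def __init__(self, ki, kp, kd, dt):
        self.ki, self.kp, self.kd, self.dt = ki, kp, kd, dt
        self.integral = 0.0
        self.prev_y = None

    def update(self, reference, measured):
        # Integral action on the tracking error.
        self.integral += (reference - measured) * self.dt
        # Proportional and derivative action on the measurement only,
        # which avoids large command jumps on step-wise reference changes.
        dy = 0.0 if self.prev_y is None else (measured - self.prev_y) / self.dt
        self.prev_y = measured
        return self.ki * self.integral - self.kp * measured - self.kd * dy

# Example: one controller per joint; extra damping on the wrist joint could be
# approximated by raising kd, loosely analogous to the damping control above.
wrist = IPDController(ki=2.0, kp=1.0, kd=0.3, dt=0.01)
command = wrist.update(reference=30.0, measured=0.0)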
C1 Univ Tsukuba, Grad Sch Syst & Informat Engn, Tsukuba, Ibaraki 3058573, Japan.
RP Hoshino, K (reprint author), Univ Tsukuba, Grad Sch Syst & Informat Engn,
Tsukuba, Ibaraki 3058573, Japan.
EM hoshino@esys.tsukuba.ac.jp
CR Butterfass J, 2004, TSI PRESS S, V15, P105
HIRAI S, 1996, P IEEE RSJ INT C INT, V2, P763
HIRAI S, 1993, P 1993 ICRA, V2, P87
HOSHINO K, 2005, J ROBOTICS MECHATRON, V17, P655
Hoshino K, 2006, IEICE T FUND ELECTR, VE89A, P3290, DOI 10.1093/ietfec/e89-
a.11.3290
JACOBSEN SC, 1984, INT J ROBOT RES, V3, P21, DOI 10.1177/027836498400300402
KATAYAMA M, 1991, ADV NEURAL INFORM PR, P436
Kawasaki H, 2002, IEEE-ASME T MECH, V7, P296, DOI 10.1109/TMECH.2002.802720
Kyriakopoulos KJ, 1997, IEEE T SYST MAN CY B, V27, P95, DOI 10.1109/3477.552188
Lovchik CS, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P907, DOI 10.1109/ROBOT.1999.772420
Sasaki D., 2003, J ROBOTICS MECHATRON, V15, P164
Shin S., 1990, Adaptive Systems in Control and Signal Processing 1989. Selected
Papers from the 3rd IFAC Symposium, P101
SHIN S, 1992, P WORKSH ROB CONTR, P179
Shiokata D., 2005, 2005 IEEE/RSJ International Conference on Intelligent Robots
and Systems, P2097
NR 14
TC 3
Z9 3
U1 0
U2 3
PU IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG
PI TOKYO
PA KIKAI-SHINKO-KAIKAN BLDG, 3-5-8, SHIBA-KOEN, MINATO-KU, TOKYO, 105-0011,
JAPAN
SN 1745-1361
J9 IEICE T INF SYST
JI IEICE Trans. Inf. Syst.
PD JUN
PY 2008
VL E91D
IS 6
BP 1693
EP 1699
DI 10.1093/ietisy/e91-d.6.1693
PG 7
WC Computer Science, Information Systems; Computer Science, Software
Engineering
SC Computer Science
GA 315EP
UT WOS:000256861100014
OA gold
DA 2018-01-22
ER

PT J
AU Yussof, H
Ohka, M
Yamano, M
Nasu, Y
AF Yussof, Hanafiah
Ohka, Masahiro
Yamano, Mitsuhiro
Nasu, Yasuo
TI Navigation Strategy by Contact Sensing Interaction for a Biped Humanoid
Robot
SO INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
LA English
DT Article
DE Navigation strategy; humanoid robot; contact sensing; biped locomotion;
trajectory generation
AB This report presents a basic contact interaction-based navigation strategy for a
biped humanoid robot to support current visual-based navigation. The robot's arms
were equipped with force sensors to detect physical contact with objects. We
proposed a motion algorithm consisting of searching tasks, self-localization tasks,
correction of locomotion direction tasks, and obstacle avoidance tasks. Priority
was given to the right-side direction when navigating the robot's locomotion.
Analysis of
trajectory generation, biped gait pattern, and biped walking characteristics was
performed to define an efficient navigation strategy in a biped walking humanoid
robot. The proposed algorithm was evaluated in an experiment with a 21-DOF humanoid
robot operating in a room with walls and obstacles. The experimental results reveal
good robot performance when recognizing objects by touching, grasping, and
continuously generating suitable trajectories to correct direction and avoid
collisions.
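Editor's note: the motion algorithm above is a sequence of searching, self-localization, direction-correction, and obstacle-avoidance tasks with right-side priority. The decision rule below is a minimal hypothetical sketch of one such control cycle; the function name, inputs, and force threshold are assumptions for illustration, not the authors' interfaces.

# Hedged sketch: one cycle of a contact-driven navigation decision (illustrative only).
def contact_navigation_action(contact_force, right_side_clear, force_threshold=5.0):
    # Keep walking while the arms sense no contact (searching task continues).
    if contact_force < force_threshold:
        return "walk_forward"
    # After contact and self-localization, prefer correcting direction to the
    # right side, falling back to avoiding the obstacle on the left.
    if right_side_clear:
        return "correct_direction_right"
    return "avoid_obstacle_left"

# Example: a 12 N contact with a clear right side yields a right-side correction.
action = contact_navigation_action(contact_force=12.0, right_side_clear=True)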
C1 [Yussof, Hanafiah; Ohka, Masahiro] Nagoya Univ, Grad Sch Informat Sci, Nagoya,
Aichi 4648601, Japan.
[Yamano, Mitsuhiro; Nasu, Yasuo] Yamagata Univ, Fac Engn, Yamagata 990, Japan.
RP Yussof, H (reprint author), Nagoya Univ, Grad Sch Informat Sci, Nagoya, Aichi
4648601, Japan.
RI Yussof, Hanafiah/D-6413-2012
FU Japan Ministry of Education, Culture, Sports, Science and Technology
[18656079]
FX Part of this work was supported by fiscal 2006 grants from the Japan
Ministry of Education, Culture, Sports, Science and Technology
(Grant-in-Aid for Scientific Research in Exploratory Research, No.
18656079).
CR Denavit J., 1995, J APPL MECH, V77, P215
Diaz J. F., 2001, Proceedings of the Fourteenth International Florida Artificial
Intelligence Research Society Conference, P145
GUTMANN JS, 2005, P INT JOINT C ART IN, P1232
Hamzei GHS, 1999, ROBOTICA, V17, P325
HANAFIAH Y, 2005, J ADV ROBOTIC SYSTEM, V2, P251
HANAFIAH Y, 2006, J SIMULATION SYSTEM, V7, P55
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirukawa H, 2004, ROBOT AUTON SYST, V48, P165, DOI 10.1016/j.robot.2004.07.007
Kim J., 2004, P INT C AUT ROB AG, P34
KYRIAKOPOULOS KJ, 1993, J AUTOMATICA, V29, P309
NASU Y, 2003, J COMPUTATIONAL METH, P177
OGATA T, 2005, J JAPANESE SOC ARTIF, V20, P188
Okada K, 2003, PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR
FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS, P131, DOI 10.1109/MFI-
2003.2003.1232645
Remazeilles A, 2007, ROBOT AUTON SYST, V55, P345, DOI
10.1016/j.robot.2006.10.002
Sagues C, 1999, ROBOTICA, V17, P355, DOI 10.1017/S0263574799001605
Seara JF, 2004, ROBOT AUTON SYST, V48, P231, DOI [10.1016/j.robot.2004.07.003,
10.1016/j.robots.2004.07.003]
Thompson S., 2006, P 3 INT C AUT ROB AG, P1
Tu KY, 2006, ROBOT AUTON SYST, V54, P574, DOI 10.1016/j.robot.2006.04.001
VUKOBRATOVIC M, 2005, J HUMANOIDS ROBOTICS, V2, P361
NR 19
TC 1
Z9 1
U1 0
U2 0
PU INTECH EUROPE
PI RIJEKA
PA JANEZA TRDINE 9, RIJEKA, 51000, CROATIA
SN 1729-8806
EI 1729-8814
J9 INT J ADV ROBOT SYST
JI Int. J. Adv. Robot. Syst.
PD JUN
PY 2008
VL 5
IS 2
BP 151
EP 160
PG 10
WC Robotics
SC Robotics
GA 417FU
UT WOS:000264062700004
DA 2018-01-22
ER

PT J
AU Dillmann, R
Asfour, T
Cheng, G
Ude, A
AF Dillmann, Ruediger
Asfour, Tamim
Cheng, Gordon
Ude, Ales
TI TOWARD COGNITIVE HUMANOID ROBOTS
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Editorial Material
C1 [Dillmann, Ruediger; Asfour, Tamim] Univ Karlsruhe, Karlsruhe, Germany.
[Cheng, Gordon] ATR, Kyoto, Japan.
[Ude, Ales] Jozef Stefan Inst, Ljubljana, Slovenia.
RP Dillmann, R (reprint author), Univ Karlsruhe, Karlsruhe, Germany.
OI Cheng, Gordon/0000-0003-0770-8717
NR 0
TC 0
Z9 0
U1 0
U2 2
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2008
VL 5
IS 2
BP 157
EP 159
PG 3
WC Robotics
SC Robotics
GA 366NH
UT WOS:000260488500001
DA 2018-01-22
ER

PT J
AU Ude, A
Omrcen, D
Cheng, G
AF Ude, Ales
Omrcen, Damir
Cheng, Gordon
TI MAKING OBJECT LEARNING AND RECOGNITION AN ACTIVE PROCESS
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Cognitive behavior; active vision; humanoid vision; object recognition;
object learning
ID FACE RECOGNITION; TRACKING
AB The exploration and learning of new objects is an essential capability of a
cognitive robot. In this paper we focus on making use of the robot's manipulation
abilities to learn complete object representations suitable for 3D object
recognition. Taking control of the object allows the robot to focus on relevant
parts of the images, thus bypassing potential pitfalls of purely bottom-up
attention and segmentation. The main contribution of the paper consists in
integrated visuomotor processes that allow the robot to learn object
representations by manipulation without having any prior knowledge about the
objects. Our experimental results show that the acquired data is of sufficient
quality to train a classifier that can recognize 3D objects independently of the
viewpoint.
C1 [Ude, Ales; Omrcen, Damir] Jozef Stefan Inst, Dept Automat Biocybernet & Robot,
Ljubljana 1000, Slovenia.
[Ude, Ales; Cheng, Gordon] ATR Computat Neurosci Labs, Dept Humanoid Robot &
Computat Neurosci, Kyoto 6190288, Japan.
[Cheng, Gordon] Natl Inst Informat & Commun Technol, Knowledge Creating Commun
Res Ctr, Kyoto 6190288, Japan.
[Cheng, Gordon] Japan Sci & Technol Agcy, ICORP Computat Brain Project, Saitama,
Japan.
RP Ude, A (reprint author), Jozef Stefan Inst, Dept Automat Biocybernet & Robot,
Jamova 39, Ljubljana 1000, Slovenia.
EM aude@atr.jp; damir.omrcen@ijs.si; gordon@atr.jp
OI Cheng, Gordon/0000-0003-0770-8717; Ude, Ales/0000-0003-3677-3972
FU European Commission [FP6-2004-IST-4-027657]
FX The work described in this paper was partially conducted within the EU
Cognitive Systems project PACO-PLUS (FP6-2004-IST-4-027657), funded by
the European Commission.
CR Comaniciu D, 2003, IEEE T PATTERN ANAL, V25, P564, DOI
10.1109/TPAMI.2003.1195991
Crammer K., 2001, J MACHINE LEARNING R, V2, P265
Fitzpatrick P, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P2161
Fitzpatrick P, 2003, PHILOS T R SOC A, V361, P2165, DOI 10.1098/rsta.2003.1251
Hayman E, 2003, NINTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS I
AND II, PROCEEDINGS, P67
Hutchinson S, 1996, IEEE T ROBOTIC AUTOM, V12, P651, DOI 10.1109/70.538972
Joachims T., 1999, ADV KERNEL METHODS S
Lowe DG, 2001, PROC CVPR IEEE, P682
Marchand E, 1996, IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING
WITH DYNAMIC WORLDS, VOLS 1-3, P1083, DOI 10.1109/IROS.1996.568954
McKenna SJ, 1999, IMAGE VISION COMPUT, V17, P225, DOI 10.1016/S0262-
8856(98)00104-8
NELSON BJ, 1995, INT J ROBOT RES, V14, P255, DOI 10.1177/027836499501400304
POGGIO T, 1990, NATURE, V343, P263, DOI 10.1038/343263a0
Schiele B, 2000, INT J COMPUT VISION, V36, P31, DOI 10.1023/A:1008120406972
Tu ZW, 2005, INT J COMPUT VISION, V63, P113, DOI 10.1007/s11263-005-6642-x
TURK M, 1991, J COGNITIVE NEUROSCI, V3, P71, DOI 10.1162/jocn.1991.3.1.71
Ude A, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P2173
Wallraven C, 2005, NETWORK-COMP NEURAL, V16, P401, DOI 10.1080/09548980500508844
Wiskott L, 1997, IEEE T PATTERN ANAL, V19, P775, DOI 10.1109/34.598235
Yoshikawa T, 1996, LAB ROBOTICS AUTOMAT, V8, P49
NR 19
TC 22
Z9 22
U1 0
U2 2
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2008
VL 5
IS 2
BP 267
EP 286
DI 10.1142/S0219843608001406
PG 20
WC Robotics
SC Robotics
GA 366NH
UT WOS:000260488500007
DA 2018-01-22
ER

PT J
AU Stasse, O
Said, F
Yokoi, K
Verrelst, B
Vanderborght, B
Davison, A
Mansard, N
Esteves, C
AF Stasse, Olivier
Said, Francois
Yokoi, Kazuhito
Verrelst, Bjoern
Vanderborght, Bram
Davison, Andrew
Mansard, Nicolas
Esteves, Claudia
TI INTEGRATING WALKING AND VISION TO INCREASE HUMANOID AUTONOMY
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE Appropriate models; integration; ZMP preview control; SLAM; planning;
visual servoing
ID ROBOTS
AB Aiming at building versatile humanoid systems, we present in this paper the
real-time implementation of behaviors which integrate walking and vision to achieve
general functionalities. The paper describes how real-time - or high-bandwidth -
cognitive processes can be obtained by combining vision with walking. The central
point of our methodology is to use appropriate models to reduce the complexity of
the search space. We will describe the models introduced in the different blocks of
the system and their relationships: walking pattern, self-localization and map
building, real-time reactive vision behaviors, and planning.
C1 [Stasse, Olivier; Said, Francois; Yokoi, Kazuhito] AIST CNRS, ISRI, JRL,
Tsukuba, Ibaraki, Japan.
[Verrelst, Bjoern; Vanderborght, Bram] Vrije Univ Brussels, Brussels, Belgium.
[Davison, Andrew] Univ London Imperial Coll Sci Technol & Med, London SW7 2AZ,
England.
[Esteves, Claudia] CNRS AIST, JRL, LAAS, Toulouse, France.
[Mansard, Nicolas] IRISA INRIA Rennes, Rennes, France.
[Yokoi, Kazuhito] Univ Tsukuba, Tsukuba, Ibaraki 305, Japan.
RP Stasse, O (reprint author), AIST CNRS, ISRI, JRL, Tsukuba, Ibaraki, Japan.
EM olivier.stasse@aist.go.jp; francois.saidi@aist.go.jp;
kazuhito.yokoi@aist.go.jp; bjorn.verrelst@vub.ac.be;
bram.vanderborght@vub.ac.be; adj@doc.ic.ac.uk; nmansard@gmail.com;
cesteves@cimat.mx
RI Yokoi, Kazuhito/K-2046-2012; Vanderborght, Bram/A-1599-2008; Stasse,
Olivier/E-6220-2010
OI Yokoi, Kazuhito/0000-0003-3942-2027; Vanderborght,
Bram/0000-0003-4881-9341;
FU ROBOT@CWE EU CEC [34002]
FX The authors would like to thank the Japanese Society for the Promotion
of Science for partial funding of this work. The first author is
partially supported by grants from the ROBOT@CWE EU CEC project,
Contract No. 34002, under the 6th Research Program www.robot-at-cwe.eu.
CR BENTIVEGNA DC, 2004, INT J HUM ROBOT, V1, P585, DOI DOI
10.1142/S0219843604000307
Bolder B, 2007, IEEE INT CONF ROBOT, P3054, DOI 10.1109/ROBOT.2007.363936
DAVISON AJ, SCENE
DAVISON AJ, 2007, IEEE T PATTERN ANAL, V6, P1052
Fitzpatrick P, 2003, IEEE INT CONF ROBOT, P3140
GRAVOT F, 2006, IEEE RAS INT C ROB A, P462
*INT, OP COMP VIS
Kagami S, 2003, IEEE INT CONF ROBOT, P2141, DOI 10.1109/ROBOT.2003.1241910
Kajita S, 2003, IEEE INT CONF ROBOT, P1620, DOI 10.1109/ROBOT.2003.1241826
Kajita S., 2005, HUMANOID ROBOT
KANEKO K, 2006, INT J HUMAN ROBOTICS, V3, P1
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
Laumond JP, 2006, IEEE ROBOT AUTOM MAG, V13, P90, DOI 10.1109/MRA.2006.1638020
LaValle S. M., 2000, P INT WORKSH ALG FDN
LORCH O, 2002, INT C INT ROB SYST I, P2484
Mansard N, 2007, IEEE INT CONF ROBOT, P3041, DOI 10.1109/ROBOT.2007.363934
Mansard N, 2007, IEEE T ROBOT, V23, P60, DOI 10.1109/TRO.2006.889487
Neo ES, 2005, IEEE-ASME T MECH, V10, P546, DOI 10.1109/TMECH.2005.856112
NISHIWAKI K, 2006, INT S EXP ROB ISER J, P156
Nishiwaki K, 2007, PHILOS T R SOC A, V365, P79, DOI 10.1098/rsta.2006.1921
Okada K, 2004, IEEE INT CONF ROBOT, P3207, DOI 10.1109/ROBOT.2004.1308748
OKADA K, 2006, IEEE RAS INT C HUM R, P7
Saidi F., 2006, IEEE RAS INT C HUM R, P346
Stasse O., 2006, IEEE RSJ INT C INT R, P348
Stasse O., 2006, IEEE RSJ INT C INT R, P2955
STASSE O, 2006, INT C INT AUT SYST I, P794
SUGIHARA T, 2002, IEEE RSJ INT C INT R, P2575
SUMI Y, 2002, INT J COMPUT VISION, V6, P5
TERADA K, 2007, P 12 ROB S, P196
Ude A, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P2173
UDE A, 2005, DISTRIBUTED VISUAL A, P381
Yokoi K., 2006, IEEE RAS INT C HUM R, P117
NR 33
TC 12
Z9 12
U1 0
U2 2
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2008
VL 5
IS 2
BP 287
EP 310
DI 10.1142/S021984360800142X
PG 24
WC Robotics
SC Robotics
GA 366NH
UT WOS:000260488500008
DA 2018-01-22
ER

PT J
AU Peters, J
Schaal, S
AF Peters, Jan
Schaal, Stefan
TI Reinforcement learning of motor skills with policy gradients
SO NEURAL NETWORKS
LA English
DT Article
DE reinforcement learning; policy gradient methods; natural gradients;
Natural Actor-Critic; motor skills; motor primitives
ID REACHING MOVEMENTS; INFINITE-HORIZON; BIPED LOCOMOTION; APPROXIMATION;
OPTIMIZATION; ESTIMATION; SIMULATION; ALGORITHM; ARM
AB Autonomous learning is one of the hallmarks of human and animal behavior, and
understanding the principles of learning will be crucial in order to achieve true
autonomy in advanced machines like humanoid robots. In this paper, we examine
learning of complex motor skills with human-like limbs. While supervised learning
can offer useful tools for bootstrapping behavior, e.g., by learning from
demonstration, it is only reinforcement learning that offers a general approach to
the final trial-and-error improvement that is needed by each individual acquiring a
skill. Neither neurobiological nor machine learning studies have, so far, offered
compelling results on how reinforcement learning can be scaled to the high-
dimensional continuous state and action spaces of humans or humanoids. Here, we
combine two recent research developments on learning motor control in order to
achieve this scaling. First, we interpret the idea of modular motor control by
means of motor primitives as a suitable way to generate parameterized control
policies for reinforcement learning. Second, we combine motor primitives with the
theory of stochastic policy gradient learning, which currently seems to be the only
feasible framework for reinforcement learning for humanoids. We evaluate different
policy gradient methods with a focus on their applicability to parameterized motor
primitives. We compare these algorithms in the context of motor primitive learning,
and show that our most modern algorithm, the Episodic Natural Actor-Critic,
outperforms previous algorithms by at least an order of magnitude. We demonstrate
the efficiency of this reinforcement learning method in the application of learning
to hit a baseball with an anthropomorphic robot arm. (C) 2008 Elsevier Ltd. All
rights reserved.
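Editor's note: the abstract contrasts the Episodic Natural Actor-Critic with plainer policy gradient methods. The sketch below shows only the simpler likelihood-ratio ("vanilla") gradient estimate for a Gaussian policy over motor-primitive parameters, of the kind the paper uses as a baseline; the natural-gradient step (premultiplying by the inverse Fisher information) is omitted, and the policy form and numbers are illustrative assumptions.

# Hedged sketch: likelihood-ratio policy gradient over primitive parameters.
import numpy as np

def vanilla_policy_gradient(theta, sigma, rollouts, returns, baseline=None):
    # theta: mean of a Gaussian exploration policy over parameter vectors.
    # rollouts: (n, d) sampled parameter vectors; returns: (n,) episodic returns.
    if baseline is None:
        baseline = returns.mean()                 # simple variance-reducing baseline
    score = (rollouts - theta) / sigma**2         # grad log N(a | theta, sigma^2 I)
    return (score * (returns - baseline)[:, None]).mean(axis=0)

# Illustrative usage: ascend the estimated gradient with a small step size.
rng = np.random.default_rng(0)
theta, sigma, lr = np.zeros(3), 0.1, 0.05
samples = theta + sigma * rng.standard_normal((20, 3))
rewards = -np.sum((samples - 1.0) ** 2, axis=1)   # toy return peaking at all-ones
theta = theta + lr * vanilla_policy_gradient(theta, sigma, samples, rewards)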
C1 [Peters, Jan] Max Planck Inst Biol Cybernet, D-72076 Tubingen, Germany.
[Peters, Jan; Schaal, Stefan] Univ So Calif, Los Angeles, CA 90089 USA.
[Schaal, Stefan] ATR Computat Neurosci Lab, Kyoto 6190288, Japan.
RP Peters, J (reprint author), Max Planck Inst Biol Cybernet, Spemannstr 38, D-
72076 Tubingen, Germany.
EM jan.peters@tuebingen.mpg.de
RI Peters, Jan/D-5068-2009
OI Peters, Jan/0000-0002-5266-8091
CR ABERDEEN D, 2006, MACH LEARN SUMM SCH
ALEKSANDROV VM, 1968, ENG CYBERN, V5, P11
Amari S, 1998, NEURAL COMPUT, V10, P251, DOI 10.1162/089976698300017746
ATKESON CG, 1994, ADV NEURAL INFORM PR, P503
Schneider J. B, 2003, P INT JOINT C ART IN, P1019
Baird L. C., 1993, WLTR931146
Balasubramanian V, 1997, NEURAL COMPUT, V9, P349, DOI 10.1162/neco.1997.9.2.349
Barto A. G., 1983, IEEE T SYST MAN CYB, VSMC-13, P115
Baxter J, 2001, J ARTIF INTELL RES, V15, P319
Baxter J, 2001, J ARTIF INTELL RES, V15, P351
Ben-Itzhak S, 2008, NEURAL COMPUT, V20, P779, DOI 10.1162/neco.2007.12-05-077
Benbrahim H, 1997, ROBOT AUTON SYST, V22, P283, DOI 10.1016/S0921-8890(97)00043-
2
BENBRAHIM H, 1992, P INT JOINT C NEUR N, P92
BERNY A, 2000, LECT NOTES NATURAL C, V33, P287
Dyer P., 1970, COMPUTATION THEORY O
ENDO G, 2005, P 20 NAT C ART INT 9, P1267
Flash T, 2005, CURR OPIN NEUROBIOL, V15, P660, DOI 10.1016/j.conb.2005.10.011
Fletcher R., 2000, PRACTICAL METHODS OP
Fu MC, 2002, INFORMS J COMPUT, V14, P192, DOI 10.1287/ijoc.14.3.192.113
Gibbs AL, 2002, INT STAT REV, V70, P419, DOI 10.2307/1403865
GLYNN PW, 1990, COMMUN ACM, V33, P75, DOI 10.1145/84537.84552
GLYNN PW, 1987, P 1987 WINT SIM C, P366
Greensmith E, 2004, J MACH LEARN RES, V5, P1471
GREENSMITH E, 2001, ADV NEURAL INFORM PR, V14
Guenter F, 2007, ADV ROBOTICS, V21, P1521
GULLAPALLI V, 1990, NEURAL NETWORKS, V3, P671, DOI 10.1016/0893-6080(90)90056-Q
GULLAPALLI V, 1994, IEEE CONTR SYST MAG, V14, P13, DOI 10.1109/37.257890
GULLAPALLI V, 1992, ADV NEURAL INFORM PR, P327
Harville D. A., 2000, MATRIX ALGEBRA STAT
HASDORFF L, 1976, GRADIENT OPTIMIZATIO
Ijspeert A. J., 2003, ADV NEURAL INFORM PR, V15, P1547
Ijspeert J.A., 2002, P IEEE INT C ROB AUT, P1398
Jacobson D. H., 1970, DIFFERENTIAL DYNAMIC
Kakade S, 2002, ADV NEUR IN, V14, P1531
KAKADE S, 2001, P C COMP LEARN THEOR, P605
Kakade S., 2003, THESIS U COLL LONDON
KIMURA H, 1998, P INT C INT AUT SYST, V5, P288
KIMURA H, 1997, P 6 EUR WORKSH LEARN, P144
Kleinman NL, 1999, MANAGE SCI, V45, P1570, DOI 10.1287/mnsc.45.11.1570
KOHL N, 2004, P IEEE INT C ROB AUT, P2619
KONDA V, 2000, ADV NEURAL INFORMATI, P12
Lawrence G, 2003, P INT C UNC ART INT, P354, DOI 10.1.1.68.5139
MITSUNAGA N, 2005, P 2005 IEEE RSJ INT, P1594
Miyamoto H., 1996, Progress in Neural Information Processing. Proceedings of the
International Conference on Neural Information Processing, P938
Miyamoto K, 1995, RO-MAN'95 TOKYO: 4TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND
HUMAN COMMUNICATION, PROCEEDINGS, P327, DOI 10.1109/ROMAN.1995.531981
Moon T. K., 2000, MATH METHODS ALGORIT
Mori T, 2004, PROCEEDING OF THE NINETEENTH NATIONAL CONFERENCE ON ARTIFICIAL
INTELLIGENCE AND THE SIXTEENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL
INTELLIGENCE, P623
MORI T, 2005, ADV NEURAL INFORM PR
Morimoto J., 2003, ADV NEURAL INFORM PR, V15, P1539
Nakamura Y, 2004, LECT NOTES COMPUT SC, V3242, P972
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Ng A. Y., 2000, P 16 C UNC ART INT, P406
Park J, 2005, LECT NOTES ARTIF INT, V3801, P65
Peters J, 2005, LECT NOTES ARTIF INT, V3720, P280
Peters J., 2005, P 16 EUR C MACH LEAR, P280
Peters J., 2007, THESIS U SO CALIFORN
PETERS J, 2005, CS05867 U SO CALIFOR
PETERS J, 2003, P IEEE RAS INT C HUM, P103
Peters J., 2006, P IEEE RSJ INT C INT, P2219
Pongas D., 2005, P IEEE INT C INT ROB, P2911
RICHTER S, 2007, ADV NEURAL INFORM PR, V19
Sadegh P, 1997, P AMER CONTR CONF, P3582, DOI 10.1109/ACC.1997.609490
Sato M, 2002, LECT NOTES COMPUT SC, V2415, P777
Schaal S, 1997, ADV NEUR IN, V9, P1040
SCHAAL S, 2004, SPRING TRACTS ADV RO, P561
SCIAVICCO L, 2007, MODELING CONTROL ROB
Spall JC, 2003, INTRO STOCHASTIC SEA
Sutton RS, 2000, ADV NEUR IN, V12, P1057
TEDRAKE R, 2005, P YAL WORKSH AD LEAR, P10
UENO T, 2006, P IEEE RSJ INT C INT, P5226
VACHENAUER P, 2000, TASCHENBUCH INGENIEU
Wada Y., 1994, SYST COMPUT JPN, V24, P37
WEAVER L, 2001, P INT C UNC ART INT, V17, P538
WEAVER L, 2001, 30 ANU
WERBOS P, 1979, POLICY ANAL INFORM S, V3
Williams R. J., 1992, MACHINE LEARNING, V8
NR 76
TC 178
Z9 182
U1 7
U2 41
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0893-6080
EI 1879-2782
J9 NEURAL NETWORKS
JI Neural Netw.
PD MAY
PY 2008
VL 21
IS 4
BP 682
EP 697
DI 10.1016/j.neunet.2008.02.003
PG 16
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA 326EA
UT WOS:000257639800012
PM 18482830
DA 2018-01-22
ER

PT J
AU Chaminade, T
Oztop, E
Cheng, G
Kawato, M
AF Chaminade, Thierry
Oztop, Ethan
Cheng, Gordon
Kawato, Mitsuo
TI From self-observation to imitation: Visuomotor association on a robotic
hand
SO BRAIN RESEARCH BULLETIN
LA English
DT Article
DE imitation; associative learning
ID SOCIAL COGNITION; MOTOR CONTROL; NEURAL NETWORKS; RECOGNITION; INFANTS;
CORTEX; AREA; FACE; PERSPECTIVE; ABILITIES
AB Being at the crux of human cognition and behaviour, imitation has become the
target of investigations ranging from experimental psychology and neurophysiology
to computational sciences and robotics. It is often assumed that the imitation is
innate, but it has more recently been argued, both theoretically and
experimentally, that basic forms of imitation could emerge as a result of self-
observation. Here, we tested this proposal on a realistic experimental platform,
comprising an associative network linking a 16 degrees of freedom robotic hand and
a simple visual system. We report that this minimal visuomotor association is
sufficient to bootstrap basic imitation. Our results indicate that crucial features
of human imitation, such as generalization to new actions, may emerge from a
connectionist associative network. Therefore, we suggest that a behaviour as
complex as imitation could be, at the neuronal level, founded on basic mechanisms
of associative learning, a notion supported by a recent proposal on the
developmental origin of mirror neurons. Our approach can be applied to the
development of realistic cognitive architectures for humanoid robots as well as to
shed new light on the cognitive processes at play in early human cognitive
development. (c) 2008 Elsevier Inc. All rights reserved.
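Editor's note: the core idea above is an associative network that links visual appearance to motor configuration through self-observation. The sketch below is a generic outer-product (Hebbian-style) linear associator illustrating that idea; it is not the specific 16-DOF hand network or visual front end used in the study, and all dimensions and data are placeholders.

# Hedged sketch: visuomotor association learned from self-observation pairs.
import numpy as np

class LinearAssociator:
    def __init__(self, n_visual, n_joints, lr=0.1):
        self.W = np.zeros((n_joints, n_visual))
        self.lr = lr

    def observe_self(self, visual, joints):
        # Hebbian-style outer-product update from one self-observation pair.
        self.W += self.lr * np.outer(joints, visual)

    def imitate(self, visual):
        # Recall the motor configuration associated with an observed posture.
        return self.W @ visual

# Illustrative usage with random data; 16 joints chosen only because the
# abstract mentions a 16-degrees-of-freedom robotic hand.
rng = np.random.default_rng(1)
net = LinearAssociator(n_visual=32, n_joints=16)
for _ in range(100):
    net.observe_self(rng.standard_normal(32), rng.standard_normal(16))
command = net.imitate(rng.standard_normal(32))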
C1 [Oztop, Ethan; Cheng, Gordon; Kawato, Mitsuo] JST ICORP Computat Brain Project,
Keihanna Sci City, Kyoto 6190288, Japan.
[Chaminade, Thierry; Oztop, Ethan; Cheng, Gordon; Kawato, Mitsuo] ATR Computat
Neurosci Lab, Keihanna Sci City, Kyoto 6190288, Japan.
RP Chaminade, T (reprint author), Inst Neurosci Cognit Mediterranee, 31 Chemin
Joseph Aiguier, F-13402 Marseille 20, France.
EM tchamina@gmail.com
RI Chaminade, Thierry/F-8367-2010; Oztop, Erhan/K-8111-2012
OI Chaminade, Thierry/0000-0003-4952-1467; Cheng,
Gordon/0000-0003-0770-8717
CR Andry P, 2001, IEEE T SYST MAN CY A, V31, P431, DOI 10.1109/3468.952717
Anisfeld M, 1996, DEV REV, V16, P149, DOI 10.1006/drev.1996.0006
Arbib MA, 2005, BEHAV BRAIN SCI, V28, P105, DOI 10.1017/S0140525X05000038
Arbib M. A., 2005, BEHAV BRAIN SCI, V28, P125, DOI DOI 10.1017/S0140525X05000038
Billard A, 2004, ROBOT AUTON SYST, V47, P69, DOI 10.1016/j.robot.2004.03.002
Billard A, 1999, ADAPT BEHAV, V7, P35, DOI 10.1177/105971239900700103
Blakemore SJ, 2005, NEUROPSYCHOLOGIA, V43, P260, DOI
10.1016/j.neuropsychologia.2004.11.012
Byrne RW, 1998, BEHAV BRAIN SCI, V21, P667, DOI 10.1017/S0140525X98001745
Chaminade T, 2001, NEUROREPORT, V12, P3669, DOI 10.1097/00001756-200112040-00013
Decety J, 2002, NEUROIMAGE, V15, P265, DOI 10.1006/nimg.2001.0938
Downing PE, 2001, SCIENCE, V293, P2470, DOI 10.1126/science.1063414
FREEDLAND RL, 1987, J EXP PSYCHOL HUMAN, V13, P566, DOI 10.1037/0096-
1523.13.4.566
Gliga T, 2005, J COGNITIVE NEUROSCI, V17, P1328, DOI 10.1162/0898929055002481
Haaland KY, 2000, BRAIN, V123, P2306, DOI 10.1093/brain/123.11.2306
Halsband U, 2001, NEUROPSYCHOLOGIA, V39, P200, DOI 10.1016/S0028-3932(00)00088-9
Heyes C, 2001, TRENDS COGN SCI, V5, P253, DOI 10.1016/S1364-6613(00)01661-2
HOPFIELD JJ, 1982, P NATL ACAD SCI-BIOL, V79, P2554, DOI 10.1073/pnas.79.8.2554
Jacob P, 2005, TRENDS COGN SCI, V9, P21, DOI 10.1016/j.tics.2004.11.003
Kanwisher N, 1997, J NEUROSCI, V17, P4302
KEELER JD, 1988, COGNITIVE SCI, V12, P299, DOI 10.1016/0364-0213(88)90026-2
Keysers C, 2004, TRENDS COGN SCI, V8, P501, DOI 10.1016/j.tics.2004.09.005
Kilner JM, 2003, CURR BIOL, V13, P522, DOI 10.1016/S0960-9822(03)00165-9
KUNIYOSHI Y, 2003, INT C ROB AUT IEEE T
IACOBONI M, 1999, SCIENCE, V286, P2526
Meltzoff A N, 1977, Science, V198, P74, DOI 10.1126/science.897687
Meltzoff AN, 2003, PHILOS T R SOC B, V358, P491, DOI 10.1098/rstb.2002.1261
Miall RC, 1996, NEURAL NETWORKS, V9, P1265, DOI 10.1016/S0893-6080(96)00035-4
Morita M, 2002, COGNITIVE BRAIN RES, V13, P169, DOI 10.1016/S0926-6410(01)00109-
4
Myowa-Yamakoshi M, 2004, DEVELOPMENTAL SCI, V7, P437, DOI 10.1111/j.1467-
7687.2004.00364.x
Nadel J., 2004, INTERACTION STUDIES, V1, P45, DOI DOI 10.1075/IS.5.1.04NAD
OGINO M, 2005, P SOC MECH ROB PROGR
Oztop E, 2005, COGNITIVE BRAIN RES, V22, P129, DOI
10.1016/j.cogbrainres.2004.08.004
OZTOP E, 1996, THESIS MIDDLE E TU A
Oztop E., 2005, INT J HUM ROBOT, V2, P537, DOI DOI 10.1142/S0219843605000582
Petrides M, 2005, NATURE, V435, P1235, DOI 10.1038/nature03628
Rizzolatti G, 1998, TRENDS NEUROSCI, V21, P188, DOI 10.1016/S0166-2236(98)01260-
0
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Slater A, 1998, EXP BRAIN RES, V123, P90, DOI 10.1007/s002210050548
Tomasello M., 2001, CULTURAL ORIGINS HUM
Tzourio-Mazoyer N, 2002, NEUROIMAGE, V15, P454, DOI 10.1006/nimg.2001.0979
VANDERMEER ALH, 1995, SCIENCE, V267, P693, DOI 10.1126/science.7839147
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Wolpert DM, 2003, PHILOS T R SOC B, V358, P593, DOI 10.1098/rstb.2002.1238
Wolpert DM, 2001, TRENDS COGN SCI, V5, P487, DOI 10.1016/S1364-6613(00)01773-3
NR 45
TC 21
Z9 21
U1 0
U2 6
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0361-9230
J9 BRAIN RES BULL
JI Brain Res. Bull.
PD APR 15
PY 2008
VL 75
IS 6
BP 775
EP 784
DI 10.1016/j.brainresbull.2008.01.016
PG 10
WC Neurosciences
SC Neurosciences & Neurology
GA 297MT
UT WOS:000255621500009
PM 18394524
DA 2018-01-22
ER

PT J
AU Nomura, T
Kanda, T
Suzuki, T
Kato, K
AF Nomura, Tatsuya
Kanda, Takayuki
Suzuki, Tomohiro
Kato, Kensuke
TI Prediction of human behavior in human-robot interaction using
psychological scales for anxiety and negative attitudes toward robots
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE anxiety and negative attitudes; human-robot interaction; relationships
between emotions and behavior
AB When people interact with communication robots in daily life, their attitudes
and emotions toward the robots affect their behavior. From the perspective of
robotics design, we need to investigate the influences of these attitudes and
emotions on human-robot interaction. This paper reports our empirical study on the
relationships between people's attitudes and emotions, and their behavior toward a
robot. In particular, we focused on negative attitudes, anxiety, and communication
avoidance behavior, which have important implications for robotics design. For this
purpose, we used two psychological scales that we had developed: negative attitudes
toward robots scale (NARS) and robot anxiety scale (RAS). In the experiment,
subjects and a humanoid robot were engaged in simple interactions including scenes
of meeting, greeting, self-disclosure, and physical contact. Experimental results
indicated that there is a relationship between negative attitudes and emotions, and
communication avoidance behavior. A gender effect was also suggested.
C1 [Nomura, Tatsuya] Ryukoku Univ, Dept Media Informat, Otsu, Shiga 5202194, Japan.
[Kanda, Takayuki] ATR, Intelligent Robot & Commun Labs, Kyoto 6190288, Japan.
[Suzuki, Tomohiro] Toyo Univ, Grad Sch Sociol, Tokyo 1128606, Japan.
[Kato, Kensuke] Kyushu Univ Hlth & Welfare, Miyazaki 8828508, Japan.
RP Nomura, T (reprint author), Ryukoku Univ, Dept Media Informat, Otsu, Shiga
5202194, Japan.
EM nomura@rins.ryukoku.ac.jp; kanda@atr.jp; suzukirt@h9.dion.ne.jp;
dz102550@nifty.ne.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR Bartneck C, 2007, AI SOC, V21, P217, DOI 10.1007/s00146-006-0052-7
BROSNAN M., 1998, TECHNOPHOBIA PSYCHOL
CHAPLIN JP, 1991, DICT PSYCHOL
Dautenhahn K., 2002, SOCIALLY INTELLIGENT
DRUIN A, 2000, ROBOTS KIDS EXPLOITI
FRIEDMAN B, P CHI 2003, P273
Goetz J, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P55
Hirata K., 1990, B AICHI U ED, V39, P203
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
JOINSON AN, 2004, UNDERSTANDING PSYCHO
Kahn Jr P. H., 2006, P 15 IEEE INT S ROB, P364, DOI DOI
10.1109/R0MAN.2006.314461
KANDA T, 2005, P IEEE RSJ INT C INT, P527
Kanda T., 2003, P INT JOINT C ART IN, P177
Kanda T., 2005, P IEEE RSJ INT C INT, P62
Kidd CD, 2004, P IEEE RSJ INT C INT, P3559, DOI DOI 10.1109/IROS.2004.1389967
Mutlu B., 2006, P 15 IEEE INT S ROB, P74, DOI DOI 10.1109/ROMAN.2006.314397
Nomura T., 2005, P 14 IEEE INT WORKSH, P125, DOI DOI 10.1109/R0MAN.2005.1513768
Nomura T., 2006, P 15 IEEE INT S ROB, P372
Nomura T., 2006, P AAAI 06 WORKSH HUM, P29
Nomura T, 2006, AI SOC, V20, P138, DOI 10.1007/s00146-005-0012-7
Nomura T, 2006, INTERACT STUD, V7, P437, DOI 10.1075/is.7.3.14nom
Pribyl CB, 1998, JPN PSYCHOL RES, V40, P47, DOI 10.1111/1468-5884.00074
Raub A., 1981, THESIS U PENNSYLVANI
REEVES B, 2001, MEDIA EQUATION
SAKAMOTO M, 1998, JAPANESE J PSYCHOL, V48, P491
Sisbot E. A., 2005, P IEEE RAS INT C HUM, P181
Spielberger C.D., 1970, MANUAL STATE TRAIT A
Walters M. L., 2005, P IEEE INT WORKSH RO, P347
NR 28
TC 46
Z9 47
U1 2
U2 17
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD APR
PY 2008
VL 24
IS 2
BP 442
EP 451
DI 10.1109/TRO.2007.914004
PG 10
WC Robotics
SC Robotics
GA 284KP
UT WOS:000254705600017
DA 2018-01-22
ER

PT J
AU Hagita, N
Ishiguro, H
AF Hagita, Norihiro
Ishiguro, Hiroshi
TI Special issue on interactive humanoid robots; Guest editors: N. Hagita
and H. Ishiguro
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Editorial Material
C1 [Ishiguro, Hiroshi] Osaka Univ, Suita, Osaka 565, Japan.
NR 0
TC 1
Z9 1
U1 0
U2 1
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2008
VL 5
IS 1
BP 1
EP 2
PG 2
WC Robotics
SC Robotics
GA 286RT
UT WOS:000254864400001
DA 2018-01-22
ER

PT J
AU Moren, J
Ude, A
Koene, A
Cheng, G
AF Moren, Jan
Ude, Ales
Koene, Ansgar
Cheng, Gordon
TI Biologically based top-down attention modulation for humanoid
interactions
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE attention; low-level vision; humanoid interaction
ID EXTRASTRIATE AREA V4; VISUAL-ATTENTION; MODEL; SELECTION
AB An adaptive perception system enables humanoid robots to interact with humans
and their surroundings in a meaningful context-dependent manner. An important
foundation for visual perception is the selectivity of early vision processes that
enables the system to filter out low-level unimportant information while attending
to features indicated as important by higher-level processes by way of top-down
modulation. We present a novel way to integrate top-down and bottom-up processing
for achieving such attention-based filtering. We specifically consider the case
where the top-down target is not the most salient in any of the used submodalities.
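Editor's note: the scheme described above combines bottom-up conspicuity maps from several submodalities with top-down gains that favor the current target's features. The sketch below is a simple illustrative weighting of per-channel maps, not the specific biologically based model of the paper; the map names, gain rule, and data are assumptions.

# Hedged sketch: top-down reweighting of bottom-up conspicuity maps.
import numpy as np

def top_down_saliency(conspicuity_maps, target_feature_strengths):
    # conspicuity_maps: dict name -> normalized 2-D bottom-up map.
    # target_feature_strengths: dict name -> how strongly the target expresses
    # that feature, assumed supplied by a higher-level process.
    total = sum(target_feature_strengths.values())
    gains = {k: v / total for k, v in target_feature_strengths.items()}
    saliency = sum(gains[name] * cmap for name, cmap in conspicuity_maps.items())
    focus = np.unravel_index(np.argmax(saliency), saliency.shape)
    return saliency, focus

# Illustrative usage with random maps and a top-down preference for color.
h, w = 4, 4
maps = {"color": np.random.rand(h, w), "motion": np.random.rand(h, w)}
weights = {"color": 0.7, "motion": 0.3}
saliency_map, focus = top_down_saliency(maps, weights)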
C1 [Moren, Jan; Koene, Ansgar; Cheng, Gordon] NICT, Knowledge Creating Commun Res
Ctr, Seika, Kyoto 6190288, Japan.
[Moren, Jan; Ude, Ales; Koene, Ansgar; Cheng, Gordon] ATR Computat Neurosci
Labs, Dept Humanoid Robot & Computat Neurosci, Seika, Kyoto 6190288, Japan.
[Ude, Ales] Jozef Stefan Inst, Dept Automat Biocybernet & Robot, Ljubljana 1111,
Slovenia.
[Cheng, Gordon] JST ICORP Computat Brain Project, Kawaguchi, Saitama, Japan.
RP Moren, J (reprint author), NICT, Knowledge Creating Commun Res Ctr, 2-2-2
Hikaridai, Seika, Kyoto 6190288, Japan.
EM janmoren@atr.jp; aude@atr.jp; ansgar@atr.jp; gordon@atr.jp
RI Koene, Ansgar/P-8404-2014
OI Koene, Ansgar/0000-0002-7608-623X; Cheng, Gordon/0000-0003-0770-8717;
Ude, Ales/0000-0003-3677-3972
CR BALKENIUS C, 2004, ICPR WORKSH LEARN AD
BALKENIUS C, 2000, ANIMALS ANIMATS, V6, P256
Cave KR, 1999, PSYCHOL RES-PSYCH FO, V62, P182, DOI 10.1007/s004260050050
Chapman D., 1991, VISION INSTRUCTION A
Cheng GD, 2001, ROBOT AUTON SYST, V37, P161, DOI 10.1016/S0921-8890(01)00156-7
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Deco G, 2004, VISION RES, V44, P621, DOI 10.1016/j.visres.2003.09.037
Driscoll JA, 1998, 1998 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS - PROCEEDINGS, VOLS 1-3, P1968, DOI 10.1109/IROS.1998.724894
Fecteau JH, 2006, TRENDS COGN SCI, V10, P382, DOI 10.1016/j.tics.2006.06.011
Grigorescu SE, 2002, IEEE T IMAGE PROCESS, V11, P1160, DOI
10.1109/TIP.2002.804262
HILLYARD SA, 1994, J NEUROSCI, V14, P2190
Hopfinger JB, 2000, NAT NEUROSCI, V3, P284
Itti L, 1998, IEEE T PATTERN ANAL, V20, P1254, DOI 10.1109/34.730558
Itti L, 2003, PROC SPIE, V5200, P64, DOI 10.1117/12.512618
Itti Laurent, 2005, P576, DOI 10.1016/B978-012375731-9/50098-7
KOCH C, 1985, HUM NEUROBIOL, V4, P219
Lucas B. D., 1981, INT JOINT C ART INT, P674
MOREN J, 2002, THESIS LUND U COGNIT, V93
MOTTER BC, 1994, J NEUROSCI, V14, P2178
MOTTER BC, 1994, J NEUROSCI, V14, P2190
Muller NG, 2003, J NEUROSCI, V23, P9812
Itti L, 2006, CVPR, P2049
Oliva A., 2003, P IEEE RSJ INT C IM, p[418, 419]
Saenz M, 2002, NAT NEUROSCI, V5, P631, DOI 10.1038/nn876
Stasse O, 2000, LECT NOTES COMPUT SC, V1811, P150
TREISMAN AM, 1980, COGNITIVE PSYCHOL, V12, P97, DOI 10.1016/0010-0285(80)90005-5
TSOTSOS JK, 1995, ARTIF INTELL, V78, P507, DOI 10.1016/0004-3702(95)00025-9
Tsotsos J. K., 1989, P INT JOINT C ART IN, P1571
UDE A, 2005, P IEEE RAS INT C HUM, P381
Ude A., 2007, HUMANOID ROBOTS HUMA, P423
NR 30
TC 11
Z9 11
U1 0
U2 4
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2008
VL 5
IS 1
BP 3
EP 24
DI 10.1142/S0219843608001285
PG 22
WC Robotics
SC Robotics
GA 286RT
UT WOS:000254864400002
DA 2018-01-22
ER

PT J
AU Nomura, T
Suzuki, T
Kanda, T
Han, J
Shin, N
Burke, J
Kato, K
AF Nomura, Tatsuya
Suzuki, Tomohiro
Kanda, Takayuki
Han, Jeonghye
Shin, Namin
Burke, Jennifer
Kato, Kensuke
TI What people assume about humanoid and animal-type robots: Cross-cultural
analysis between Japan, Korea, and the United States
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE user studies; cross-cultural research; assumptions about robots
AB To broadly explore the rationale behind more socially acceptable robot design
and to investigate the psychological aspects of social acceptance of robotics, a
cross-cultural research instrument, the Robot Assumptions Questionnaire (RAQ) was
administered to the university students in Japan, Korea, and the United States,
focusing on five factors relating to humanoid and animal-type robots: relative
autonomy, social relationship with humans, emotional aspects, roles assumed, and
images held. As a result, it was found that (1) Students in Japan, Korea, and the
United States tend to assume that humanoid robots perform concrete tasks in
society, and that animal-type robots play a pet- or toy-like role; (2) Japanese
students tend to more strongly assume that humanoid robots have somewhat human
characteristics and that their roles are related to social activities including
communication, than do the Korean and the US students; (3) Korean students tend to
have more negative attitudes toward the social influences of robots, in particular,
humanoid robots, than do the Japanese students, while more strongly assuming that
robots' roles are related to medical fields than do the Japanese students, and (4)
Students in the USA tend to have both more positive and more negative images of
robots than do Japanese students, while more weakly assuming robots as blasphemous
of nature than do Japanese and Korean students. In addition, the paper discusses
some engineering implications of these research results.
C1 [Nomura, Tatsuya] Ryukoku Univ, Dept Med Informat, Shiga 5202194, Japan.
[Suzuki, Tomohiro] Toyo Univ, Grad Sch Sociol, Tokyo 1128506, Japan.
[Kanda, Takayuki] ATR Intelligent Robot & Commun Labs, Kyoto 6190288, Japan.
[Han, Jeonghye] Cheongju Natl Univ Educ, Dept Comp Educ, Chungbuk 361712, South
Korea.
[Shin, Namin] Dongguk Univ, Dept Educ, Seoul 100715, South Korea.
[Burke, Jennifer] Univ S Florida, Inst Safety Secur Rescue Technol, Tampa, FL
33620 USA.
[Kato, Kensuke] Kyushu Univ Hlth & Welf, Dept Clin Water, Miyazaki 8828508,
Japan.
RP Nomura, T (reprint author), Ryukoku Univ, Dept Med Informat, Shiga 5202194,
Japan.
EM nomura@rins.ryukoku.ac.jp; suzukirt@h9.dion.ne.jp; kanda@atr.jp;
hanjh@cje.ac.kr; naminshin@dongguk.edu; jennifer.l.burke@gmail.com;
DZL02550@nifty.ne.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR Bartneck C, 2007, AI SOC, V21, P217, DOI 10.1007/s00146-006-0052-7
DUVAL S, 2006, INT C HYBR INF TECHN, P227
Forlizzi J., 2006, 1st Annual Conference on Human-Robot Interaction, P258
Goetz J., 2003, IEEE INT WORKSH ROB, P55
Gould EW, 2000, IEEE PROFESSIONAL COMMUNICATION SOCIETY INTERNATIONAL
PROFESSIONAL COMMUNICATION CONFERENCE AND ACM SPECIAL INTEREST GROUP ON
DOCUMENTATION CONFERENCE, P161, DOI 10.1109/IPCC.2000.887273
Kaplan F, 2004, INT J HUM ROBOT, V1, P465, DOI DOI 10.1142/S0219843604000289
Kidd C. D., 2004, IEEE RSJ INT C INT R, P3559
Lynn L. H., 2003, AI & Society, V17, P241, DOI 10.1007/s00146-003-0280-z
MACDORMAN KF, 2006, 5 INT C COGN SCI VAN, P48
MAWHINNEY CH, 1993, C COMP PERS RES MISS, P356
MUTLU B, 2006, IEEE INT S ROB HUM I, P74
NOMURA T, 2006, IEEE INT S ROB HUM I, P372
NOMURA T, 2006, JOINT 3 INT C SOFT C, P1476
NOMURA T, 2005, IEEE INT WS ROBOTS H, P125
Nomura T., 2006, AAAI 06 WS HUM IMPL, P29
Nomura T, 2007, AI SOC, V21, P167, DOI 10.1007/s00146-006-0053-6
OESTREICHER L, 2006, IEEE INT S ROB HUM I, P91
Scopelliti M, 2004, DESIGNING A MORE INCLUSIVE WORLD, P257
Shibata T, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND
HUMAN INTERACTIVE COMMUNICATION, PROCEEDINGS, P397
Shibata T, 2002, IEEE ROMAN 2002, PROCEEDINGS, P23, DOI
10.1109/ROMAN.2002.1045592
SHIBATA T, 2004, INT WS ROBOT HUMAN I, P135
WEIL MM, 1995, COMPUT HUM BEHAV, V11, P95, DOI 10.1016/0747-5632(94)00026-E
Yamamoto S, 1983, ESPRIT AUJOURD HUI, V187, P136
NR 23
TC 18
Z9 18
U1 4
U2 12
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2008
VL 5
IS 1
BP 25
EP 46
DI 10.1142/S0219843608001297
PG 22
WC Robotics
SC Robotics
GA 286RT
UT WOS:000254864400003
DA 2018-01-22
ER

PT J
AU Komiya, K
Igarashi, T
Suzuki, H
Hirabayashi, Y
Waechter, J
Seo, N
AF Komiya, Kaori
Igarashi, Takashi
Suzuki, Hideo
Hirabayashi, Yoshihiro
Waechter, Jason
Seo, Norimasa
TI In vitro study of patient's and physician's radiation exposure in the
performance of epiduroscopy
SO REGIONAL ANESTHESIA AND PAIN MEDICINE
LA English
DT Article
DE epiduroscopy; radiation exposure; fluoroscopy
ID INTERVENTIONAL RADIOLOGICAL PROCEDURES; VERTEBROPLASTY; FLUOROSCOPY;
STAFF
AB Background and Objectives: Epiduroscopy is a minimally invasive diagnostic and
therapeutic technique, useful in the management of patients with back and leg pain.
However, the dose of radiation exposure by fluoroscopy during epiduroscopy is not
known. The endpoint of our study was to evaluate the amount of radiation exposure
for patients and health care workers during epiduroscopy.
Methods: First, we measured the radiation dose during a 10-minute fluoroscopy
exposure in humanoid models, which substituted for the patient and the physician.
Second, we measured the duration of fluoroscopy during our clinical epiduroscopy in
14 patients and observed for radiation injury in these patients.
Results: In the humanoid models, the patient model skin exposure dose over a 10-
minute period was measured as 238 mGy. The physician's exposure dose for 10 minutes
was measured as 0.67 mGy outside the lead apron and 0.0084 mGy inside the lead
apron. For the clinical epiduroscopic procedures, the average duration of
fluoroscopy was 9 minutes and 26 seconds. No skin injuries in the patients were
observed at a 1-month postprocedure assessment.
Conclusions: The radiological dosages in the patient humanoid model were less
than the threshold doses that could lead to organ injuries for 1 epiduroscopic
procedure. However, care should be taken for cumulative exposures in repeated
procedures.
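Editor's note: the figures above allow a simple worked estimate. If dose is assumed proportional to fluoroscopy time, the 10-minute phantom measurements can be scaled to the average clinical duration of 9 minutes 26 seconds; this linear scaling is an illustrative assumption, not a calculation made in the paper.

# Hedged arithmetic sketch: scale 10-minute phantom doses to the mean clinical time.
DOSE_PER_10_MIN = {                 # mGy per 10 minutes, from the phantom study
    "patient_skin": 238.0,
    "physician_outside_apron": 0.67,
    "physician_inside_apron": 0.0084,
}
avg_seconds = 9 * 60 + 26           # 566 s mean fluoroscopy time per procedure

for site, dose_10min in DOSE_PER_10_MIN.items():
    estimate = dose_10min * avg_seconds / 600.0   # assumes dose is linear in time
    print(f"{site}: ~{estimate:.3g} mGy per procedure")
# Approximate outputs: patient skin ~225 mGy; outside apron ~0.632 mGy;
# inside apron ~0.00792 mGy.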
C1 [Komiya, Kaori; Igarashi, Takashi; Suzuki, Hideo; Hirabayashi, Yoshihiro;
Waechter, Jason; Seo, Norimasa] Jichi Med Univ, Dept Anesthesiol & Crit Care Med,
Shimotsuke, Tochigi 3290498, Japan.
RP Komiya, K (reprint author), Jichi Med Univ, Dept Anesthesiol & Crit Care Med,
3311-1 Yakushiji, Shimotsuke, Tochigi 3290498, Japan.
EM komiya@jichi.ac.jp
CR Amy B, 2004, LANCET, V363, P345
Botwin KP, 2002, ARCH PHYS MED REHAB, V83, P697, DOI 10.1053/apmr.2002.32439
Crawley MT, 2004, BRIT J RADIOL, V77, P654, DOI 10.1259/bjr/22832251
Cusma JT, 1999, J AM COLL CARDIOL, V33, P427, DOI 10.1016/S0735-1097(98)00591-9
Fitousi NT, 2006, SPINE, V31, pE884, DOI 10.1097/01.brs.0000244586.02151.18
Geurts JW, 2002, REGION ANESTH PAIN M, V27, P343, DOI 10.1053/rapm.2001.27175
Harstall R, 2005, SPINE, V30, P1893, DOI 10.1097/01.brs.0000174121.48306.16
*INT COMM RAD PROT, 1990, ICRP PUBLICATION, V60
Lickfett L, 2004, CIRCULATION, V110, P3003, DOI
10.1161/01.CIR.0000146952.49223.11
Mahadevappa M, 2001, RADIOGRAPHICS, V21, P1033
MANCHIKANTI L, 2003, BMC ANESTHESIOL, V3, P2, DOI DOI 10.1186/1471-2253-3-2
McParland BJ, 1998, BRIT J RADIOL, V71, P1288, DOI 10.1259/bjr.71.852.10319003
McParland BJ, 1998, BRIT J RADIOL, V71, P175, DOI 10.1259/bjr.71.842.9579182
Mehdizade A, 2004, NEURORADIOLOGY, V46, P243, DOI 10.1007/s00234-003-1156-0
SABERSKI LR, 1995, ANESTH ANALG, V80, P839, DOI 10.1097/00000539-199504000-00035
Shope TB, 1996, RADIOGRAPHICS, V16, P1195, DOI
10.1148/radiographics.16.5.8888398
TAKASHI M, 1995, HUMAN LIFE ENV RAD C, P85
Vano E, 1998, BRIT J RADIOL, V71, P954, DOI 10.1259/bjr.71.849.10195011
Wagner LK, 1999, RADIOLOGY, V213, P773, DOI 10.1148/radiology.213.3.r99dc16773
NR 19
TC 6
Z9 6
U1 0
U2 1
PU W B SAUNDERS CO-ELSEVIER INC
PI PHILADELPHIA
PA 1600 JOHN F KENNEDY BOULEVARD, STE 1800, PHILADELPHIA, PA 19103-2899 USA
SN 1098-7339
J9 REGION ANESTH PAIN M
JI Region. Anesth. Pain Med.
PD MAR-APR
PY 2008
VL 33
IS 2
BP 98
EP 101
DI 10.1016/j.rapm.2007.07.015
PG 4
WC Anesthesiology
SC Anesthesiology
GA 271II
UT WOS:000253781700002
PM 18299088
DA 2018-01-22
ER

PT J
AU Kanda, T
Nabe, S
Hiraki, K
Ishiguro, H
Hagita, N
AF Kanda, Takayuki
Nabe, Shogo
Hiraki, Kazuo
Ishiguro, Hiroshi
Hagita, Norihiro
TI Human friendship estimation model for communication robots
SO AUTONOMOUS ROBOTS
LA English
DT Article; Proceedings Paper
CT Workshop on Robotics - Science and Systems
CY AUG 16-19, 2006
CL Univ Penn, Philadelphia, PA
HO Univ Penn
DE communication robot; friendship estimation; field trial; video analysis
AB Based on the analysis of non-verbal inter-human interaction, this paper proposes
a model for estimating human friendships in the presence of a humanoid robot. Our
previous study in an elementary school provided rich video data of two months of
inter-human interaction in the presence of a humanoid robot. Such data are
particularly useful for developing a robot's social ability: a friendship
estimation capability. We analyzed the video using an observational method to
examine the interactions among the children and the robot. From their non-verbal
interactions, several important factors for friendship estimation were retrieved,
including touch, gaze, co-presence, and distance. Gender was also considered a
factor in the model, since gender differences were observed in non-verbal
interactions. The model discriminated between friendly and non-friendly
relationships among the children with 74.5% accuracy for boys and 83.8% for girls.
C1 [Kanda, Takayuki; Nabe, Shogo; Hiraki, Kazuo; Ishiguro, Hiroshi; Hagita,
Norihiro] ATR, Kyoto 6190288, Japan.
[Nabe, Shogo; Hiraki, Kazuo] Univ Tokyo, Tokyo, Japan.
[Ishiguro, Hiroshi] Osaka Univ, Osaka, Japan.
RP Kanda, T (reprint author), ATR, Kyoto 6190288, Japan.
EM kanda@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR Argyle M., 1985, ANATOMY RELATIONSHIP
CHOUDHURY T, 2003, IEEE INT C UB COMP
Ekman P., 1978, MANUAL FACIAL ACTION
EVELAND JD, 1986, P 1986 ACM C COMP SU, P91
Eysenck H. J, 1972, ENCY PSYCHOL
Hall Edward T., 1996, HIDDEN DIMENSION
Ishiguro H, 2003, SPRINGER TRAC ADV RO, V6, P179
KAIHO H, 2006, ENCY PSYCHOL
Kanda T, 2004, P IEEE RSJ INT C INT, P2215
KANDA T, 2002, IEEE RSJ INT C INT R, P1265
KANDA T, 2004, INT C IND ENG APPL A
Kozima H., 2005, IEEE INT WORKSH ROB, P341, DOI [DOI 10.1109/ROMAN.2005.1513802,
10.1109/ROMAN.2005.1513802]
Ladd G.W., 1990, PEER REJECTION CHILD, P90
McConnell S., 1986, CHILDRENS SOCIAL BEH, P215
NOMURA S, 2002, 2002 INT S APPL INTE, P312
Shibata T, 2001, IEEE INT CONF ROBOT, P2572, DOI 10.1109/ROBOT.2001.933010
NR 16
TC 8
Z9 8
U1 0
U2 3
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD FEB
PY 2008
VL 24
IS 2
BP 135
EP 145
DI 10.1007/s10514-007-9052-9
PG 11
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA 252XS
UT WOS:000252481600003
DA 2018-01-22
ER

PT J
AU Morimoto, J
Endo, G
Nakanishi, J
Cheng, G
AF Morimoto, Jun
Endo, Gen
Nakanishi, Jun
Cheng, Gordon
TI A biologically inspired biped locomotion strategy for humanoid robots:
Modulation of sinusoidal patterns by a coupled oscillator model
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE biologically inspired approach; biped walking; central pattern
generator; coupled oscillator; humanoid robots
ID WALKING
AB Biological systems seem to have a simpler but more robust locomotion strategy
than that of the existing biped walking controllers for humanoid robots. We show
that a humanoid robot can step and walk using simple sinusoidal desired joint
trajectories with their phase adjusted by a coupled oscillator model. We use the
center-of-pressure location and velocity to detect the phase of the lateral robot
dynamics. This phase information is used to modulate the desired joint
trajectories. We do not explicitly use dynamical parameters of the humanoid robot.
We hypothesize that a similar mechanism may exist in biological systems. We
applied the proposed biologically inspired control strategy to our newly developed
human-sized humanoid robot, computational brain (CB), and a small-size humanoid
robot, enabling them to generate successful stepping and walking patterns.
C1 [Morimoto, Jun; Nakanishi, Jun; Cheng, Gordon] Japan Sci & Technol Agcy,
Kawaguchi, Saitama 3320012, Japan.
[Morimoto, Jun; Nakanishi, Jun; Cheng, Gordon] ATR, Comp Neurosci Labs, Kyoto
6190288, Japan.
[Endo, Gen] Tokyo Inst Technol, Dept Mech & Aerosp Engn, Tokyo 1528552, Japan.
RP Morimoto, J (reprint author), Japan Sci & Technol Agcy, Kawaguchi, Saitama
3320012, Japan.
EM xmorimo@atr.jp; gendo@sms.titech.ac.jp; jun@atr.jp; gordon@atr.jp
OI Cheng, Gordon/0000-0003-0770-8717
CR Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Doi M, 2004, IEEE INT CONF ROBOT, P3049, DOI 10.1109/ROBOT.2004.1307525
ENDO G, P IEEE 2004 INT C RO, P3063
ENDO G, 2005, IEEE INT C ROB AUT I, P598
Fukuoka Y, 2003, INT J ROBOT RES, V22, P187, DOI 10.1177/0278364903022003004
GRILLNER S, 1975, PHYSIOL REV, V55, P247
GRILLNER S, 1985, SCIENCE, V228, P143, DOI 10.1126/science.3975635
HIRAI K, P 1998 IEEE INT C RO, P160
HYON S, 2005, IEEE INT C ROB AUT B, P1456
Kajita S, 2003, ADV ROBOTICS, V17, P131, DOI 10.1163/156855303321165097
McMahon TA, 1984, MUSCLES REFLEXES LOC
MIURA H, 1984, INT J ROBOT RES, V3, P60, DOI 10.1177/027836498400300206
Miyakoshi S, 1998, IEEE RSJ INT C INT R, V1, P84
Morimoto J, 2003, ADV NEURAL INFORMATI, V15, P1563
Morimoto J, 2007, IEEE ROBOT AUTOM MAG, V14, P41, DOI 10.1109/MRA.2007.380654
Nagasaka K., 1999, P 17 ANN C ROB SOC J, P1193
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Perry J, 1992, GAIT ANAL NORMAL PAT
SHIK ML, 1976, PHYSIOL REV, V56, P465
Strogatz S.H., 1994, NONLINEAR DYNAMICS C
Sugiyama T, 2002, INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, VOLS I AND
II, PROCEEDINGS, P1404, DOI 10.1109/CIE.2002.1186269
TAGA G, 1991, BIOL CYBERN, V65, P147, DOI 10.1007/BF00198086
Tsuchiya K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1745
Westervelt ER, 2004, INT J ROBOT RES, V23, P559, DOI 10.1177/0278364904044410
WINTER DA, 1990, BIOMECHANIICS MOTOR
YAMASAKI N, 1999, ENCY FOOT
NR 26
TC 56
Z9 60
U1 0
U2 10
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD FEB
PY 2008
VL 24
IS 1
BP 185
EP 191
DI 10.1109/TRO.2008.915457
PG 7
WC Robotics
SC Robotics
GA 271LM
UT WOS:000253789900017
DA 2018-01-22
ER
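The abstract above describes modulating sinusoidal joint trajectories with a phase
estimated from the center-of-pressure state. A minimal sketch of that idea is given
below, assuming a single phase oscillator pulled toward the measured phase; the
frequency, coupling gain, amplitude and phase convention are illustrative
assumptions, not values from the article.

import math

# Minimal sketch: sinusoidal joint patterns modulated by a coupled phase
# oscillator (all constants below are illustrative assumptions).
OMEGA = 2.0 * math.pi * 0.5   # nominal stepping frequency [rad/s]
K_SYNC = 4.0                  # coupling gain toward the measured phase
AMPLITUDE = 0.15              # hip joint amplitude [rad]
DT = 0.002                    # control period [s]

def robot_phase(cop_y, cop_y_vel):
    """Phase of the lateral dynamics estimated from center-of-pressure
    position and velocity (atan2 convention is an assumption)."""
    return math.atan2(-cop_y_vel / OMEGA, cop_y)

def step(phi, cop_y, cop_y_vel):
    """Advance the controller phase, pulling it toward the robot phase,
    and return the desired sinusoidal hip joint angles."""
    phi_robot = robot_phase(cop_y, cop_y_vel)
    dphi = OMEGA + K_SYNC * math.sin(phi_robot - phi)
    phi = (phi + dphi * DT) % (2.0 * math.pi)
    hip_left = AMPLITUDE * math.sin(phi)
    hip_right = AMPLITUDE * math.sin(phi + math.pi)
    return phi, hip_left, hip_right

At each control tick the desired joint trajectories remain simple sinusoids of the
synchronized phase, which is the essence of the strategy summarized above.
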
PT J
AU Tani, J
Nishimoto, R
Namikawa, J
Ito, M
AF Tani, Jun
Nishimoto, Ryu
Namikawa, Jun
Ito, Masato
TI Codevelopmental learning between human and humanoid robot using a
dynamic neural-network model
SO IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS
LA English
DT Article
DE compositionality; continuous-time recurrent neural network (CTRNN);
development learning; humanoid robot
ID PARIETAL CORTEX; SYSTEMS; ORGANIZATION; INTENTION; APRAXIA; TASK
AB This paper examines characteristics of interactive learning between human tutors
and a robot having a dynamic neural-network model, which is inspired by human
parietal cortex functions. A humanoid robot, with a recurrent neural network that
has a hierarchical structure, learns to manipulate objects. Robots learn tasks in
repeated self-trials with the assistance of human interaction, which provides
physical guidance until the tasks are mastered and learning is consolidated within
the neural networks. Experimental results and the analyses showed the following: 1)
codevelopmental shaping of task behaviors stems from interactions between the robot
and a tutor; 2) dynamic structures for articulating and sequencing of behavior
primitives are self-organized in the hierarchically organized network; and 3) such
structures can afford both generalization and context dependency in generating
skilled behaviors.
C1 [Tani, Jun; Nishimoto, Ryu; Namikawa, Jun] Brain Sci Inst, RIKEN, Wako, Saitama
3510198, Japan.
[Ito, Masato] Sony Corp, Tokyo 1698050, Japan.
RP Tani, J (reprint author), Brain Sci Inst, RIKEN, Wako, Saitama 3510198, Japan.
CR ARBIB MA, 1981, HDB PHYSL NERVOUS SY, V2, P1448
Atkeson C. G., 1997, P 14 INT C MACH LEAR, P12
BERTA A, 1987, Bulletin of the Florida State Museum Biological Sciences, V31, P1
Bianco R, 2004, ADAPT BEHAV, V12, P37, DOI 10.1177/105971230401200102
Calinon S, 2007, IEEE T SYST MAN CY B, V37, P286, DOI 10.1109/TSMCB.2006.886952
Demiris J, 2002, FROM ANIM ANIMAT, P327
DEMIRIS Y, 2002, IMITATION ANIMALS AR, P1448
Doya K, 2002, NEURAL COMPUT, V14, P1347, DOI 10.1162/089976602753712972
DOYA K, 1989, P 1989 INT JOINT C N, V1, P27
Duhamel JR, 1998, J NEUROPHYSIOL, V79, P126
Fogassi L, 2005, SCIENCE, V308, P662, DOI 10.1126/science.1106138
GESCHWIND N, 1962, NEUROLOGY, V12, P675, DOI 10.1212/WNL.12.10.675
HEILMAN KM, 1973, BRAIN, V96, P861, DOI 10.1093/brain/96.4.861
Horn B, 1986, ROBOT VISION
Husserl E., 1964, PHENOMENOLOGY INTERN
Iacoboni M, 2006, NEUROPSYCHOLOGIA, V44, P2691, DOI
10.1016/j.neuropsychologia.2006.04.029
IJSPEERT AJ, 2003, ADV NEURAL INFORMATI, V17, P1547
Inamura T, 2001, IEEE INT CONF ROBOT, P4208, DOI 10.1109/ROBOT.2001.933275
ITO S, 1993, PEDIATR INFECT DIS J, V12, P2, DOI 10.1097/00006454-199301000-00002
Jacobs RA, 1991, NEURAL COMPUT, V3, P79, DOI 10.1162/neco.1991.3.1.79
JEANNEROD M, 1994, BEHAV BRAIN SCI, V17, P187, DOI 10.1017/S0140525X00034026
Jordan M. I., 1986, P 8 ANN C COGN SCI S, P531
KAPLAN F, 2004, P 3 ICDL DEV SOC BRA, P2
KAWATO M, 1987, BIOL CYBERN, V57, P169, DOI 10.1007/BF00364149
KHATIB O, 1986, INT J ROBOT RES, V5, P90, DOI 10.1177/027836498600500106
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Liepmann H, 1920, ERGEBNISSE GESAMTEN, P516
LIN LJ, 1989, THESIS CARNEGIE MELL
LIN LJ, 1992, P 2 INT C SIM AD BEH, P271
Lungarella M, 2003, CONNECT SCI, V15, P151, DOI 10.1080/09540090310001655110
Luria A. R., 1973, WORKING BRAIN
MCDONALD S, 1994, BRAIN COGNITION, V25, P250, DOI 10.1006/brcg.1994.1035
Metta G, 1999, NEURAL NETWORKS, V12, P1413, DOI 10.1016/S0893-6080(99)00070-2
Nagai Y, 2003, CONNECT SCI, V15, P211, DOI 10.1080/09540090310001655101
NAMIKAWA J, 2007, LEARNING SEGMENT TEM
OHSHIMA F, 1998, J JAPANESE NEUROPSYC, V14, P42
Paine RW, 2005, ADAPT BEHAV, V13, P211, DOI 10.1177/105971230501300303
PAUL RP, 1982, ROBOT MANIPULATORS
Rumelhart D. E., 1986, PARALLEL DISTRIBUTED
Rumelhart D.E., 1986, PARALLEL DISTRIBUTED, V1, P318
SAKATA H, 1995, CEREB CORTEX, V5, P429, DOI 10.1093/cercor/5.5.429
Tani J, 1996, IEEE T SYST MAN CY B, V26, P421, DOI 10.1109/3477.499793
Tani J, 2004, J CONSCIOUSNESS STUD, V11, P5
Tani J, 1999, NEURAL NETWORKS, V12, P1131, DOI 10.1016/S0893-6080(99)00060-X
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
Tani J., 1998, J CONSCIOUSNESS STUD, V5, P516
TANI J, 1998, ANIMALS ANIMATS, V5
Williams RJ, 1989, NEURAL COMPUT, V1, P270, DOI 10.1162/neco.1989.1.2.270
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yokochi H, 2003, SOMATOSENS MOT RES, V20, P115, DOI 10.1080/0899022031000105145
NR 51
TC 27
Z9 27
U1 1
U2 8
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1083-4419
EI 1941-0492
J9 IEEE T SYST MAN CY B
JI IEEE Trans. Syst. Man Cybern. Part B-Cybern.
PD FEB
PY 2008
VL 38
IS 1
BP 43
EP 59
DI 10.1109/TSMCB.2007.907738
PG 17
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Computer Science, Cybernetics
SC Automation & Control Systems; Computer Science
GA 254TU
UT WOS:000252611700006
PM 18270081
DA 2018-01-22
ER
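The hierarchically organized recurrent network referred to above is a
continuous-time recurrent neural network (CTRNN); a generic discretized
leaky-integrator update of that kind is sketched below, with sizes, time constants
and weights chosen only for illustration, not taken from the article.

import numpy as np

# Generic discretized CTRNN update (Euler step); sizes, time constants and
# weights are illustrative assumptions.
rng = np.random.default_rng(0)
N_IN, N_HID = 8, 20
TAU = np.full(N_HID, 5.0)                 # leaky-integrator time constants
W_REC = rng.normal(0, 0.1, (N_HID, N_HID))
W_IN = rng.normal(0, 0.1, (N_HID, N_IN))

def ctrnn_step(u, x):
    """One Euler step of the internal state u given external input x."""
    y = np.tanh(u)                         # unit activations
    du = (-u + W_REC @ y + W_IN @ x) / TAU
    return u + du                          # time step folded into TAU

u = np.zeros(N_HID)
for _ in range(100):
    u = ctrnn_step(u, np.zeros(N_IN))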

PT J
AU Peters, J
Schaal, S
AF Peters, Jan
Schaal, Steffan
TI Learning to control in operational space
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE operational space control; robot learning; reinforcement learning;
reward-weighted regression
ID MITSUBISHI PA-10; MANIPULATORS; ARM
AB One of the most general frameworks for phrasing control problems for complex,
redundant robots is operational-space control. However, while this framework is of
essential importance for robotics and well understood from an analytical point of
view, it can be prohibitively hard to achieve accurate control in the face of
modeling errors, which are inevitable in complex robots (e.g. humanoid robots). In
this paper, we suggest a learning approach for operational-space control as a
direct inverse model learning problem. A first important insight for this paper is
that a physically correct solution to the inverse problem with redundant degrees of
freedom does exist when learning of the inverse map is performed in a suitable
piecewise linear way. The second crucial component of our work is based on the
insight that many operational-space controllers can be understood in terms of a
constrained optimal control problem. The cost function associated with this optimal
control problem allows us to formulate a learning algorithm that automatically
synthesizes a globally consistent desired resolution of redundancy while learning
the operational-space controller. From the machine learning point of view, this
learning problem corresponds to a reinforcement learning problem that maximizes an
immediate reward. We employ an expectation-maximization policy search algorithm in
order to solve this problem. Evaluations on a three degrees-of-freedom robot arm
are used to illustrate the suggested approach. The application to a physically
realistic simulator of the anthropomorphic SARCOS Master arm demonstrates
feasibility for complex high-degree-of-freedom robots. We also show that the
proposed method works in the setting of learning resolved motion rate control on a
real, physical Mitsubishi PA-10 medical robotics arm.
C1 [Peters, Jan] Univ Tubingen, Max Planck Inst Biol Cybernet, D-72076 Tubingen,
Germany.
[Peters, Jan; Schaal, Steffan] Univ So Calif, Los Angeles, CA 90089 USA.
[Schaal, Steffan] ATR Computat Neurosci Lab, Kyoto 6190288, Japan.
RP Peters, J (reprint author), Univ Tubingen, Max Planck Inst Biol Cybernet,
Spemannstr 38, D-72076 Tubingen, Germany.
EM mail@jan-peters.net
RI Peters, Jan/D-5068-2009
OI Peters, Jan/0000-0002-5266-8091
CR Atkeson CG, 1997, ARTIF INTELL REV, V11, P75, DOI 10.1023/A:1006511328852
Atkeson CG, 1997, ARTIF INTELL REV, V11, P11, DOI 10.1023/A:1006559212014
Bruyninckx H., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P2563, DOI 10.1109/ROBOT.2000.846414
BULLOCK D, 1993, J COGNITIVE NEUROSCI, V5, P408, DOI 10.1162/jocn.1993.5.4.408
Dayan P, 1997, NEURAL COMPUT, V9, P271, DOI 10.1162/neco.1997.9.2.271
DELUCA A, 1991, P IEEE INT C ROB AUT
DOTY KL, 1993, INT J ROBOT RES, V12, P1, DOI 10.1177/027836499301200101
DSOUZA A, 2001, P IEEE RSJ INT C INT
Farrell J., 2006, ADAPTIVE APPROXIMATI
GUEZ A, 1988, P IEEE INT C NEUR NE, P102
HARUNO M, 1999, ADV NEURAL INFORM PR
HSU P, 1989, J ROBOTIC SYST, V6, P133, DOI 10.1002/rob.4620060203
ISRAEL A. B., 2003, GEN INVERSES THEORY
JORDAN MI, 1992, COGNITIVE SCI, V16, P307, DOI 10.1016/0364-0213(92)90036-T
Kaebling L. P., 1996, J ARTIFICIAL INTELLI, V4, P237
Kennedy CW, 2005, IEEE-ASME T MECH, V10, P263, DOI 10.1109/TMECH.2005.848290
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
MacKay D. J. C., 2003, INFORM THEORY INFERE
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
Nakanishi J, 2005, 2005 IEEE/RSJ International Conference on Intelligent Robots
and Systems, Vols 1-4, P1575
NAKANISHI J, 2004, P IEEE INT C ROB AUT, P2647
Nakanishi J, 2007, 2007 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-9, P2526
NGUYENTUONG D, 2007, THESIS MAX PLANCK I
Park J., 1999, Proceedings 1999 IEEE/RSJ International Conference on Intelligent
Robots and Systems. Human and Environment Friendly Robots with High Intelligence
and Emotional Quotients (Cat. No.99CH36289), P1495, DOI 10.1109/IROS.1999.811690
Peters J, 2005, 2005 IEEE/RSJ International Conference on Intelligent Robots and
Systems, Vols 1-4, P3522, DOI 10.1109/IROS.2005.1545516
Peters J., 2007, P INT C MACH LEARN I
Peters J, 2008, IEEE INT CONF ROBOT, P2872, DOI 10.1109/ROBOT.2008.4543645
Rasmussen CE, 2005, ADAPT COMPUT MACH LE, P1
Schaal S, 2002, APPL INTELL, V17, P49, DOI 10.1023/A:1015727715131
Scholkopf B., 2002, LEARNING KERNELS SUP
SCIAVICCO L, 2007, MODELING CONTROL ROB
SENTIS L, 2005, P IEEE INT C ROB AUT, P1730
SNELSON E, 2007, P ART INT STAT AISTA, V11
Spall JC, 2003, INTRO STOCHASTIC SEA
Spong M. W., 2006, ROBOT MODELLING CONT
TEVATIA G, 2000, P INT C ROB AUT ICRA
TING JA, 2006, P ROB SCI SYST C
UDWADIA FE, 2005, COMMUNICATION
Udwadia F. E., 1996, ANAL DYNAMICS NEW AP
Vijayakumar S, 2005, NEURAL COMPUT, V17, P2602, DOI 10.1162/089976605774320557
VIJAYAKUMAR S, 2002, AUTON ROBOT, V12, P59
NR 41
TC 77
Z9 77
U1 2
U2 17
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD FEB
PY 2008
VL 27
IS 2
BP 197
EP 212
DI 10.1177/0278364907087548
PG 16
WC Robotics
SC Robotics
GA 264VI
UT WOS:000253318100004
DA 2018-01-22
ER
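The abstract above casts operational-space control as a reinforcement learning
problem that maximizes an immediate reward, solved with an expectation-maximization
policy search. One way to picture the core update is a reward-weighted regression,
sketched below; the exponential reward weighting, linear policy form and synthetic
data are assumptions for illustration only.

import numpy as np

# Reward-weighted regression sketch: fit linear policy parameters by
# weighting sampled (state, action) pairs with their immediate rewards.
def reward_weighted_regression(states, actions, rewards, beta=1.0):
    """states: (T, d), actions: (T, k), rewards: (T,).
    Returns a gain matrix mapping state features to actions."""
    w = np.exp(beta * (rewards - rewards.max()))    # soft reward weights
    W = np.diag(w)
    # Weighted least squares: theta = (S^T W S)^-1 S^T W A
    StWS = states.T @ W @ states
    StWA = states.T @ W @ actions
    return np.linalg.solve(StWS, StWA)

# Tiny synthetic example (data are made up for illustration only).
rng = np.random.default_rng(1)
S = rng.normal(size=(200, 3))
A = S @ np.array([[1.0], [-0.5], [0.2]]) + 0.1 * rng.normal(size=(200, 1))
R = -np.sum((A - S[:, :1]) ** 2, axis=1)            # arbitrary reward
theta = reward_weighted_regression(S, A, R)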

PT J
AU Endo, G
Morimoto, J
Matsubara, T
Nakanishi, J
Cheng, G
AF Endo, Gen
Morimoto, Jun
Matsubara, Takamitsu
Nakanishi, Jun
Cheng, Gordon
TI Learning CPG-based biped locomotion with a policy gradient method:
Application to a humanoid robot
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE humanoid robots; reinforcement learning; bipedal locomotion; central
pattern generator
ID DYNAMIC WALKING
AB In this paper we describe a learning framework for a central pattern generator
(CPG)-based biped locomotion controller using a policy gradient method. Our goals
in this study are to achieve CPG-based biped walking with a 3D hardware humanoid
and to develop an efficient learning algorithm with CPG by reducing the
dimensionality of the state space used for learning. We demonstrate that an
appropriate feedback controller can be acquired within a few thousand trials by
numerical simulations and the controller obtained in numerical simulation achieves
stable walking with a physical robot in the real world. Numerical simulations and
hardware experiments evaluate the walking velocity and stability. The results
suggest that the learning algorithm is capable of adapting to environmental
changes. Furthermore, we present an online learning scheme with an initial policy
for a hardware robot to improve the controller within 200 iterations.
C1 [Endo, Gen] Tokyo Inst Technol, Meguro Ku, Tokyo 1528550, Japan.
[Morimoto, Jun; Nakanishi, Jun; Cheng, Gordon] ATR Computat Neurosci Labs, Japan
Sci & Technol Agncy, Computat Brain Project, ICORP, Kyoto 61902, Japan.
[Matsubara, Takamitsu] ATR Computat Neurosci Labs, Kyoto 6190288, Japan.
[Matsubara, Takamitsu] Nara Inst Sci & Technol, Nara 6300192, Japan.
RP Endo, G (reprint author), Tokyo Inst Technol, Meguro Ku, 2-12-1 Ookayama, Tokyo
1528550, Japan.
EM gendo@sms.titech.ac.jp; xmorimo@atr.jp; takam-m@atr.jp; jun@atr.jp;
gordon@atr.jp
OI Cheng, Gordon/0000-0003-0770-8717
CR Aoi S, 2005, AUTON ROBOT, V19, P219, DOI 10.1007/s10514-005-4051-1
Baxter J, 2001, J ARTIF INTELL RES, V15, P319
Benbrahim H, 1997, ROBOT AUTON SYST, V22, P283, DOI 10.1016/S0921-8890(97)00043-
2
Cohen AH, 1999, AUTON ROBOT, V7, P239, DOI 10.1023/A:1008920420634
COHEN AH, 2003, P 2 INT S AD MOT AN
Doya K, 2000, NEURAL COMPUT, V12, P219, DOI 10.1162/089976600300015961
Endo G, 2004, IEEE INT CONF ROBOT, P3036, DOI 10.1109/ROBOT.2004.1307523
Endo G., 2005, P 2005 IEEE INT C RO, P598
ENDO G, 2005, P 20 NAT C ART INT 9, P1267
GRILLNER S, 1995, TRENDS NEUROSCI, V18, P270, DOI 10.1016/0166-2236(95)93914-J
Hase K, 1998, ANTHROPOL SCI, V106, P327, DOI 10.1537/ase.106.327
HASE K, 1997, P 6 INT S COMP SIM B, P9
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirukawa H, 2004, ROBOT AUTON SYST, V48, P165, DOI 10.1016/j.robot.2004.07.007
Ishiguro A, 2003, ADAPT BEHAV, V11, P7, DOI 10.1177/10597123030111001
Kimura H., 1998, P 15 INT C MACH LEAR, P278
Kimura H, 2007, PHILOS T R SOC A, V365, P153, DOI 10.1098/rsta.2006.1919
Konda VR, 2003, SIAM J CONTROL OPTIM, V42, P1143, DOI 10.1137/S0363012901385691
Kuroki Y., 2001, P IEEE RAS INT C HUM, P181
Matsubara T, 2006, ROBOT AUTON SYST, V54, P911, DOI 10.1016/j.robot.2006.05.012
MATSUOKA K, 1985, BIOL CYBERN, V52, P367, DOI 10.1007/BF00449593
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
MCMOHAN TA, 1984, MUSCLES REFLEXES LOC
Miyakoshi S, 1998, 1998 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS - PROCEEDINGS, VOLS 1-3, P84, DOI 10.1109/IROS.1998.724601
Mori T, 2004, PROCEEDING OF THE NINETEENTH NATIONAL CONFERENCE ON ARTIFICIAL
INTELLIGENCE AND THE SIXTEENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL
INTELLIGENCE, P623
MORIMOTO J, 2005, P IEEE INT C ROB AUT, P2381
Nishiwaki K, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P1559, DOI 10.1109/IROS.2000.895195
Orlovsky G, 1999, NEURONAL CONTROL LOC
Park Ill-Woo, 2005, P IEEE RAS INT C HUM, P321
PETERS J, 2003, P 3 IEEE INT C HUM R
Sutton RS, 2000, ADV NEUR IN, V12, P1057
TAGA G, 1995, BIOL CYBERN, V73, P97, DOI 10.1007/BF00204048
Tedrake R., 2004, P IEEE INT C INT ROB, P2849
Williamson MM, 1998, NEURAL NETWORKS, V11, P1379, DOI 10.1016/S0893-
6080(98)00048-3
NR 34
TC 86
Z9 88
U1 3
U2 24
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD FEB
PY 2008
VL 27
IS 2
BP 213
EP 228
DI 10.1177/0278364907084980
PG 16
WC Robotics
SC Robotics
GA 264VI
UT WOS:000253318100005
DA 2018-01-22
ER
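The learning framework summarized above applies a policy gradient method to a
low-dimensional feedback policy acting on the CPG. A minimal likelihood-ratio
(REINFORCE-style) update for a Gaussian feedback policy is sketched below; the
policy form, noise level, learning rate and example data are illustrative
assumptions, not the article's settings.

import numpy as np

# Sketch of a policy-gradient update for a CPG feedback policy: a Gaussian
# policy outputs a feedback signal from a low-dimensional state
# (e.g., body roll angle and roll velocity).
ALPHA = 0.01      # learning rate (assumed)
SIGMA = 0.1       # exploration noise (assumed)

def sample_action(theta, s, rng):
    """Sample an action and return the log-likelihood gradient."""
    mean = theta @ s
    a = mean + SIGMA * rng.normal()
    grad_logp = (a - mean) / SIGMA**2 * s          # d log pi / d theta
    return a, grad_logp

def episode_update(theta, states, rng, reward_fn):
    """One episodic policy-gradient step (no baseline, for brevity)."""
    grads, rewards = [], []
    for s in states:
        a, g = sample_action(theta, s, rng)
        grads.append(g)
        rewards.append(reward_fn(s, a))
    ret = np.sum(rewards)                          # episodic return
    return theta + ALPHA * ret * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
theta = np.zeros(2)
states = [np.array([0.1, 0.0]) for _ in range(50)]
theta = episode_update(theta, states, rng, lambda s, a: -(a ** 2))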

PT J
AU Cheng, G
AF Cheng, Gordon
TI Humanoid technologies: "Know-how"
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Editorial Material
ID NEUROSCIENCE; PLATFORM
C1 [Cheng, Gordon] Natl Inst Informat & Commun Technol, ATR Computat Neurosci Labs,
Dept Humanoid Robot & Computat Neurosci, Kyoto, Japan.
[Cheng, Gordon] Japan Sci & Technol Agcy, ICORP Computat Brain Project, Saitama,
Japan.
RP Cheng, G (reprint author), Natl Inst Informat & Commun Technol, ATR Computat
Neurosci Labs, Dept Humanoid Robot & Computat Neurosci, Kyoto, Japan.
EM gordon@atr.jp
OI Cheng, Gordon/0000-0003-0770-8717; Cannata, Giorgio/0000-0001-7932-5411
CR Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Hashimoto S., 1998, IARP 1 INT WORKSH HU, P1
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirukawa H, 2004, ROBOT AUTON SYST, V48, P165, DOI 10.1016/j.robot.2004.07.007
METTA G, 2000, AAAI FALL S CAP COD
Nagakubo A, 2003, ADV ROBOTICS, V17, P149, DOI 10.1163/156855303321165105
NISHIWAKI K, 2000, P IEEE RSJ INT C INT, P1599
Park I. W., 2005, INT J HUM ROBOT, V2, P519
RODNEY A, 1998, IARP 1 INT WORKSH HU, P1
Tsagarakis NG, 2007, ADV ROBOTICS, V21, P1151, DOI 10.1163/156855307781389419
NR 10
TC 11
Z9 11
U1 0
U2 1
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JAN 31
PY 2008
VL 56
IS 1
BP 1
EP 3
DI 10.1016/j.robot.2007.09.008
PG 3
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 259KN
UT WOS:000252938300001
DA 2018-01-22
ER

PT J
AU Hosoda, K
Takuma, T
Nakamoto, A
Hayashi, S
AF Hosoda, Koh
Takuma, Takashi
Nakamoto, Atsushi
Hayashi, Shinji
TI Biped robot design powered by antagonistic pneumatic actuators for
multi-modal locomotion
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article; Proceedings Paper
CT 6th IEEE/RAS International Conference on Humanoid Robots
CY DEC 04-06, 2006
CL Genoa, ITALY
SP IEEE, RAS
DE biped locomotion; humanoid robots; multimodal locomotion; pneumatic
actuators
ID PASSIVE-DYNAMIC WALKING; WALKERS; MUSCLES
AB An antagonistic muscle mechanism that regulates joint compliance contributes
enormously to human dynamic locomotion. Antagonism is considered to be the key for
realizing more than one locomotion mode. In this paper, we demonstrate how
antagonistic pneumatic actuators can be utilized to achieve three dynamic
locomotion modes (walking, jumping, and running) in a biped robot. Firstly, we
discuss the contribution of joint compliance to dynamic locomotion, which
highlights the importance of tunable compliance. Secondly, we introduce the design
of a biped robot powered by antagonistic pneumatic actuators. Lastly, we apply
simple feedforward controllers for realizing walking, jumping, and running and
confirm the contribution of joint compliance to such multimodal dynamic locomotion.
Based on the results, we can conclude that the antagonistic pneumatic actuators are
superior candidates for constructing a human-like dynamic locomotor. (c) 2007
Elsevier B.V. All rights reserved.
C1 [Hosoda, Koh; Takuma, Takashi; Nakamoto, Atsushi; Hayashi, Shinji] Osaka Univ,
Grad Sch Engn, Dept Adapt Machine Syst, Suita, Osaka 565, Japan.
RP Hosoda, K (reprint author), Yamadaoka 2-1, Suita, Osaka 5650871, Japan.
EM hosoda@ams.eng.osaka-u.ac.jp; takuma@ams.eng.osaka-u.ac.jp;
shinji.hayashi@ams.eng.osaka-u.ac.jp
CR Ahmadi M, 1997, IEEE T ROBOTIC AUTOM, V13, P96, DOI 10.1109/70.554350
Asano F, 2001, IEEE T SYST MAN CY A, V31, P737, DOI 10.1109/3468.983431
Brown IE, 2000, BIOMECHANICS AND NEURAL CONTROL OF POSTURE AND MOVEMENT, P148
Caldwell DG, 1998, IEEE INT CONF ROBOT, P3053, DOI 10.1109/ROBOT.1998.680894
Caldwell DG, 1997, IEEE INT CONF ROBOT, P799, DOI 10.1109/ROBOT.1997.620132
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Collins SH, 2001, INT J ROBOT RES, V20, P607, DOI 10.1177/02783640122067561
Collins S., 2005, P 2005 IEEE INT C RO, P1995
FULL RJ, 2000, ROBOTICS RES, V9, P337
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
HURST JW, 2007, IEEE C ROB AUT
KAJITA S, 2005, P IEEE INT C ROB AUT, P618
Kuo AD, 2002, J BIOMECH ENG-T ASME, V124, P113, DOI 10.1115/1.1427703
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
NAGASAKA K, 2001, P 2004 IEEE INT C RO, P3189
Ogura Y, 2005, P 2005 IEEE INT C RO, P605
Ono K, 2001, INT J ROBOT RES, V20, P953, DOI 10.1177/02783640122068218
Raibert M.H., 1986, LEGGED ROBOTS BALANC
Seyfarth A, 2002, J BIOMECH, V35, P649, DOI 10.1016/S0021-9290(01)00245-7
Sugimoto Y., 2002, P 5 INT C CLIMB WALK, P123
Takuma T, 2006, INT J ROBOT RES, V25, P861, DOI 10.1177/0278364906069187
TEDRAKE R, 2004, P IEEE INT C ROB AUT, P4656
Umedachi T., 2007, J ROBOTICS MECHATRON, V19, P27
van der Linde RQ, 1999, IEEE T ROBOTIC AUTOM, V15, P599, DOI 10.1109/70.781963
van der Linde RQ, 1999, BIOL CYBERN, V81, P227, DOI 10.1007/s004220050558
Verrelst B, 2005, AUTON ROBOT, V18, P201, DOI 10.1007/s10514-005-0726-x
VUKOBRATOVIC M, 1970, IEEE T BIO-MED ENG, VBM17, P25, DOI
10.1109/TBME.1970.4502681
Wisse M, 2005, IEEE T ROBOT, V21, P393, DOI 10.1109/TRO.2004.838030
Wisse M., 2003, P INT S AD MOT AN MA
Wisse M., 2004, IEEE RAS RSJ INT C H
NR 30
TC 67
Z9 69
U1 2
U2 26
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JAN 31
PY 2008
VL 56
IS 1
BP 46
EP 53
DI 10.1016/j.robot.2007.09.010
PG 8
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 259KN
UT WOS:000252938300005
DA 2018-01-22
ER

PT J
AU Sugihara, T
Yamamoto, K
Nakamura, Y
AF Sugihara, Tomomichi
Yamamoto, Kou
Nakamura, Yoshihiko
TI Hardware design of high performance miniature anthropomorphic robots
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article; Proceedings Paper
CT 6th IEEE/RAS International Conference on Humanoid Robots
CY DEC 04-06, 2006
CL Genoa, ITALY
SP IEEE, RAS
DE miniature anthropomorphic robot; mechanical design; system design
AB Technical issues in designing reliable high-performance miniature humanoid
robots are discussed. Although the light-weight and small-size body of such a
humanoid robot facilitates safer and smoother experiments of agile motions
involving large accelerations and impacts, building a complex humanoid system in a
small body is still a challenging problem. Simultaneous requirements for wider
motion ranges, higher rigidity, less fragile electric devices and circuits, and
less sensitive electric wiring should be satisfied in a limited space. To meet
these requirements, we propose to mechanically modularize the joint structures and
to equip a centralized electric control unit comprising PC
boards, sensing devices, signal-processors, communication boards, and power
amplifiers. The developments of the mechanical modules and the centralized unit are
made for two miniature humanoid robots. (c) 2007 Elsevier B.V. All rights reserved.
C1 [Sugihara, Tomomichi] Kyushu Univ, Sch Informat Sci & Elect Engn, Nishi Ku,
Fukuoka 8190395, Japan.
[Yamamoto, Kou; Nakamura, Yoshihiko] Univ Tokyo, Sch Informat Sci & Technol,
Bunkyo Ku, Tokyo 1138656, Japan.
RP Sugihara, T (reprint author), Kyushu Univ, Sch Informat Sci & Elect Engn, Nishi
Ku, 744 Motooka, Fukuoka 8190395, Japan.
EM zhidao@ieee.org
RI Nakamura, Yoshihiko/M-1019-2014
OI Nakamura, Yoshihiko/0000-0001-7162-5102; Yamamoto,
Ko/0000-0002-9558-3880
CR BALTES J, 2005, 7 ROBOCUP COMP C
Buschmann T, 2006, IEEE INT CONF ROBOT, P2673
Fujimoto Y, 1998, IEEE ROBOT AUTOM MAG, V5, P33, DOI 10.1109/100.692339
FURUTA T, 2000, 1 IEEE RAS INT C HUM
Gienger M., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P3334, DOI 10.1109/ROBOT.2000.845224
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
HWANG Y, 2003, P 2003 IEEE INT C RO, P31
Inaba M, 1998, ADV ROBOTICS, V12, P1
INABA M, 1995, IROS '95 - 1995 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT
ROBOTS AND SYSTEMS: HUMAN ROBOT INTERACTION AND COOPERATIVE ROBOTS, PROCEEDINGS,
VOL 3, P297, DOI 10.1109/IROS.1995.525899
Inaba M., 1993, P 6 INT S ROB RES, P335
KAGAMI S, 2001, LECT NOTES CONTROL I, P41
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kim JY, 2005, P IEEE INT C ROB AUT, P1443
KUROKI Y, 2003, P 2003 IEEE INT C RO, P471
Nagasaka K, 1997, IEEE INT CONF ROBOT, P2944, DOI 10.1109/ROBOT.1997.606734
NORDIN P, 1999, P 4 INT S ART LIF RO
Shirata S., 2004, P 2004 IEEE RSJ INT, P148
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
Sugihara T., 2005, P 2005 IEEE INT C RO, P306
Tawara T, 2004, INT J ROBOT RES, V23, P1097, DOI 10.1177/0278364904047395
VITOR MF, 2005, P 2005 IEEE RAS INT, P86
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
WAITA H, 2001, P 19 ANN C ROB SOC J, P787
Yamaguchi J, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P368, DOI 10.1109/ROBOT.1999.770006
YAMASAKI F, 2000, 1 IEEE RAS INT C HUM
Yamasaki K, 2006, PRACTICAL FRUITS ECO, P43, DOI 10.1007/4-431-28915-1_7
NR 27
TC 14
Z9 14
U1 0
U2 3
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JAN 31
PY 2008
VL 56
IS 1
BP 82
EP 94
DI 10.1016/j.robot.2007.09.012
PG 13
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 259KN
UT WOS:000252938300008
DA 2018-01-22
ER

PT J
AU Lim, HO
Ogura, Y
Takanishi, A
AF Lim, Hun-ok
Ogura, Y.
Takanishi, Atsuo
TI Locomotion pattern generation and mechanisms of a new biped walking
machine
SO PROCEEDINGS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING
SCIENCES
LA English
DT Article
DE biped walking machine; humanoid robot; knee-stretched motion; locomotion
pattern; compensatory motion
ID IMPEDANCE CONTROL; ROBOT; POINT
AB This paper describes the mechanism of a 16 d.f. biped walking machine, Waseda
biped humanoid robot-2 lower limb (WABIAN- 2LL), which has two 7 d.f. legs and a 2
d.f. waist actuated by DC servo motors with reduction gears. WABIAN-2LL is designed
with large movable angle ranges like those of a human. Its height and weight are
1200 mm and 40 kg, respectively. It is able to walk with its knees stretched using
the redundancy of the legs and to move around an object using a hip-bending motion
without touching the object. A knee-stretched locomotion pattern generation is also
proposed in this paper, which separately creates joint angles in a supporting and a
swinging phase. During knee-stretched walking, the joint rate of the knee
approaches infinity when the knee is fully stretched. This singularity problem is solved
by using the motion of the waist, not the posture of the trunk. The effectiveness
of the mechanisms and pattern generations of WABIAN-2LL is verified through dynamic
walking experiments.
C1 [Lim, Hun-ok] Kanagawa Univ, Fac Engn, Dept Mech Engn, Yokohama, Kanagawa
2218686, Japan.
[Ogura, Y.; Takanishi, Atsuo] Waseda Univ, Humanoid Robot Inst, Shinjuku Ku,
Tokyo 1698555, Japan.
RP Lim, HO (reprint author), Kanagawa Univ, Fac Engn, Dept Mech Engn, 3-27-1
Rokkakubashi, Yokohama, Kanagawa 2218686, Japan.
EM holim@kanagawa-u.ac.jp
CR Fujiwara K., 2003, Proceedings 2003 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), P1920
FURUTA T, 2000, CD ROM P IEEE RAS IN
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
HEMAMI H, 1979, IEEE T AUTOMAT CONTR, V24, P526, DOI 10.1109/TAC.1979.1102105
HIROSE M, 2001, P ADV SCI I 2001, P1
ISHIDA T, 2002, P 3 IARP INT WORKSH, P116
KAGAMI S, 2001, P 2001 IEEE RAS INT, P253
Kaneko K, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P38, DOI 10.1109/ROBOT.2002.1013336
Kato I., 1973, P CISM IFTOMM S THEO, P12
Kim JY, 2005, P IEEE INT C ROB AUT, P1443
LIM H, 2002, P IEEE INT C ROB AUT, P3111
Lim HO, 2004, ADV ROBOTICS, V18, P415, DOI 10.1163/156855304773822491
Lim HO, 2004, ROBOTICA, V22, P577, DOI 10.1017/S0263574704000372
LOFFLER K, 2003, P IEEE INT C ROB AUT, P484
OH SY, 1984, J ROBOTIC SYST, V1, P235, DOI 10.1002/rob.4620010303
PARK J, 2001, IEEE T ROBOTIC AUTOM, V17, P882, DOI DOI 10.1109/70.976014
Park JH, 2003, FUZZY SET SYST, V134, P189, DOI 10.1016/S0165-0114(02)00237-3
Park JH, 2001, IEEE T ROBOTIC AUTOM, V17, P870, DOI 10.1109/70.976014
Popovic MB, 2005, INT J ROBOT RES, V24, P1013, DOI 10.1177/0278364905058363
Press WH, 1998, NUMERICAL RECIPES C
Raibert M.H., 1986, LEGGED ROBOTS BALANC
Setiawan S, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P361, DOI 10.1109/ROBOT.1999.770005
Takanishi A., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P323, DOI 10.1109/IROS.1990.262408
Takanishi A., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P795, DOI 10.1109/IROS.1990.262498
TAKANISHI A, 1989, P INT C ADV ROB JUN, P299
WILSON P, 1963, HUMAN LIMBS THEIR SU
YAMAGUCHI J, 1999, P IEEE INT C ROB AUT
Yamane K., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P688, DOI 10.1109/ROBOT.2000.844132
YOSHIDA K, 1991, PRINCIPLE STRUCTURAL
NR 29
TC 2
Z9 6
U1 0
U2 5
PU ROYAL SOC
PI LONDON
PA 6-9 CARLTON HOUSE TERRACE, LONDON SW1Y 5AG, ENGLAND
SN 1364-5021
EI 1471-2946
J9 P ROY SOC A-MATH PHY
JI Proc. R. Soc. A-Math. Phys. Eng. Sci.
PD JAN 8
PY 2008
VL 464
IS 2089
BP 273
EP 288
DI 10.1098/rspa.2007.1908
PG 16
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 242KN
UT WOS:000251724100015
OA gold
DA 2018-01-22
ER
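The singularity noted in the abstract above can be made concrete with a planar
two-link leg of equal thigh and shank length L: the hip-to-ankle distance is
d = 2L cos(q/2) for knee flexion q, so dq/dd = -1/(L sin(q/2)) and the knee rate
diverges as the knee stretches (q -> 0). The short numerical sketch below
illustrates this; the link length and stretching speed are assumed values, not the
robot's.

import math

# Planar two-link leg: thigh and shank of length L, knee flexion q
# (q = 0 means fully stretched). d = 2*L*cos(q/2), so
# dq/dt = -d_dot / (L*sin(q/2)), which diverges as q -> 0.
L = 0.35  # link length [m] (assumed)

def knee_rate(d, d_dot):
    """Knee angular rate for a hip-to-ankle distance d changing at d_dot."""
    q = 2.0 * math.acos(min(d / (2.0 * L), 1.0))
    if q < 1e-6:
        return float("inf")                  # kinematic singularity
    return -d_dot / (L * math.sin(q / 2.0))

for d in (0.60, 0.66, 0.69, 0.699):          # leg stretching toward 2L = 0.70
    print(d, knee_rate(d, d_dot=0.1))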

PT J
AU Nishimoto, R
Namikawa, J
Tani, J
AF Nishimoto, Ryunosuke
Namikawa, Jun
Tani, Jun
TI Learning multiple goal-directed actions through self-organization of a
dynamic neural network model: A humanoid robot experiment
SO ADAPTIVE BEHAVIOR
LA English
DT Article
DE learning; actions; initial sensitivity; continuous-time recurrent neural
network; self -organization; humanoid
ID SYSTEMS PERSPECTIVE; MIRROR SYSTEM; MOBILE ROBOT; NAVIGATION; BEHAVIOR;
TIME
AB We introduce a model that accounts for cognitive mechanisms of learning and
generating multiple goal-directed actions. The model employs the novel idea of the
so-called "sensory forward model," which is assumed to function in inferior
parietal cortex for the generation of skilled behaviors in humans and monkeys. A
set of different goal-directed actions can be generated by the sensory forward
model by utilizing the initial sensitivity characteristics of its acquired forward
dynamics. The analyses of our robotics experiments show qualitatively how
generalization in learning can be achieved for situational variances, and how the
top-down intention toward a specific goal state can reconcile with the bottom-up
sensation from reality.
C1 [Nishimoto, Ryunosuke; Namikawa, Jun; Tani, Jun] RIKEN, Brain Sci Inst, Wako,
Saitama 3510198, Japan.
RP Tani, J (reprint author), RIKEN, Brain Sci Inst, 2-1 Hirosawa, Wako, Saitama
3510198, Japan.
EM tani@brain.riken.go.jp
CR ARBIB MA, 1981, HDB PHYSL NERVOUS SY, V2, P1448
BEER RD, 1995, ARTIF INTELL, V72, P173, DOI 10.1016/0004-3702(94)00005-L
Bianco R, 2004, ADAPT BEHAV, V12, P37, DOI 10.1177/105971230401200102
DEMIRIS J, 2002, IMITATION ANIMALS AR, P321
DOYA K, 1989, P 1989 INT JOINT C N, V1, P27
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Fogassi L, 2005, SCIENCE, V308, P662, DOI 10.1126/science.1106138
HEILMAN KM, 1973, BRAIN, V96, P861, DOI 10.1093/brain/96.4.861
Ito M, 2004, ADAPT BEHAV, V12, P93, DOI 10.1177/105971230401200202
Jordan M. I., 1986, P 8 ANN C COGN SCI S, P531
KAWATO M, 1987, BIOL CYBERN, V57, P169, DOI 10.1007/BF00364149
KHATIB O, 1986, INT J ROBOT RES, V5, P90, DOI 10.1177/027836498600500106
Liepmann H, 1920, ERGEBNISSE GESAMTEN, P516
Luria A. R., 1973, WORKING BRAIN
Nishimoto R, 2004, NEURAL NETWORKS, V17, P925, DOI 10.1016/j.neunet.2004.02.003
Oztop E, 2005, COGNITIVE BRAIN RES, V22, P129, DOI
10.1016/j.cogbrainres.2004.08.004
Oztop E, 2002, BIOL CYBERN, V87, P116, DOI 10.1007/s00422-002-0318-1
Paine RW, 2005, ADAPT BEHAV, V13, P211, DOI 10.1177/105971230501300303
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
Rumelhart D.E., 1986, PARALLEL DISTRIBUTED, V1, P318
Tani J, 1996, IEEE T SYST MAN CY B, V26, P421, DOI 10.1109/3477.499793
TANI J, 1994, NEURAL NETWORKS, V7, P553, DOI 10.1016/0893-6080(94)90112-0
Tani J, 2004, NEURAL NETWORKS, V17, P1273, DOI 10.1016/j.neunet.2004.05.007
Tani J, 1999, NEURAL NETWORKS, V12, P1131, DOI 10.1016/S0893-6080(99)00060-X
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
Wiggins S., 1990, INTRO APPL NONLINEAR
Williams RJ, 1989, NEURAL COMPUT, V1, P270, DOI 10.1162/neco.1989.1.2.270
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yamamoto T, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2467, DOI 10.1109/IRDS.2002.1041639
NR 30
TC 18
Z9 18
U1 0
U2 2
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 1059-7123
EI 1741-2633
J9 ADAPT BEHAV
JI Adapt. Behav.
PY 2008
VL 16
IS 2-3
BP 166
EP 181
DI 10.1177/1059712308089185
PG 16
WC Computer Science, Artificial Intelligence; Psychology, Experimental;
Social Sciences, Interdisciplinary
SC Computer Science; Psychology; Social Sciences - Other Topics
GA 281TK
UT WOS:000254521600006
DA 2018-01-22
ER

PT J
AU Nishide, S
Ogata, T
Tani, J
Komatani, K
Okuno, HG
AF Nishide, Shun
Ogata, Tetsuya
Tani, Jun
Komatani, Kazunori
Okuno, Hiroshi G.
TI Predicting object dynamics from visual images through active sensing
experiences
SO ADVANCED ROBOTICS
LA English
DT Article
DE active sensing; neural networks; dynamics; humanoid robot; object
manipulation
ID ROBOT
AB Prediction of dynamic features is an important task for determining the
manipulation strategies of an object. This paper presents a technique for
predicting dynamics of objects relative to the robot's motion from visual images.
During the training phase, the authors use the recurrent neural network with
parametric bias (RNNPB) to self-organize the dynamics of objects manipulated by the
robot into the PB space. The acquired PB values, static images of objects and robot
motor values are input into a hierarchical neural network to link the images to
dynamic features (PB values). The neural network extracts prominent features that
each induce object dynamics. For prediction of the motion sequence of an unknown
object, the static image of the object and robot motor value are input into the
neural network to calculate the PB values. By inputting the PB values into the
closed loop RNNPB, the predicted movements of the object relative to the robot
motion are calculated recursively. Experiments were conducted with the humanoid
robot Robovie-IIs pushing objects at different heights. The results of the
experiment on predicting the dynamics of target objects confirmed that the
technique is effective for predicting object dynamics. (C) Koninklijke Brill NV,
Leiden and The Robotics Society of Japan, 2008.
C1 [Nishide, Shun; Ogata, Tetsuya; Komatani, Kazunori; Okuno, Hiroshi G.] Kyoto
Univ, Grad Sch Informat, Dept Intelligence Sci & Technol, Sakyo Ku, Kyoto 6068501,
Japan.
[Tani, Jun] RIKEN, Brain Sci Inst, Wako, Saitama 3510198, Japan.
RP Nishide, S (reprint author), Kyoto Univ, Grad Sch Informat, Dept Intelligence
Sci & Technol, Sakyo Ku, Kyoto 6068501, Japan.
EM nishide@kuis.kyoto-u.ac.jp
OI Ogata, Tetsuya/0000-0001-7015-0379; Okuno, Hiroshi/0000-0002-8704-4318
CR AGRE PE, 1993, COGNITIVE SCI, V17, P61, DOI 10.1207/s15516709cog1701_4
BAJCSY R, 1988, P IEEE, V76, P996
Gibson J. J., 1979, ECOLOGICAL APPROACH
Hawkins J., 2004, INTELLIGENCE
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Jordan M., 1986, P 8 ANN C COGN SCI S, P513
Kohonen T., 1995, SPRINGER SERIES INFO, V30
MIYASHITA T, 2004, P IEEE RSJ INT C INT
NODA K, 2003, P IEEE INT C ROB AUT, P3565
OGATA T, 2005, P IEEE RSJ INT C INT, P160
Ogata T, 2007, IEEE INT CONF ROBOT, P2156, DOI 10.1109/ROBOT.2007.363640
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Rumelhart D.E., 1986, PARALLEL DISTRIBUTED, V1, P318
TAKAMUKU S, 2005, P IEEE ICDL OS TALK
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
NR 15
TC 11
Z9 11
U1 0
U2 3
PU VSP BV
PI LEIDEN
PA BRILL ACADEMIC PUBLISHERS, PO BOX 9000, 2300 PA LEIDEN, NETHERLANDS
SN 0169-1864
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2008
VL 22
IS 5
BP 527
EP 546
DI 10.1163/156855308X294879
PG 20
WC Robotics
SC Robotics
GA 312VO
UT WOS:000256700700003
DA 2018-01-22
ER
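The second stage described above links static object images and motor values to
the PB values self-organized by the RNNPB. A minimal sketch of such a mapping
network is given below; the feature dimensions, layer sizes and weights are
illustrative assumptions, and the predicted PB values would then drive the
closed-loop RNNPB as the abstract describes.

import numpy as np

# Sketch of a small feedforward network mapping static image features and a
# robot motor value to PB values (sizes and weights are assumptions).
rng = np.random.default_rng(0)
D_IMG, D_MOT, D_HID, D_PB = 16, 1, 12, 2
W1 = rng.normal(0, 0.1, (D_HID, D_IMG + D_MOT))
W2 = rng.normal(0, 0.1, (D_PB, D_HID))

def predict_pb(img_feat, motor):
    """Predict PB values for an unseen object from image and motor input."""
    x = np.concatenate([img_feat, motor])
    h = np.tanh(W1 @ x)
    return W2 @ h

pb = predict_pb(rng.normal(size=D_IMG), np.array([0.3]))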

PT J
AU Stasse, O
Verrelst, B
Wieber, PB
Vanderborght, B
Evrard, P
Kheddar, A
Yokoi, K
AF Stasse, Olivier
Verrelst, Bjorn
Wieber, Pierre-Brice
Vanderborght, Bram
Evrard, Paul
Kheddar, Abderrahmane
Yokoi, Kazuhito
TI Modular architecture for humanoid walking pattern prototyping and
experiments
SO ADVANCED ROBOTICS
LA English
DT Article
DE design pattern; walking pattern generator; humanoid robot
AB In this paper we describe the use of design patterns as a basis for creating
humanoid walking pattern generator software having a modular architecture. This
architecture enabled the rapid porting of several novel walking algorithms on a
full-size humanoid robot, HRP-2. The body of work currently available allows a
general software architecture to be extracted that supports interchange between
simulations and real experiments. The proposed architecture with the associated
design patterns is described together with several applications: a pattern
generator for a HRP-2 with passive toe joints, a pattern for dynamically stepping
over large obstacles and a new quadratic problem (QP) formulation for the
generation of the reference zero-momentum point. Thanks to the versatility and the
modularity of the proposed framework, the QP method was implemented and tested
within only 4 days. (C) Koninklijke Brill NV, Leiden and The Robotics
Society of Japan, 2008.
C1 [Stasse, Olivier; Evrard, Paul; Kheddar, Abderrahmane] AIST, CNRS ST2I, JRL,
Tsukuba, Ibaraki 3058568, Japan.
[Yokoi, Kazuhito] AIST, JRL, Tsukuba, Ibaraki 3058568, Japan.
[Verrelst, Bjorn; Vanderborght, Bram] Vrije Univ Brussel, B-1050 Brussels,
Belgium.
[Wieber, Pierre-Brice] INRIA, F-38334 Grenoble, France.
RP Stasse, O (reprint author), AIST, CNRS ST2I, JRL, Cent 2,Umezono 1-1-1, Tsukuba,
Ibaraki 3058568, Japan.
EM olivier.stasse@aist.go.jp
RI Vanderborght, Bram/A-1599-2008; Yokoi, Kazuhito/K-2046-2012; Stasse,
Olivier/E-6220-2010
OI Vanderborght, Bram/0000-0003-4881-9341; Yokoi,
Kazuhito/0000-0003-3942-2027;
CR Alexander C., 1977, PATTERN LANGUAGE TOW
Ando N., 2006, P SICE ICASE INT JOI, P2633
BECK K, 1989, P C OBJ OR PROGR SYS, P1
Beck K., 1987, CR8743
BROTEN G, 2006, ADV ROBOTIC SYST, V3, P11
Brugali D, 2002, IEEE T ROBOTIC AUTOM, V18, P487, DOI 10.1109/TRA.2002.802939
Fleury S., 1997, P IEEE RSJ INT C INT, V2, P842
FLUCKIGER L, 2007, P INT WORKSH SOFTW D
Gamma E., 1994, DESIGN PATTERNS ELEM
Harada K, 2006, INT J HUM ROBOT, V3, P1, DOI 10.1142/S0219843606000643
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
KAJITA S, 1992, IEEE T ROBOTIC AUTOM, V8, P431, DOI 10.1109/70.149940
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kajita S, 2007, IEEE INT CONF ROBOT, P3963, DOI 10.1109/ROBOT.2007.364087
Kanehiro F, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P24, DOI 10.1109/ROBOT.2002.1013334
KANEHIRO F, 2006, P HUM WORKSH HUM TEC
MEYER B, 2006, IEEE COMPUT, V39, P23
Morisawa M., 2006, P IEEE RAS INT C HUM, P581
NAGASAKA K, 2004, P IEEE INT C ROB AUT, V4, P3189
Neo ES, 2005, IEEE-ASME T MECH, V10, P546, DOI 10.1109/TMECH.2005.856112
NISHIWAKI K, 2006, P INT S EXP ROB RIO, P156
NISHIWAKI K, 2006, P INT C HUM ROB GEN, P542
Sakagami Y., 2002, P IEEE RSJ INT C INT, V3, p2478 , DOI DOI
10.1109/IRDS.2002.1041641
Schittkowski K., 2005, QL FORTRAN CODE CONV
SELLAOUTI R, 2006, P IEEE RSJ INT C INT, P4909
Stasse O., 2006, P IEEE RSJ INT C INT, P348
VERRELST B, 2006, P IEEE RAS INT C HUM, P117
Wieber P. B., 2006, P IEEE RAS INT C HUM, P137, DOI DOI 10.1109/ICHR.2006.321375
NR 29
TC 5
Z9 5
U1 0
U2 4
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2008
VL 22
IS 6-7
BP 589
EP 611
DI 10.1163/156855308X305236
PG 23
WC Robotics
SC Robotics
GA 331SP
UT WOS:000258033200002
DA 2018-01-22
ER

PT J
AU Hale, JG
Hohl, B
Hyon, SH
Matsubara, T
Moraud, EM
Cheng, G
AF Hale, Joshua G.
Hohl, Benjamin
Hyon, Sang-Ho
Matsubara, Takamitsu
Moraud, Eduardo Martin
Cheng, Gordon
TI Highly Precise Dynamic Simulation Environment for Humanoid Robots
SO ADVANCED ROBOTICS
LA English
DT Article
DE Humanoid robot; simulation; contact; friction; constraints
ID PLATFORM
AB In this paper we present a simulation environment for humanoid robots with a
precise and efficient method of handling ground contact, and experiments
empirically validating the simulator. Highly accurate dynamic simulation is an
essential tool for research and development in humanoid robotics, and a simulator
should ideally provide a transparent interface with pathways for control and
sensing information identical to those of the actual robot(s) it models. We
identified ground contact as the chief source of divergence from reality in work
to date and have tackled this problem by developing an algorithm for resolving
ground contact for humanoid robots. Our objective was to produce an algorithm that
is accurate, efficient and easy to implement. The algorithm is general with respect
to the complexity of the foot model; is based on empirically measurable
characteristics of the foot-ground interaction, i.e., friction, which we have
obtained using the experiments described; provides an exact implementation of the
Coulomb friction model (avoiding polyhedral approximation of the friction cone);
runs in real time; is amenable to a straightforward accuracy-speed trade-off; and
is relatively easy to implement as a constraint selection method. The simulation
environment embodies generality, and we have applied it to two different humanoid
robots, Hoap-2 and CB. We present experiments comparing the results of simulation
with identical motions performed by real robots, and comparing the full contact
resolution algorithm, a modification trading accuracy for computational speed, and
a penalty-based method. (C) Koninklijke Brill NV, Leiden and The Robotics Society
of Japan, 2008.
C1 [Hale, Joshua G.; Hyon, Sang-Ho; Cheng, Gordon] Japan Sci & Technol Agcy, ICORP
Computat Brain Project, Kawaguchi, Saitama 3510198, Japan.
[Hale, Joshua G.; Hohl, Benjamin; Hyon, Sang-Ho; Matsubara, Takamitsu; Moraud,
Eduardo Martin; Cheng, Gordon] ATR Computat Neurosci Labs, Kyoto 6190288, Japan.
[Hohl, Benjamin] Univ Karlsruhe, Inst Informat Technol, D-76131 Karlsruhe,
Germany.
[Matsubara, Takamitsu] Nara Inst Sci & Technol, Nara 6300101, Japan.
[Moraud, Eduardo Martin] Ecole Natl Super Mines, F-75006 Paris, France.
RP Hale, JG (reprint author), Japan Sci & Technol Agcy, ICORP Computat Brain
Project, 4-1-8 Honcho, Kawaguchi, Saitama 3510198, Japan.
EM jhale@atr.jp
OI Cheng, Gordon/0000-0003-0770-8717
CR BARAFF D, 1994, P SIGGRAPH 94, P23, DOI DOI 10.1145/192161.192168
Bertsekas D. P., 2003, CONVEX ANAL OPTIMIZA
Buschmann T, 2006, IEEE INT CONF ROBOT, P2673
Chatterjee A, 1999, NONLINEAR DYNAM, V20, P159, DOI 10.1023/A:1008397905242
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Featherstone R, 1987, ROBOT DYNAMICS ALGOR
Fill JA, 2000, SIAM J MATRIX ANAL A, V21, P629, DOI 10.1137/S0895479897329692
Guendelman E, 2003, ACM T GRAPHIC, V22, P871, DOI 10.1145/882262.882358
HALE JG, 2006, P SCA 2006 EUR ACM S, P27
Hollars M. G., 1994, SD FAST USERS MANUAL
Hyon S., 2006, P IEEE RSJ INT C INT, P4915
HYON S, 2006, P IEEE RAS INT C HUM, P214
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KIMURA K, 2003, P 21 ANN C ROB SOC J, pIG29
LLOYD JE, 2005, P IEEE C ROBOTICS AU, P4549
Matsubara T, 2007, IEEE INT CONF ROBOT, P2688, DOI 10.1109/ROBOT.2007.363871
MAYER NM, 2006, ROBOCUP 2006, P22
MIRTICH B, 1998, TR9801 MERL
OROURKE J, 1994, COMPUTATIONAL GEOMET
Schmidl H, 2004, IEEE T VIS COMPUT GR, V10, P189, DOI 10.1109/TVCG.2004.1260770
Stewart D., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P162, DOI 10.1109/ROBOT.2000.844054
SUGIHARA T, 2002, P IEEE RSJ INT C INT, V3, P2575
Trinkle JC, 1997, Z ANGEW MATH MECH, V77, P267, DOI 10.1002/zamm.19970770411
Van de Panne M., 1993, Computer Graphics Proceedings, P335
Wilhelms J., 1988, Visual Computer, V4, P283, DOI 10.1007/BF01908875
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
Yamane K., 1999, Proceedings 1999 IEEE International Conference on Robotics and
Automation (Cat. No.99CH36288C), P714, DOI 10.1109/ROBOT.1999.770059
Yamane K, 2006, IEEE INT CONF ROBOT, P1904, DOI 10.1109/ROBOT.2006.1641984
COULOMB FRICTION
[Anonymous], USER DATAGRAM PROTOC
NR 30
TC 3
Z9 4
U1 0
U2 5
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2008
VL 22
IS 10
SI SI
BP 1075
EP 1105
DI 10.1163/156855308X324776
PG 31
WC Robotics
SC Robotics
GA 398DI
UT WOS:000262712400004
DA 2018-01-22
ER
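The exact Coulomb friction model mentioned above (no polyhedral approximation of
the friction cone) amounts to keeping the tangential contact force inside a
circular cone of radius mu times the normal force. A minimal test and projection
are sketched below; the friction coefficient is an assumed placeholder rather than
the empirically measured value used in the paper.

import math

MU = 0.8  # friction coefficient (assumed placeholder)

def inside_friction_cone(f_normal, f_tan_x, f_tan_y):
    """Exact (circular) Coulomb cone test: ||f_t|| <= mu * f_n."""
    return math.hypot(f_tan_x, f_tan_y) <= MU * f_normal

def clamp_to_cone(f_normal, f_tan_x, f_tan_y):
    """Project a tangential force back onto the cone boundary (sliding)."""
    norm = math.hypot(f_tan_x, f_tan_y)
    limit = MU * f_normal
    if norm <= limit or norm == 0.0:
        return f_tan_x, f_tan_y
    scale = limit / norm
    return f_tan_x * scale, f_tan_y * scale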

PT J
AU Narioka, K
Hosoda, K
AF Narioka, Kenichi
Hosoda, Koh
TI Designing Synergistic Walking of a Whole-Body Humanoid Driven by
Pneumatic Artificial Muscles: An Empirical Study
SO ADVANCED ROBOTICS
LA English
DT Article
DE Synergistic motion; three-dimensional biped walking; humanoid robot;
McKibben pneumatic artificial muscle; incremental design approach
ID PASSIVE-DYNAMIC WALKING
AB Our body consists of many body parts that are compliantly connected with each
other by muscles and ligaments, and their behavior emerges out of the synergy of
the whole-body dynamics. Such synergistic behavior generation is supposed to
contribute to human adaptive movement such as walking. This paper describes
designing synergistic walking of a whole-body humanoid robot whose joints are
antagonistically driven by pneumatic artificial muscles. We propose to take an
incremental design approach to deal with the complicated dynamics of the system. As
a result, we can determine control parameters that govern whole-body behavior. We
experimentally demonstrate that the humanoid walks stably with a simple limit-cycle
controller. (C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan,
2008.
C1 [Narioka, Kenichi; Hosoda, Koh] Osaka Univ, Fac Engn, JST Dept Adapt Machine
Syst, Asada Synergist Intelligence Project,ERATO, Osaka, Japan.
RP Narioka, K (reprint author), Osaka Univ, Fac Engn, JST Dept Adapt Machine Syst,
Asada Synergist Intelligence Project,ERATO, Osaka, Japan.
EM kenichi.narioka@ams.eng.osaka-u.ac.jp
CR Asano F, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2637, DOI 10.1109/IRDS.2002.1041668
Bernstein N. A., 1996, DEXTERITY ITS DEV
Brown IE, 2000, BIOMECHANICS AND NEURAL CONTROL OF POSTURE AND MOVEMENT, P148
Caldwell DG, 1998, IEEE INT CONF ROBOT, P3053, DOI 10.1109/ROBOT.1998.680894
Caldwell DG, 1997, IEEE INT CONF ROBOT, P799, DOI 10.1109/ROBOT.1997.620132
Collins SH, 2001, INT J ROBOT RES, V20, P607, DOI 10.1177/02783640122067561
FULL RJ, 2000, ROBOTICS RES, V9, P337
Garcia-Arraras JE, 1998, J EXP ZOOL, V281, P288, DOI 10.1002/(SICI)1097-
010X(19980701)281:4<288::AID-JEZ5>3.0.CO;2-K
GOSWAMI S, 1996, P IEEE INT C ROB AUT, P246
Hase K, 1997, JSME INT J C-MECH SY, V40, P25, DOI 10.1299/jsmec1993.40.25
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
HOSODA K, 2005, P 3 INT S AD MOT AN
Hosoda K., 2006, P IEEE RAS INT C HUM, P284
Ishiguro A, 2006, ADAPTIVE MOTION OF ANIMALS AND MACHINES, P107, DOI 10.1007/4-
431-31381-8_10
Kajita S, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P31, DOI 10.1109/ROBOT.2002.1013335
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
MIZUUCHI I, 2005, P IEEE RAS INT C HUM, P339
Ogura Y, 2005, P 2005 IEEE INT C RO, P605
Pfeifer R., 1999, UNDERSTANDING INTELL
Sugimoto Y., 2002, P 5 INT C CLIMB WALK, P123
Takuma T, 2006, INT J ROBOT RES, V25, P861, DOI 10.1177/0278364906069187
TOMITA N, 2004, P 9 INT S ART LIF RO, P359
van der Linde RQ, 1999, IEEE T ROBOTIC AUTOM, V15, P599, DOI 10.1109/70.781963
Verrelst B, 2005, AUTON ROBOT, V18, P201, DOI 10.1007/s10514-005-0726-x
Wisse M., 2006, P IEEE RAS INT C HUM, P110
Wisse M., 2003, P 2 INT S AD MOT AN
WISSE M, 2004, P IEEE RAS RSJ INT C
NR 27
TC 19
Z9 20
U1 2
U2 11
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2008
VL 22
IS 10
SI SI
BP 1107
EP 1123
DI 10.1163/156855308X324811
PG 17
WC Robotics
SC Robotics
GA 398DI
UT WOS:000262712400005
DA 2018-01-22
ER

PT J
AU Matsubara, T
Morimoto, J
Nakanishi, J
Hyon, SH
Hale, JG
Cheng, G
AF Matsubara, Takamitsu
Morimoto, Jun
Nakanishi, Jun
Hyon, Sang-Ho
Hale, Joshua G.
Cheng, Gordon
TI Learning to Acquire Whole-Body Humanoid Center of Mass Movements to
Achieve Dynamic Tasks
SO ADVANCED ROBOTICS
LA English
DT Article
DE Reinforcement learning; humanoid robot; whole-body movement;
policy-gradient method
AB This paper presents a novel approach for acquiring dynamic whole-body movements
on humanoid robots focused on learning a control policy for the center of mass
(CoM). In our approach, we combine both a model-based CoM controller and a model-
free reinforcement learning (RL) method to acquire dynamic whole-body movements in
humanoid robots. (i) To cope with high dimensionality, we use a model-based CoM
controller as a basic controller that derives joint angular velocities from the
desired CoM velocity; balancing is also handled within this controller.
(ii) The RL method is used to acquire a controller that generates the desired CoM
velocity based on the current state. To demonstrate the effectiveness of our
approach, we apply it to a ball-punching task on a simulated humanoid robot model.
The acquired whole-body punching movement was also demonstrated on Fujitsu's Hoap-2
humanoid robot. (C) Koninklijke Brill NV, Leiden and The Robotics Society of Japan,
2008
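
The two-level scheme described in the abstract (a learned policy that outputs a
desired CoM velocity on top of a model-based controller that converts it to joint
velocities) can be pictured roughly as below. This is a hedged toy example with an
assumed constant CoM Jacobian and a crude REINFORCE-style update, not the
controllers or learning algorithm used in the paper.

import numpy as np

rng = np.random.default_rng(0)
n_joints, n_state = 4, 3
J_com = rng.normal(size=(3, n_joints))   # assumed constant CoM Jacobian (toy stand-in)
theta = np.zeros((3, n_state))           # linear policy: state -> mean desired CoM velocity
sigma, alpha = 0.1, 1e-3                 # exploration noise and learning rate (arbitrary)

def com_controller(v_com_des):
    # Model-based layer: desired CoM velocity -> joint velocities (least-squares inverse).
    return np.linalg.pinv(J_com) @ v_com_des

def rollout(theta, steps=50):
    # One toy episode; the reward favours a target CoM velocity and penalises joint motion.
    grad, ret = np.zeros_like(theta), 0.0
    s = rng.normal(size=n_state)
    for _ in range(steps):
        mu = theta @ s
        v_des = mu + sigma * rng.normal(size=3)          # sample from the stochastic policy
        qdot = com_controller(v_des)                     # low-level model-based controller
        reward = -np.sum((v_des - np.array([0.2, 0.0, 0.0]))**2) - 0.01 * np.sum(qdot**2)
        grad += np.outer((v_des - mu) / sigma**2, s)     # log-likelihood gradient (Gaussian policy)
        ret += reward
        s = 0.9 * s + 0.1 * qdot[:n_state]               # toy state transition
    return ret, grad

for _ in range(200):
    ret, grad = rollout(theta)
    theta += alpha * (ret / 50.0) * grad                 # crude return-weighted policy-gradient update
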
C1 [Matsubara, Takamitsu] Nara Inst Sci & Technol, Nara 6300101, Japan.
[Matsubara, Takamitsu; Morimoto, Jun; Nakanishi, Jun; Hyon, Sang-Ho; Hale,
Joshua G.; Cheng, Gordon] ATR Computat Neurosci Labs, Kyoto 6190288, Japan.
[Morimoto, Jun; Nakanishi, Jun; Hyon, Sang-Ho; Hale, Joshua G.; Cheng, Gordon]
Japan Sci & Technol Agcy, ICORP Computat Brain Project, Kawaguchi, Saitama 3510198,
Japan.
RP Matsubara, T (reprint author), Nara Inst Sci & Technol, 8916-5 Takayama Cho,
Nara 6300101, Japan.
EM takam-m@is.naist.jp
OI Cheng, Gordon/0000-0003-0770-8717
CR Aberdeen D., 2003, THESIS AUSTR NATL U
Baxter J, 2001, J ARTIF INTELL RES, V15, P319
BAXTER J, 1999, DIRECT GRADIENT BASE, V1
Boulic R., 1994, P EUR WORKSH COMB RE
Doya K, 2000, NEURAL COMPUT, V12, P219, DOI 10.1162/089976600300015961
ENDO G, 2005, P 20 NAT C ART INT 9, P1267
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hyon S., 2006, P IEEE RSJ INT C INT, P4915
Kagami S, 2001, ALGORITHMIC AND COMPUTATIONAL ROBOTICS: NEW DIRECTIONS, P329
KAJITA S, 2003, YIEEE RSJ INT C INT, P1644
Kimura H, 2001, IEEE DECIS CONTR P, P411, DOI 10.1109/CDC.2001.980135
Kimura H., 1998, P 15 INT C MACH LEAR, P278
Kimura H., 1997, P 14 INT C MACH LEAR, P152
Kuroki Y., 2001, P IEEE RAS INT C HUM, P181
MATSUBARA T, 2005, P 2005 IEEE INT C RO, P4175
Morimoto J, 2001, ROBOT AUTON SYST, V36, P37, DOI 10.1016/S0921-8890(01)00113-0
Morimoto J, 2006, IEEE INT CONF ROBOT, P1579, DOI 10.1109/ROBOT.2006.1641932
NAGASAKA K, 2000, THESIS U TOKYO
Scholz JP, 1999, EXP BRAIN RES, V126, P289, DOI 10.1007/s002210050738
Sugihara T., 2002, Proceedings IEEE/RSJ International Conference on Intelligent
Robots and Systems (Cat. No.02CH37332C), P2575, DOI 10.1109/IRDS.2002.1041658
Sutton RS, 2000, ADV NEUR IN, V12, P1057
Sutton RS, 1998, REINFORCEMENT LEARNI
Tedrake R., 2004, P IEEE INT C INT ROB, P2849
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
WILLIAMS RJ, 1992, MACH LEARN, V8, P229, DOI 10.1007/BF00992696
Yoshikawa T., 1990, FDN ROBOTICS ANAL CO
NR 26
TC 4
Z9 4
U1 0
U2 5
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2008
VL 22
IS 10
SI SI
BP 1125
EP 1142
DI 10.1163/156855308X324785
PG 18
WC Robotics
SC Robotics
GA 398DI
UT WOS:000262712400006
DA 2018-01-22
ER

PT J
AU Morimoto, J
Endo, G
Cheng, G
AF Morimoto, Jun
Endo, Gen
Cheng, Gordon
TI Using a synchronization mechanism for humanoid locomotion
SO NEUROSCIENCE RESEARCH
LA English
DT Meeting Abstract
C1 [Morimoto, Jun; Cheng, Gordon] JST ICORP, Computat Brain Project, Saitama,
Japan.
[Morimoto, Jun; Cheng, Gordon] ATR, Computat Neurosci Labs, Kyoto, Japan.
[Endo, Gen] Tokyo Inst Technol, Dept Mech & Aerosp Engn, Tokyo 152, Japan.
NR 0
TC 0
Z9 0
U1 0
U2 0
PU ELSEVIER IRELAND LTD
PI CLARE
PA ELSEVIER HOUSE, BROOKVALE PLAZA, EAST PARK SHANNON, CO, CLARE, 00000,
IRELAND
SN 0168-0102
J9 NEUROSCI RES
JI Neurosci. Res.
PY 2008
VL 61
BP S184
EP S184
PG 1
WC Neurosciences
SC Neurosciences & Neurology
GA 381PJ
UT WOS:000261548101244
DA 2018-01-22
ER

PT J
AU Komiya, I
Torii, H
Fujii, Y
Hayashizaki, N
AF Komiya, Izumi
Torii, Hiroyuki
Fujii, Yasuhiko
Hayashizaki, Noriyosu
TI Relationship between students' interests in science and attitudes toward
nuclear power generation
SO PROGRESS IN NUCLEAR ENERGY
LA English
DT Article; Proceedings Paper
CT COE-INES 2nd International Symposium on Innovative Nuclear Energy
Systems for Sustainable Development of the World
CY NOV 26-30, 2006
CL Yokohama, JAPAN
SP COE-INES, Tokyo Inst Technol, Res Lab Nucl Reactors, Tokyo Inst Technol, Ctr Res
Innovat Nucl Energy Syst, Atom Energy Soc Japan
DE students' interests in science; attitudes toward nuclear power
generation
AB In order to study the following two points, we conducted an attitude survey
among senior high school students.
Study 1 The differences in attitudes between nuclear power generation and other
science and technologies.
Study 2 The relationship between students' interests in science and attitudes
toward nuclear power generation.
In the questionnaire, the attitudes toward nuclear power generation consisted of
four questions: (1) pros and cons, (2) safety, (3) necessity, (4) reliability of
scientists and engineers who are involved in nuclear power; and we treat four
science and technology issues: (1) genetically modified foods, (2) nuclear power
generation, (3) humanoid and pet robots, (4) clone technology.
From study 1, regarding the safety of nuclear power generation, about 80% of
respondents answered negatively, whereas regarding its necessity, about 75% of
respondents answered positively. Therefore, we found that the structure of the
attitude was complicated and specific to nuclear power generation.
From study 2, we identified the students' interests in science that influence the
attitude toward nuclear power generation. (C) 2007 Published by Elsevier Ltd.
C1 [Komiya, Izumi; Torii, Hiroyuki; Fujii, Yasuhiko; Hayashizaki, Noriyosu] Tokyo
Inst Technol, Nucl Reactors Res Lab, Meguro Ku, Tokyo 1528550, Japan.
RP Komiya, I (reprint author), Tokyo Inst Technol, Nucl Reactors Res Lab, Meguro
Ku, Tokyo 1528550, Japan.
EM komiya@jst.go.jp
RI Hayashizaki, Noriyosu/C-3448-2015
OI Hayashizaki, Noriyosu/0000-0002-8245-7869
CR SHIMOOKA H, 1993, J ATOMIC ENERGY SOC, V35
SHINOZAKI K, 2005, JAPANESE J RISK ANAL, V15, P55
Tanaka Y, 1995, JAPANESE J EXPT SOCI, V35, P111
NR 3
TC 4
Z9 5
U1 1
U2 9
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0149-1970
J9 PROG NUCL ENERG
JI Prog. Nucl. Energy
PY 2008
VL 50
IS 2-6
BP 719
EP 727
DI 10.1016/j.pnucene.2007.11.068
PG 9
WC Nuclear Science & Technology
SC Nuclear Science & Technology
GA 282WS
UT WOS:000254598500112
DA 2018-01-22
ER

PT J
AU Yokoya, R
Ogata, T
Tani, J
Komatani, K
Okuno, HG
AF Yokoya, Ryunosuke
Ogata, Tetsuya
Tani, Jun
Komatani, Kazunori
Okuno, Hiroshi G.
TI Experience-based imitation using RNNPB
SO ADVANCED ROBOTICS
LA English
DT Article
DE imitation; active sensing; humanoid robot; recurrent neural network
ID ROBOT
AB Robot imitation is a useful and promising alternative to robot programming.
Robot imitation involves two crucial issues. The first is how a robot can imitate a
human whose physical structure and properties differ greatly from its own. The
second is how the robot can generate various motions from finite programmable
patterns (generalization). This paper describes a novel approach to robot imitation
based on its own physical experiences. We considered the target task of moving an
object on a table. For imitation, we focused on an active sensing process in which
the robot acquires the relation between the object's motion and its own arm motion.
For generalization, we applied the RNNPB (recurrent neural network with parametric
bias) model to enable recognition/generation of imitation motions. The robot
associates the object motion presented by a human operator with the arm motion that
reproduces it. Experimental results demonstrated the generalization capability of
our method, which enables the robot to imitate not only motion it has experienced,
but also unknown motion through nonlinear combination of the experienced motions.
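
A minimal numpy sketch of the RNNPB idea referenced above is given below: an
Elman-style recurrent network whose hidden layer also receives a parametric-bias
(PB) vector, so that different PB values select different generated motion patterns.
The network here is untrained and all sizes are arbitrary; recognition (fitting the
PB vector to an observed sequence by gradient descent) is only indicated in a comment.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_pb, n_out = 2, 8, 2, 2
W_in  = rng.normal(scale=0.5, size=(n_hid, n_in))
W_pb  = rng.normal(scale=0.5, size=(n_hid, n_pb))    # parametric-bias weights
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))
W_out = rng.normal(scale=0.5, size=(n_out, n_hid))

def generate(pb, x0, steps=20):
    # Closed-loop generation: the previous output is fed back as the next input,
    # while the fixed PB vector modulates the hidden dynamics.
    h, x, traj = np.zeros(n_hid), x0, []
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_pb @ pb + W_rec @ h)
        x = np.tanh(W_out @ h)
        traj.append(x)
    return np.array(traj)

# Different PB vectors select different (here untrained, hence arbitrary) patterns.
# During recognition, the PB vector would instead be fitted to an observed sequence
# by gradient descent on the prediction error (not shown).
motion_a = generate(np.array([ 1.0, -1.0]), np.zeros(n_in))
motion_b = generate(np.array([-1.0,  1.0]), np.zeros(n_in))
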
C1 Kyoto Univ, Grad Sch Informat, Sakyo Ku, Kyoto 6068501, Japan.
RIKEN, Brain Sci Inst, Wako, Saitama 35101, Japan.
RP Yokoya, R (reprint author), Kyoto Univ, Grad Sch Informat, Sakyo Ku, Kyoto
6068501, Japan.
EM yokoya@kuis.kyoto-u.ac.jp
OI Ogata, Tetsuya/0000-0001-7015-0379; Okuno, Hiroshi/0000-0002-8704-4318
CR Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Nakazawa A, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2539, DOI 10.1109/IRDS.2002.1041652
OGATA T, 2005, J JAPANESE SOC ARTIF, V20, P188
OGATA T, 2005, P IEEE RSJ INT C INT, P160
RAO R, 2005, IMITATION SOCIAL LEA, P217
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Rumelhart D.E., 1986, PARALLEL DISTRIBUTED, V1, P318
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
TANI J, 2002, NEUROCOMPUTING, V101, P49
NR 10
TC 14
Z9 14
U1 0
U2 3
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PD DEC
PY 2007
VL 21
IS 12
BP 1351
EP 1367
DI 10.1163/156855307781746106
PG 17
WC Robotics
SC Robotics
GA 213JN
UT WOS:000249663200002
DA 2018-01-22
ER

PT J
AU Hyon, SH
Hale, JG
Cheng, G
AF Hyon, Sang-Ho
Hale, Joshua G.
Cheng, Gordon
TI Full-body compliant human-humanoid interaction: Balancing in the
presence of unknown external forces
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article; Proceedings Paper
CT 1st International Conference on Human-Robot Interaction
CY MAR 02-04, 2006
CL Salt Lake City, UT
SP IEEE Robot & Automat Soc
DE balance; biped robot; compliance; force control; full-body motion
control; human-humanoid interaction; passivity; redundancy
ID ROBOT
AB This paper proposes an effective framework of human-humanoid robot physical
interaction. Its key component is a new control technique for full-body balancing
in the presence of external forces, which is presented and then validated
empirically. We have adopted an integrated system approach to develop humanoid
robots. Herein, we describe the importance of replicating human-like capabilities
and responses during human-robot interaction in this context. Our balancing
controller provides gravity compensation, making the robot passive and thereby
facilitating safe physical interactions. The method operates by setting an
appropriate ground reaction forces and transforming these forces into full-body
joint torques. It handles an arbitrary number of force interaction points on the
robot. It does not require force measurement at the contact points of interest. It
requires neither inverse kinematics nor inverse dynamics. It can adapt to uneven
ground surfaces. It operates as a force control process and can, therefore,
accommodate simultaneous control processes using force-, velocity-, or position-
based control. Forces are distributed over supporting contact points in an optimal
manner. Joint redundancy is resolved by damping injection in the context of
passivity. We present various force interaction experiments using our full-sized
bipedal humanoid platform, including compliant balance, even when affected by
unknown external forces, which demonstrates the effectiveness of the method.
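
The core force-to-torque idea in the abstract (set a gravity-compensating ground
reaction force, distribute it over the contact points, and map the contact forces to
joint torques through contact Jacobian transposes, with damping injection for
redundancy) can be sketched as follows. All matrices and gains are toy stand-ins,
not the controller of the paper.

import numpy as np

rng = np.random.default_rng(2)
n_joints = 6
mass, g = 60.0, 9.81                                      # invented robot mass
J_contacts = [rng.normal(size=(3, n_joints)) for _ in range(2)]  # two support feet (toy Jacobians)

def distribute_force(f_total, n_contacts):
    # Simplest stand-in for optimal force distribution: split the required force evenly.
    return [f_total / n_contacts] * n_contacts

def balance_torques(qdot, damping=2.0):
    f_total = np.array([0.0, 0.0, mass * g])              # gravity-compensating ground reaction force
    forces = distribute_force(f_total, len(J_contacts))
    tau = sum(J.T @ f for J, f in zip(J_contacts, forces))  # contact forces -> joint torques
    tau -= damping * qdot                                  # damping injection for joint redundancy
    return tau

tau = balance_torques(np.zeros(n_joints))
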
C1 ICORP, JST, Computat Brain Project, Saitama 3320012, Japan.
ATR, Kyoto 6190288, Japan.
RP Hyon, SH (reprint author), ICORP, JST, Computat Brain Project, Saitama 3320012,
Japan.
EM sangho@atr.jp; jhale@atr.jp; gordon@atr.jp
OI Cheng, Gordon/0000-0003-0770-8717
CR Albu-Schaffer A, 2007, INT J ROBOT RES, V26, P23, DOI 10.1177/0278364907073776
Arimoto S, 2005, ADV ROBOTICS, V19, P401, DOI 10.1163/1568553053662555
ARIMOTO S, 2005, P IEEE INT C ROB AUT, P4500
Asada M, 2001, ROBOT AUTON SYST, V37, P185, DOI 10.1016/S0921-8890(01)00157-9
Breazeal C, 2002, DESIGNING SOCIABLE R
Cheng GD, 2001, ROBOT AUTON SYST, V37, P161, DOI 10.1016/S0921-8890(01)00156-7
Cheng G, 2007, ADV ROBOTICS, V21, P1097, DOI 10.1163/156855307781389356
Featherstone R, 1987, ROBOT DYNAMICS ALGOR
GREENWOOD DT, CLASSICAL DYNAMICS
Gregory RL, 1981, MIND SCI
Hale JG, 2005, IEEE T SYST MAN CY C, V35, P512, DOI 10.1109/TSMCC.2004.840063
HALE JG, 2006, P SCA 2006 EUR JACM
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hyon S., 2006, P IEEE RSJ INT C INT, P4915
HYON S, 2006, P IEEE RAS INT C HUM, P214
Hyon SH, 2007, IEEE INT CONF ROBOT, P2668, DOI 10.1109/ROBOT.2007.363868
Hyon SH, 2006, ADV ROBOTICS, V20, P93, DOI 10.1163/156855306775275521
ISHIGURO H, 2005, P 36 INT S ROB BOST
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
KATO I, 1987, P IEEE INT C ROB AUT, V4, P90
KAWATO M, UNPUB PHILOS T ROY S
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
KUNIYOSHI Y, 1997, P IEEE RSJ INT C INT, V2, P811
Murray RM, 1994, MATH INTRO ROBOTIC M
Nagasaka K., 1999, P IEEE INT C SYST MA, V6, P908
Oztop E., 2005, INT J HUM ROBOT, V2, P537, DOI DOI 10.1142/S0219843605000582
Pratt J, 2001, INT J ROBOT RES, V20, P129, DOI 10.1177/02783640122067309
Sciavicco L., 1996, MODELLING CONTROL RO
Sentis L., 2005, INT J HUM ROBOT, V2, P505, DOI DOI 10.1142/S0219843605000594
Setiawan S, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P361, DOI 10.1109/ROBOT.1999.770005
SUGIHARA T, 2002, P IEEE RSJ INT C INT, V3, P2575
Takanishi A., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P795, DOI 10.1109/IROS.1990.262498
TAKEGAKI M, 1981, T ASME, V102, P119
VANDERSCHAFT AJ, 1999, GAIN PASSIVITY TECHI
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Yamaguchi J, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P368, DOI 10.1109/ROBOT.1999.770006
NR 36
TC 151
Z9 151
U1 3
U2 36
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD OCT
PY 2007
VL 23
IS 5
BP 884
EP 898
DI 10.1109/TRO.2007.904896
PG 15
WC Robotics
SC Robotics
GA 220SP
UT WOS:000250177900006
DA 2018-01-22
ER

PT J
AU Kanda, T
Sato, R
Saiwaki, N
Ishiguro, H
AF Kanda, Takayuki
Sato, Rumi
Saiwaki, Naoki
Ishiguro, Hiroshi
TI A two-month field trial in an elementary school for long-term
human-robot interaction
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article; Proceedings Paper
CT 1st International Conference on Human-Robot Interaction
CY MAR 02-04, 2006
CL Salt Lake City, UT
SP IEEE Robot & Automat Soc
DE field trial; friendship estimation; human-robot interaction; long-term
interaction; longitudinal study
AB Interactive robots participating in our daily lives should have the fundamental
ability to socially communicate with humans. In this paper, we propose a mechanism
for two social communication abilities: forming long-term relationships and
estimating friendly relationships among people. The mechanism for long-term
relationships is based on three principles of behavior design. The robot we
developed, Robovie, is able to interact with children in the same way as children
do. Moreover, the mechanism is designed for long-term interaction along the
following three design principles: 1) it calls children by name using radio
frequency identification tags; 2) it adapts its interactive behaviors for each
child based on a pseudo-development mechanism; and 3) it confides its personal
matters to the children who have interacted with the robot for an extended period
of time. Regarding the estimation of friendly relationships, the robot assumes
that people who spontaneously behave as a group together are friends. Then, by
identifying each person in the interacting group around the robot, it estimates the
relationships between them. We conducted a two-month field trial at an elementary
school. An interactive humanoid robot, Robovie, was placed in a classroom at the
school. The results of the field trial revealed that the robot successfully
continued interacting with many children for two months, and seemed to have
established friendly relationships with them. In addition, it demonstrated
reasonable performance in identifying friendships among children. We believe that
these results demonstrate the potential of current interactive robots to establish
social relationships with humans in our daily lives.
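
A hypothetical sketch of the friendship-estimation idea described above: children
identified as simultaneously present around the robot (e.g., via their RFID tags)
are counted as co-occurring, and pairs whose co-occurrence exceeds a threshold are
estimated to be friends. The threshold and data are invented; the paper's estimator
is more elaborate.

from collections import Counter
from itertools import combinations

def estimate_friendships(group_observations, min_count=3):
    # Count how often each pair of children is observed in the same group.
    pair_counts = Counter()
    for group in group_observations:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1
    return [pair for pair, c in pair_counts.items() if c >= min_count]

obs = [{"A", "B"}, {"A", "B", "C"}, {"A", "B"}, {"B", "C"}]
print(estimate_friendships(obs, min_count=3))   # -> [('A', 'B')]
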
C1 Adv Telecommun Res Inst Int, Intelligent Robot & Commun Lab, Kyoto 6190288,
Japan.
Nara Womens Univ, Nara 6308506, Japan.
RP Kanda, T (reprint author), Adv Telecommun Res Inst Int, Intelligent Robot &
Commun Lab, Kyoto 6190288, Japan.
EM kanda@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR Asher S.R., 1981, SOCIAL COMPETENCE, P125
Breazeal C., 1999, P 16 INT JOINT C ART, P1146
Burgard W., 1998, P NAT C ART INT, P11
COIE JD, 1983, CHILD DEV, V54, P1400, DOI 10.2307/1129803
Dautenhahn K., 2006, P 1 ACM SIGCHI SIGAR, P172
Fujita M, 2001, INT J ROBOT RES, V20, P781, DOI 10.1177/02783640122068092
Gockley R., 2006, P 1 ACM SIGCHI SIGAR, P168, DOI [10.1145/1121241.1121274, DOI
10.1145/1121241.1121274]
GOTTMAN JM, 1980, MINNESOTA S CHILD PS, V13, P197
Han J., 2005, P 14 IEEE INT WORKSH, P378, DOI DOI 10.1109/R0MAN.2005.1513808
Heider F, 1958, PSYCHOL INTERPERSONA
Ishiguro H, 2003, SPRINGER TRAC ADV RO, V6, P179
KANDA M, 1904, J COLL SCI IMP U TOK, V19, P1
Kanda T., 2003, P INT JOINT C ART IN, P177
KANDA T, 2003, P IEEE RSJ INT C INT, V2, P1657
KANDA T, 2004, P INT C IND ENG APPL, P402
Kanda T, 2006, INTERACT STUD, V7, P369, DOI 10.1075/is.7.3.12kan
Kozima H, 2001, ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P377, DOI
10.1109/ROMAN.2001.981933
KOZIMA H, P 2005 IEEE INT WORK, P341
Ladd G.W., 1990, PEER REJECTION CHILD, P90
McConnell S., 1986, CHILDRENS SOCIAL BEH, P215
Nakadai K., 2001, P INT JOINT C ART IN, P1425
NEWCOMB AF, 1995, PSYCHOL BULL, V117, P306, DOI 10.1037/0033-2909.117.2.306
Rubin K. H., 1999, DEV PSYCHOL ADV TXB, P451
Scassellati B., 2000, INVESTIGATING MODELS
Shibata T, 2004, P IEEE, V92, P1749, DOI 10.1109/JPROC.2004.835383
Shiomi M., 2006, P 1 ACM SIGCHI SIGAR, P305, DOI DOI 10.1145/1121241.1121293
Siegwart R, 2003, ROBOT AUTON SYST, V42, P203, DOI 10.1016/S0921-8890(02)00376-7
Tanaka F., 2006, P 1 ACM SIGCHI SIGAR, P3, DOI DOI 10.1145/1121241.1121245
WALDROP MF, 1975, CHILD DEV, V46, P19, DOI 10.2307/1128829
NR 29
TC 73
Z9 73
U1 1
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD OCT
PY 2007
VL 23
IS 5
BP 962
EP 971
DI 10.1109/TRO.2007.904904
PG 10
WC Robotics
SC Robotics
GA 220SP
UT WOS:000250177900013
DA 2018-01-22
ER

PT J
AU He, XY
Ogura, T
Satou, A
Hasegawa, O
AF He, Xiaoyuan
Ogura, Tomotaka
Satou, Akihiro
Hasegawa, Osamu
TI Developmental word acquisition and grammar learning by humanoid robots
through a self-organizing incremental neural network
SO IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS
LA English
DT Article
DE interactive learning; language; mental models; robot; word grounding
ID RECOGNITION; SPEECH
AB We present a new approach for online incremental word acquisition and grammar
learning by humanoid robots. Using no data set provided in advance, the proposed
system grounds language in a physical context, as mediated by its perceptual
capacities. Learning is carried out using show-and-tell procedures in interaction
with a human partner. Moreover, this procedure is open-ended for new words and
multiword utterances. These facilities are supported by a self-organizing
incremental neural network, which can execute online unsupervised classification
and topology learning. Embodied with mental imagery, the system also learns, by
both top-down and bottom-up processes, the syntactic structures that are contained
in utterances. Thereby, it performs simple grammar learning. Under such a multimodal
scheme, the robot is able to describe online a given physical context (both static
and dynamic) through natural language expressions. It can also perform actions
through verbal interactions with its human partner.
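
As a rough, simplified illustration of online unsupervised incremental
classification in the spirit of a self-organizing incremental neural network (this
is not the SOINN algorithm itself), the sketch below creates a new node whenever an
input is farther than a threshold from every existing node, and otherwise nudges the
nearest node toward the input. All parameters are invented.

import numpy as np

class IncrementalClusters:
    def __init__(self, threshold=1.0, lr=0.1):
        self.nodes, self.threshold, self.lr = [], threshold, lr

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            return 0
        d = [np.linalg.norm(x - n) for n in self.nodes]
        i = int(np.argmin(d))
        if d[i] > self.threshold:
            self.nodes.append(x.copy())                  # open a new category online
            return len(self.nodes) - 1
        self.nodes[i] += self.lr * (x - self.nodes[i])   # refine the winning node
        return i

clusters = IncrementalClusters(threshold=0.8)
for x in np.random.default_rng(5).normal(size=(50, 4)):
    clusters.observe(x)
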
C1 NEC Corp Ltd, Tokyo 1088001, Japan.
Tokyo Inst Technol, Dept Computat Intelligence & Syst Sci, Yokohama, Kanagawa
2268503, Japan.
Tokyo Inst Technol, Imaging Sci & Engn Lab, Yokohama, Kanagawa 2268503, Japan.
RP He, XY (reprint author), NEC Corp Ltd, Tokyo 1088001, Japan.
EM gashoenhxy@gmail.com; ogutomo@herb.ocn.ne.jp; snowmoon@isl.titech.ac.jp;
oh@ieee.org
CR ELMAN JL, 1993, COGNITION, V48, P71, DOI 10.1016/0010-0277(93)90058-4
Gersho A., 1992, VECTOR QUANTIZATION
Hamad S, 1990, PHYSICA D, V42, P335, DOI DOI 10.1016/0167-2789(90)90087-6
He XY, 2007, IEEE T SYST MAN CY B, V37, P451, DOI 10.1109/TSMCB.2006.885309
IMAI S, 2002, SPEECH SIGNAL PROCES
Iwahashi N, 2003, INFORM SCIENCES, V156, P109, DOI [10.1016/S0020-0255(03)00167-
1, 10.1016/S0020-0255(03)00167-0]
KAMIYA Y, 2004, P JOINT 2 INT C SOFT
MYERS CS, 1981, AT&T TECH J, V60, P1389, DOI 10.1002/j.1538-7305.1981.tb00272.x
RABINER LR, 1989, P IEEE, V77, P257, DOI 10.1109/5.18626
Regier T., 1996, HUMAN SEMANTIC POTEN
Roy D, 2004, IEEE T SYST MAN CY B, V34, P1374, DOI 10.1109/TSMCB.2004.823327
Roy DK, 2002, COGNITIVE SCI, V26, P113, DOI 10.1016/S0364-0213(01)00061-1
Russell S., 2003, ARTIFICIAL INTELLIGE
Shen FR, 2006, NEURAL NETWORKS, V19, P90, DOI 10.1016/j.neunet.2005.04.006
Siskind JM, 1996, COGNITION, V61, P39, DOI 10.1016/S0010-0277(96)00728-7
Steels L, 2003, ROBOT AUTON SYST, V43, P163, DOI 10.1016/S0921-8890(02)00357-3
Steels L., 1997, P 4 EUR C ART LIF, P474
Steels L., 2002, TRANSITION LANGUAGE, P252
THRUN S, 1995, P INT JOINT C ART IN, P1217
Vogt P, 2005, ARTIF INTELL, V167, P206, DOI 10.1016/j.artint.2005.04.010
WACHSMUTH S, 2000, VIDERE J COMPUT VIS, V1, P61
Weng JY, 2001, SCIENCE, V291, P599, DOI 10.1126/science.291.5504.599
Yu C, 2004, PROCEEDING OF THE NINETEENTH NATIONAL CONFERENCE ON ARTIFICIAL
INTELLIGENCE AND THE SIXTEENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL
INTELLIGENCE, P488
2004, P 13 IEEE WORKSH ROB, P437
NR 24
TC 8
Z9 9
U1 0
U2 4
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855 USA
SN 1083-4419
J9 IEEE T SYST MAN CY B
JI IEEE Trans. Syst. Man Cybern. Part B-Cybern.
PD OCT
PY 2007
VL 37
IS 5
BP 1357
EP 1372
DI 10.1109/TSMCB.2007.903447
PG 16
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Computer Science, Cybernetics
SC Automation & Control Systems; Computer Science
GA 212KF
UT WOS:000249594500024
PM 17926715
DA 2018-01-22
ER

PT J
AU Hirukawa, H
Kanehiro, F
Kaneko, K
Kajita, S
Morisawa, M
AF Hirukawa, Hirohisa
Kanehiro, Fumio
Kaneko, Kenji
Kajita, Shuuji
Morisawa, Mitsuharu
TI Dinosaur robotics for entertainment applications
SO IEEE ROBOTICS & AUTOMATION MAGAZINE
LA English
DT Article
DE biped locomotion; humanoid robot; dinosaur robot; entertainment
C1 AIST, Intelligent Syst Inst, Humanoid Robot Grp, AIST Tsukuba Cent 1, Tsukuba,
Ibaraki 3058561, Japan.
Natl Inst Adv Ind Sci & Technol, Tokyo 1008921, Japan.
RP Hirukawa, H (reprint author), AIST, Intelligent Syst Inst, Humanoid Robot Grp,
AIST Tsukuba Cent 1, Tsukuba, Ibaraki 3058561, Japan.
EM hiro.hirukawa@aist.go.jp
RI Kajita, Shuuji/M-5010-2016; Hirukawa, Hirohisa/B-4209-2017; Kanehiro,
Fumio/L-8660-2016; Morisawa, Mitsuharu/M-3327-2016; KANEKO,
Kenji/M-5360-2016
OI Kajita, Shuuji/0000-0001-8188-2209; Hirukawa,
Hirohisa/0000-0001-5779-011X; Kanehiro, Fumio/0000-0002-0277-3467;
Morisawa, Mitsuharu/0000-0003-0056-4335; KANEKO,
Kenji/0000-0002-1888-8787
CR BONACCI N, 1988, AIRCRAFT SHEET METAL
*EXPO 2005, AICH
Hayase M., 1969, Transactions of the Society of Instrument and Control
Engineers, V5, P86
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
KANEKO K, 2006, P 2006 IEEE INT C IN, P5496
KATAYAMA T, 1985, INT J CONTROL, V41, P677, DOI 10.1080/0020718508961156
MORISAWA M, 2005, P IEEE RSJ INT C INT, P31
SHERIDAN TB, 1966, IEEE TRANS HUM FACT, VHFE7, P91, DOI 10.1109/THFE.1966.232329
TOMIZUKA M, 1979, ASME, V101, P172
MIT LEG LAB
INT ORG STAND
NR 13
TC 3
Z9 3
U1 0
U2 3
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1070-9932
EI 1558-223X
J9 IEEE ROBOT AUTOM MAG
JI IEEE Robot. Autom. Mag.
PD SEP
PY 2007
VL 14
IS 3
BP 43
EP 51
DI 10.1109/M-RA.2007.901318
PG 9
WC Automation & Control Systems; Robotics
SC Automation & Control Systems; Robotics
GA 217PF
UT WOS:000249959100008
DA 2018-01-22
ER

PT J
AU Takamuku, S
Arkin, RC
AF Takamuku, Shinya
Arkin, Ronald C.
TI Multi-method learning and assimilation
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE humanoid robot; multi-method learning; social learning
ID ROBOT
AB Considering the wide range of possible behaviours to be acquired for domestic
robots, applying a single learning method is clearly insufficient. In this paper,
we propose a new strategy for behaviour acquisition for domestic robots where the
behaviours are acquired using multiple differing learning methods that are
subsequently incorporated into a common behaviour selection system, enabling them
to be performed in appropriate situations. An example of the implementation of this
strategy applied to the entertainment humanoid robot QRIO is introduced and the
results are discussed. (c) 2007 Elsevier B.V. All rights reserved.
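
One way to picture the common behaviour-selection system described above is sketched
below: behaviours acquired by different learning methods each expose an activation
score for the current situation, and the most activated behaviour is executed. The
behaviour names, scores, and situation keys here are hypothetical.

def select_behaviour(behaviours, situation):
    # Pick the behaviour whose activation function rates the situation highest.
    return max(behaviours, key=lambda b: b["activation"](situation))

behaviours = [
    {"name": "kick_ball", "activation": lambda s: 1.0 if s.get("ball_near") else 0.0},
    {"name": "wave_hand", "activation": lambda s: 0.8 if s.get("person_near") else 0.0},
    {"name": "idle",      "activation": lambda s: 0.1},
]
print(select_behaviour(behaviours, {"person_near": True})["name"])   # -> wave_hand
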
C1 Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst, Suita, Osaka 5650871, Japan.
Georgia Inst Technol, Coll Comp, Mobile Robot Lab, Atlanta, GA 30332 USA.
Georgia Inst Technol, GVU Ctr, Atlanta, GA 30332 USA.
RP Takamuku, S (reprint author), Osaka Univ, Grad Sch Engn, Dept Adapt Machine
Syst, 2-1 Yamadaoka, Suita, Osaka 5650871, Japan.
EM takamuku@er.ams.eng.osaka-u.ac.jp; arkin@cc.gatech.edu
CR ARKIN RC, 2001, P ICRA 01, V1, P453
Arkin R. C, 1998, BEHAV BASED ROBOTICS
Caruana R, 1997, MACH LEARN, V28, P41, DOI 10.1023/A:1007379606734
Flavell J. H., 1963, DEV PSYCHOL J PIAGET
Fujita M, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P960
Fujita M, 1998, AUTON ROBOT, V5, P7, DOI 10.1023/A:1008856824126
FUJITA M, 1997, OPEN ARCHITECTURE RO, P435
GORDON D, 1993, INFORMATICA, V17, P331
HOSHINO Y, 2004, P IEEE INT C ROB AUT, V4, P4165
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
Kaplan F, 2004, LECT NOTES ARTIF INT, V3139, P259
Lindblom J., 2002, Proceedings of the Second International Workshop on
Epigenetic Robotics. Modeling Cognitive Development in Robotics Systems, P71
Metta G, 2003, ADAPT BEHAV, V11, P109, DOI 10.1177/10597123030112004
MICHALSKI R, 1993, MACHINE LEARNINF MUT, V4
Ogata T, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P162
OHNAKA S, 2001, IPSJ SIG NOTES, V37, P37
Peters J., 2003, 3 IEEE RAS INT C HUM
RAM A, 1993, INFORMATICA, V17, P347
SABE K, 2005, P IEEE INT C ROB AUT
SAWADA T, 2004, P IEEE INT C ROB AUT
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
SHIBATA T, 2002, J ROBOTICS MECHATRON, V14, P13
SMOLA A, 1998, NCTR98030 ROYAL HLLO
Sutton RS, 1998, REINFORCEMENT LEARNI
Tedrake R, 2005, P 14 YAL WORKSH AD L
Wolpert D. H., 1997, IEEE Transactions on Evolutionary Computation, V1, P67, DOI
10.1109/4235.585893
NR 26
TC 5
Z9 5
U1 0
U2 0
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD AUG 31
PY 2007
VL 55
IS 8
BP 618
EP 627
DI 10.1016/j.robot.2007.04.001
PG 10
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 204DW
UT WOS:000249024800002
DA 2018-01-22
ER

PT J
AU Hasanuzzaman, M
Zhang, T
Ampornaramveth, V
Gotoda, H
Shirai, Y
Ueno, H
AF Hasanuzzaman, Md.
Zhang, T.
Ampornaramveth, V.
Gotoda, H.
Shirai, Y.
Ueno, H.
TI Adaptive visual gesture recognition for human-robot interaction using a
knowledge-based software platform
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE adaptive visual gesture recognition; human-robot interaction;
multi-cluster based learning; SPAK
AB In human-human communication, we can adapt to or learn new gestures or new users
using intelligence and contextual information. To achieve natural gesture-based
interaction between humans and robots, the system should be adaptable to new users,
gestures, and robot behaviors. This paper presents an adaptive visual gesture
recognition method for human-robot interaction using a knowledge-based software
platform. The system is capable of recognizing users, static gestures composed of
face and hand poses, and dynamic gestures of the face in motion. The system learns
new users and poses using a multi-cluster approach, and combines computer vision and
knowledge-based approaches in order to adapt to new users, gestures, and robot
behaviors. In the proposed method, a frame-based knowledge model is defined for the
person-centric gesture interpretation and human-robot interaction. It is
implemented using the frame-based Software Platform for Agent and Knowledge
Management (SPAK). The effectiveness of this method has been demonstrated by an
experimental human-robot interaction system using a humanoid robot 'Robovie'. (c)
2007 Elsevier B.V. All rights reserved.
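
A hedged sketch of the multi-cluster recognition idea mentioned above: each known
pose keeps several feature-space centroids, a new observation is assigned to the
pose of the nearest centroid, and observations beyond a rejection distance are
flagged as unknown (candidates for learning a new user or pose). Feature extraction
and the SPAK knowledge layer from the paper are omitted, and all values are invented.

import numpy as np

def classify(feature, pose_clusters, reject_dist=2.0):
    # Nearest-centroid search over all clusters of all known poses.
    best_pose, best_d = None, np.inf
    for pose, centroids in pose_clusters.items():
        for c in centroids:
            d = np.linalg.norm(feature - c)
            if d < best_d:
                best_pose, best_d = pose, d
    return best_pose if best_d <= reject_dist else "unknown"

poses = {"open_hand": [np.zeros(8)], "fist": [np.ones(8), 1.2 * np.ones(8)]}
print(classify(np.full(8, 0.1), poses))   # -> open_hand
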
C1 Intelligent Syst Res Div, Dept Human & Comp Intelligence, Chiyoda Ku, Tokyo
1018430, Japan.
Ritsumeikan Univ, Sch Informat Sci & Engn, Dept Human & Comp Intelligence, Shiga
5258577, Japan.
RP Hasanuzzaman, M (reprint author), Intelligent Syst Res Div, Dept Human & Comp
Intelligence, Chiyoda Ku, 2-1-2 Hitotsubashi, Tokyo 1018430, Japan.
EM hzamancsdu@yahoo.com
CR AMPORNARAMVETH V, 2004, IEICE T INF SYST, V86, P1
ARYANANDA L, 2002, P IEEE RSJ INT C INT, V2, P1202
AXTELL RE, MEANING HAND GESTURE
Bhuiyan MA, 2004, INT J ROBOT AUTOM, V19, P42
Bhuiyan Md. Al- Amin, 2003, NII J, V5, P25
Fong T, 2003, ROBOT AUTON SYST, V42, P143, DOI 10.1016/S0921-8890(02)00372-X
Hasanuzzaman M, 2006, IND ROBOT, V33, P37, DOI 10.1108/01439910610638216
Hasanuzzaman M, 2004, LECT NOTES COMPUT SC, V3331, P369
Hasanuzzanum M., 2005, INFORM TECHNOLOGY J, V4, P496
Hasanuzzaman M., 2004, P IEEE INT C ROB BIO, P379
Kanda T, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P1265, DOI 10.1109/IRDS.2002.1043918
KIATISEVI P, 2004, P INT C INF TECHN AP, P256
Kortenkamp D, 1996, PROCEEDINGS OF THE THIRTEENTH NATIONAL CONFERENCE ON
ARTIFICIAL INTELLIGENCE AND THE EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL
INTELLIGENCE CONFERENCE, VOLS 1 AND 2, P915
MINSKY M, 1974, MIT AL LAB MEMO, P306
Patterson DW, 1990, INTRO ARTIFICIAL INT
Pavlovic VI, 1997, IEEE T PATTERN ANAL, V19, P677, DOI 10.1109/34.598226
Satoh S., 2000, Proceedings Fourth IEEE International Conference on Automatic
Face and Gesture Recognition (Cat. No. PR00580), P163, DOI 10.1109/AFGR.2000.840629
STURMAN DJ, 1994, IEEE COMPUT GRAPH, V14, P30, DOI 10.1109/38.250916
TORRAS C, 1995, ROBOT AUTON SYST, V15, P11, DOI 10.1016/0921-8890(95)00013-6
TURK M, 1991, J COGNITIVE NEUROSCI, V3, P71, DOI 10.1162/jocn.1991.3.1.71
Ueno H, 2002, IEICE T INF SYST, VE85D, P657
WALDHERR S, 2000, J AUTONOMOUS ROBOTS, P151
Yang MH, 2002, IEEE T PATTERN ANAL, V24, P34, DOI 10.1109/34.982883
ZHANG T, 2004, P 6 JOINT C KNOWL BA, P149
ZHANG T, 2004, P IEEE INT C SYST MA, P2865
NR 25
TC 30
Z9 30
U1 2
U2 16
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD AUG 31
PY 2007
VL 55
IS 8
BP 643
EP 657
DI 10.1016/j.robot.2007.03.002
PG 15
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 204DW
UT WOS:000249024800004
DA 2018-01-22
ER

PT J
AU Hirai, M
Hiraki, K
AF Hirai, Masahiro
Hiraki, Kazuo
TI Differential neural responses to humans vs. robots: An event-related
potential study
SO BRAIN RESEARCH
LA English
DT Article
DE action observation; inversion effect; event-related potential; human and
robotic appearance; biological motion
ID BIOLOGICAL MOTION PERCEPTION; HUMAN FACE PERCEPTION; UPSIDE-DOWN FACES;
BRAIN ACTIVITY; OCCIPITOTEMPORAL CORTEX; PREMOTOR CORTEX; INVERTED
FACES; HUMAN-BODY; RECOGNITION; INVERSION
AB Do we perceive humanoid robots as human beings? Recent neuroimaging studies have
reported similarity in the neural processing of human and robot actions in the
superior temporal sulcus area but a differential neural response in the premotor
area. These studies suggest that the neural activity of the occipitotemporal region
would not be affected by appearance information. Unlike those studies, in this
study, by using the inversion effect as an index, we demonstrated for the first
time that the appearance information of a presented action affects neural responses
in the occipitotemporal region. In event-related potential (ERP) studies, the
inversion effect is the phenomenon whereby an upright face- and body-sensitive ERP
component in the occipitotemporal region is enhanced and delayed up to 200 ms in
response to an inverted face and body, but not to an inverted object. We used three
kinds of walking animation with different appearance information (human, robot, and
point-light) as well as inverted stimuli of each appearance. The anatomical
structure and walking speed of the presented stimuli were all identical. The
results showed that the inversion effect occurred in the right occipitotemporal
region only in response to human appearance, and not robotic and point-light
appearances. That is, only for the human appearance was the amplitude in the inverted
condition significantly larger than that in the upright condition. Our results,
which are contrary to other recent neuroimaging studies, suggested that appearance
information affects the neural response in the occipitotemporal region. (c) 2007
Elsevier B.V. All rights reserved.
C1 Univ Tokyo, Grad Sch Arts & Sci, Dept Multi Disciplinary Sci, Course Gen Syst
Studies,Meguro Ku, Tokyo 1538902, Japan.
RP Hirai, M (reprint author), Univ Tokyo, Grad Sch Arts & Sci, Dept Multi
Disciplinary Sci, Course Gen Syst Studies,Meguro Ku, 3-4-1 Komaba, Tokyo 1538902,
Japan.
EM hirai@ardbeg.c.u-tokyo.ac.jp; khiraki@idea.c.u-tokyo.ac.jp
CR Aguirre GK, 1998, NEURON, V21, P373, DOI 10.1016/S0896-6273(00)80546-2
Allison T, 1999, CEREB CORTEX, V9, P415, DOI 10.1093/cercor/9.5.415
Bauml KH, 1997, ACTA PSYCHOL, V95, P107, DOI 10.1016/S0001-6918(96)00039-X
Bentin S, 1996, J COGNITIVE NEUROSCI, V8, P551, DOI 10.1162/jocn.1996.8.6.551
BERTENTHAL BI, 1987, J EXP PSYCHOL HUMAN, V13, P577, DOI 10.1037//0096-
1523.13.4.577
Bonda E, 1996, J NEUROSCI, V16, P3737
Castiello U, 2003, J EXP PSYCHOL HUMAN, V29, P416, DOI 10.1037/0096-
1523.29.2.416
Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
DIAMOND R, 1986, J EXP PSYCHOL GEN, V115, P107, DOI 10.1037/0096-3445.115.2.107
DITTRICH WH, 1993, PERCEPTION, V22, P15, DOI 10.1068/p220015
Dittrich WH, 1996, PERCEPTION, V25, P727, DOI 10.1068/p250727
Downing PE, 2001, SCIENCE, V293, P2470, DOI 10.1126/science.1063414
Farah MJ, 1998, PSYCHOL REV, V105, P482, DOI 10.1037/0033-295X.105.3.482
FARAH MJ, 1995, J EXP PSYCHOL HUMAN, V21, P628, DOI 10.1037//0096-1523.21.3.628
FARAH MJ, 1995, VISION RES, V35, P2089, DOI 10.1016/0042-6989(94)00273-O
Foucher JR, 2003, BMC NEUROSCI, V4, DOI 10.1186/1471-2202-4-22
Gallese V, 1996, BRAIN, V119, P593, DOI 10.1093/brain/119.2.593
Gauthier I, 1997, VISION RES, V37, P1673, DOI 10.1016/S0042-6989(96)00286-6
Gauthier I, 2002, J EXP PSYCHOL HUMAN, V28, P431, DOI 10.1037//0096-
1523.28.2.431
Gauthier I, 2000, NAT NEUROSCI, V3, P191, DOI 10.1038/72140
Grossman E, 2000, J COGNITIVE NEUROSCI, V12, P711, DOI 10.1162/089892900562417
Grossman ED, 2001, VISION RES, V41, P1475, DOI 10.1016/S0042-6989(00)00317-5
Haxby JV, 1999, NEURON, V22, P189, DOI 10.1016/S0896-6273(00)80690-X
Hirai M, 2006, COGNITION, V99, pB15, DOI 10.1016/j.cognition.2005.05.003
Hirai M, 2005, COGNITIVE BRAIN RES, V23, P387, DOI
10.1016/j.cogbrainres.2004.11.005
Hirai M, 2003, NEUROSCI LETT, V344, P41, DOI 10.1016/S0304-3940(03)00413-0
Hirai M, 2006, NEUROSCI LETT, V403, P299, DOI 10.1016/j.neulet.2006.05.002
HOMAN RW, 1987, ELECTROEN CLIN NEURO, V66, P376, DOI 10.1016/0013-4694(87)90206-
9
Ishiguro H, 2003, SPRINGER TRAC ADV RO, V6, P179
Itier RJ, 2006, NEUROIMAGE, V29, P667, DOI 10.1016/j.neuroimage.2005.07.041
Itier RJ, 2004, NEUROREPORT, V15, P1261, DOI 10.1097/01.wnr.0000127827.3576.d8
JOHANSSON G, 1973, PERCEPT PSYCHOPHYS, V14, P201, DOI 10.3758/BF03212378
Jokisch D, 2005, BEHAV BRAIN RES, V157, P195, DOI 10.1016/j.bbr.2004.06.025
Kanwisher N, 2000, NAT NEUROSCI, V3, P759, DOI 10.1038/77664
Kilner JM, 2003, CURR BIOL, V13, P522, DOI 10.1016/S0960-9822(03)00165-9
Linkenkaer-Hansen K, 1998, NEUROSCI LETT, V253, P147, DOI 10.1016/S0304-
3940(98)00586-2
McCarthy G, 1999, CEREB CORTEX, V9, P431, DOI 10.1093/cercor/9.5.431
MELTZOFF AN, 1995, DEV PSYCHOL, V31, P838, DOI 10.1037/0012-1649.31.5.838
Moscovitch M, 1997, J COGNITIVE NEUROSCI, V9, P555, DOI
10.1162/jocn.1997.9.5.555
Pavlova M, 2000, PERCEPT PSYCHOPHYS, V62, P889, DOI 10.3758/BF03212075
Peelen MV, 2005, J NEUROPHYSIOL, V93, P603, DOI 10.1152/jn.00513.2004
Pelphrey KA, 2003, J NEUROSCI, V23, P6819
PERRETT D I, 1990, International Journal of Comparative Psychology, V4, P25
Perrett DI, 1998, COGNITION, V67, P111, DOI 10.1016/S0010-0277(98)00015-8
PERRETT DI, 1988, BEHAV BRAIN RES, V29, P245, DOI 10.1016/0166-4328(88)90029-0
Press C, 2005, COGNITIVE BRAIN RES, V25, P632, DOI
10.1016/j.cogbrainres.2005.08.020
Rakover SS, 1997, PERCEPT PSYCHOPHYS, V59, P752, DOI 10.3758/BF03206021
Reed CL, 2003, PSYCHOL SCI, V14, P302, DOI 10.1111/1467-9280.14431
RHODES G, 1993, BRAIN COGNITION, V22, P19, DOI 10.1006/brcg.1993.1022
RHODES G, 1993, COGNITION, V47, P25, DOI 10.1016/0010-0277(93)90061-Y
Rossignol MJ, 2001, EUR STUD AMER HIST, V1, P63
Rossion B, 2000, NEUROREPORT, V11, P69, DOI 10.1097/00001756-200001170-00014
Rossion B, 2003, NEUROIMAGE, V20, P1609, DOI 10.1016/j.neuroimage.2003.07.010
Rossion B, 2002, PSYCHOL SCI, V13, P250, DOI 10.1111/1467-9280.00446
Rousselet GA, 2004, J VISION, V4, P13, DOI 10.1167/4.1.2
SHIFFRAR M, 1990, PSYCHOL SCI, V1, P257, DOI 10.1111/j.1467-9280.1990.tb00210.x
Stekelenburg JJ, 2004, NEUROREPORT, V15, P777, DOI
10.1097/01.wnr.0000119730.93564.e8
SUMI S, 1984, PERCEPTION, V13, P283, DOI 10.1068/p130283
Tai YF, 2004, CURR BIOL, V14, P117, DOI 10.1016/j.cub.2004.01.005
Taylor MJ, 2001, NEUROREPORT, V12, P1671, DOI 10.1097/00001756-200106130-00031
Tucker DM, 1993, ELECTROENCEPHALOGR C, V87, P145
VALENTINE T, 1988, BRIT J PSYCHOL, V79, P471, DOI 10.1111/j.2044-
8295.1988.tb02747.x
Watanabe S, 2003, NEUROSCIENCE, V116, P879, DOI 10.1016/S0306-4522(02)00752-2
Watanabe S, 2002, NEUROSCI LETT, V325, P163, DOI 10.1016/S0304-3940(02)00257-4
Woodward AL, 1998, COGNITION, V69, P1, DOI 10.1016/S0010-0277(98)00058-4
YIN RK, 1969, J EXP PSYCHOL, V81, P141, DOI 10.1037/h0027474
NR 66
TC 4
Z9 4
U1 1
U2 3
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0006-8993
J9 BRAIN RES
JI Brain Res.
PD AUG 24
PY 2007
VL 1165
BP 105
EP 115
DI 10.1016/j.brainres.2007.05.078
PG 11
WC Neurosciences
SC Neurosciences & Neurology
GA 210OC
UT WOS:000249464500013
PM 17658496
DA 2018-01-22
ER

PT J
AU Valin, JM
Yamamoto, S
Rouat, J
Michaud, F
Nakadai, K
Okuno, HG
AF Valin, Jean-Marc
Yamamoto, Shun'ichi
Rouat, Jean
Michaud, Francois
Nakadai, Kazuhiro
Okuno, Hiroshi G.
TI Robust recognition of simultaneous speech by a mobile robot
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article
DE cocktail party; geometric; source separation (GSS); microphone array;
missing feature theory; robot audition; speech recognition
ID SPECTRAL AMPLITUDE ESTIMATOR; SOUND LOCALIZATION; ENHANCEMENT; MODEL
AB This paper describes a system that gives a mobile robot the ability to perform
automatic speech recognition with simultaneous speakers. A microphone array is used
along with a real-time implementation of geometric source separation (GSS) and a
postfilter that gives a further reduction of interference from other sources. The
postfilter is also used to estimate the reliability of spectral features and
compute a missing feature mask. The mask is used in a missing feature theory-based
speech recognition system to recognize the speech from simultaneous Japanese
speakers in the context of a humanoid robot. Recognition rates are presented for
three simultaneous speakers located at 2 m from the robot. The system was
evaluated on a 200-word vocabulary at different azimuths between sources, ranging
from 10 degrees to 90 degrees. Compared to the use of the microphone array source
separation alone, we demonstrate an average reduction in relative recognition error
rate of 24% with the postfilter and of 42% when the missing features approach is
combined with the postfilter. We demonstrate the effectiveness of our multisource
microphone array postfilter and the improvement it provides when used in
conjunction with the missing features theory.
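
The missing-feature mask mentioned in the abstract can be pictured, under
simplifying assumptions, as a per-bin reliability decision based on the
post-filter's estimated local SNR; the sketch below is illustrative only and is not
the paper's implementation.

import numpy as np

def missing_feature_mask(separated_power, noise_estimate, snr_threshold_db=0.0):
    # Return a binary mask, one entry per time-frequency bin:
    # 1 = reliable feature, 0 = missing (to be ignored or marginalised by the ASR).
    snr_db = 10.0 * np.log10(separated_power / np.maximum(noise_estimate, 1e-12))
    return (snr_db > snr_threshold_db).astype(float)

rng = np.random.default_rng(3)
sep = rng.gamma(2.0, size=(100, 32))     # toy post-filtered spectrogram (frames x bins)
noise = np.full_like(sep, 1.0)           # toy residual-interference estimate
mask = missing_feature_mask(sep, noise)
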
C1 CSIRO, ICT Ctr, Sydney, NSW, Australia.
Univ Sherbrooke, Dept Elect & Comp Engn, Sherbrooke, PQ J1K 2R1, Canada.
Kyoto Univ, Grad Sch Informat, Dept Intelligence Sci & Technol, Kyoto 6068501,
Japan.
Honda Res Inst Japan Co Ltd, Saitama 3510114, Japan.
RP Valin, JM (reprint author), CSIRO, ICT Ctr, Sydney, NSW, Australia.
EM jean-marc.valin@csiro.au; shunichi@kuis.kyoto-u.ac.jp;
jean.rouat@usherbrooke.ca; francois.michaud@usherbrooke.ca;
nakadai@jp.honda-ri.com; okuno@kuis.kyoto-u.ac.jp
OI Okuno, Hiroshi/0000-0002-8704-4318
CR Aarabi P, 2004, IEEE T SYST MAN CY B, V34, P1763, DOI 10.1109/TSMCB.2004.830345
Araki S., 2004, P IEEE INT C AC SPEE, P881, DOI DOI 10.1109/ICASSP.2004.1326686
Asano F., 2001, P INT C SPEECH PROC, P1013
Asano F., 1999, P IEEE INT C MULT FU, P243, DOI DOI 10.1109/MFI.1999.815997
Asoh H., 2004, P FUS, P805
ASOH H, 1997, P 15 INT JOINT C ART, V1, P880
BARKER J, 2000, P ICSLP 2000, V1, P373
Barker J., 2001, P EUR 2001 ESCA, P213
Blanchet M., 1992, P EUSIPCO 92, VVI, P391
Boll S., 1979, P 1979 INT C AC SPEE, P200
Brooks R. A., 1999, Computation for metaphors, analogy, and agents, P52, DOI
10.1007/3-540-48834-0_5
CHERRY EC, 1953, J ACOUST SOC AM, V25, P975, DOI 10.1121/1.1907229
Cho HY, 2004, IEEE SIGNAL PROC LET, V11, P581, DOI 10.1109/LSP.2004.827922
CHOI C, 2003, P IEEE RSJ INT C INT, P3516
Cohen I, 2001, SIGNAL PROCESS, V81, P2403, DOI 10.1016/S0165-1684(01)00128-1
COHEN I, 2002, P IEEE INT C AC SPEE, P901
COOKE M, 1994, P ICSLP, P1555
COOKE M, 2001, P IEEE INT C SPOK LA, V34, P267
EPHRAIM Y, 1985, IEEE T ACOUST SPEECH, V33, P443, DOI 10.1109/TASSP.1985.1164550
EPHRAIM Y, 1984, IEEE T ACOUST SPEECH, V32, P1109, DOI
10.1109/TASSP.1984.1164453
Fujita M, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P960
Haykin S., 2002, ADAPTIVE FILTER THEO
Huang J, 1999, ROBOT AUTON SYST, V27, P199, DOI 10.1016/S0921-8890(99)00002-0
IRIE R, 1995, THESIS MIT
Lee A, 2001, P EUR C SPEECH COMM, P1691
Lippmann R. P., 1987, Proceedings: ICASSP 87. 1987 International Conference on
Acoustics, Speech, and Signal Processing (Cat. No.87CH2396-0), P705
Matsusaka Y., 1999, P 6 EUR C SPEECH COM, P1723
MATSUSAKA Y, 2001, P EUROSPEECH, P2171
MCCOWAN I, 2002, P IEEE INT C SPOK LA, P2181
MCCOWAN I, 2002, P IEEE INT C AC SPEE, V1, P905
Ming J, 2003, COMPUT SPEECH LANG, V17, P287, DOI 10.1016/S0885-2308(03)00003-2
MING J, 2004, IEEE T SPEECH AUDIO, V10, P403
Mungamuru B, 2004, IEEE T SYST MAN CY B, V34, P1526, DOI
10.1109/TSMCB.2004.826398
Nakadai K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1147
Nakadai K, 2000, SEVENTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE
(AAAI-2001) / TWELFTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE
(IAAI-2000), P832
Nakadai K, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P1453, DOI 10.1109/IROS.2000.893225
Nakadai K., 2002, P IEEE INT C SPOK LA, P193
Nakadai K., 2001, P EUROSPEECH, P1193
NAKADAI K, 2002, P 18 NAT C ART INT A, P431
O'Shaughnessy D, 2003, P IEEE, V91, P1272, DOI 10.1109/JPROC.2003.817117
Okuno HG, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1402, DOI
10.1109/IROS.2001.977177
Parra LC, 2002, IEEE T SPEECH AUDI P, V10, P352, DOI 10.1109/TSA.2002.803443
PEARCE D, 2001, P IEEE AUT SPEECH RE, P131
Prodanov PJ, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P1332, DOI 10.1109/IRDS.2002.1043939
Raj B, 2004, SPEECH COMMUN, V43, P275, DOI 10.1016/j.specom.2004.03.007
Renevey P., 2001, P EUR 2001 ESCA, P1107
Seltzer ML, 2004, SPEECH COMMUN, V43, P379, DOI 10.1016/j.specom.2004.03.006
SELTZER ML, 2003, P IEEE INT C AC SPEE, P408
Theobalt C, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P1338, DOI 10.1109/IRDS.2002.1043940
VALIN J, 2006, P ICASSP, P841
Valin JM, 2007, ROBOT AUTON SYST, V55, P216, DOI 10.1016/j.robot.2006.08.004
VALIN JM, 2004, P IEEE INT C AC SPEE, P1
VALLIN JM, 2004, P IEEE RSJ INT C INT, P2123
VALLIN JM, 2004, P IEEE INT C ROB AUT, V1, P1033
VANHAMME H, 2003, P EUROSPEECH, P1973
Van hamme H., 2004, P ICASSP, P213
Wang Q. H., 2004, Information Fusion, V5, P131, DOI 10.1016/j.inffus.2003.10.002
Yamamoto S., 2004, P IEEE INT C ROB AUT, P1517
YAMAMOTO S, 2004, P IEEE RSJ INT C INT, P2111
Zelinski R., 1988, P IEEE INT C AC SPEE, V5, P2578
ZHANG Y, 2001, P INNS IEEE INT JOIN, P1059
NR 61
TC 43
Z9 43
U1 0
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2007
VL 23
IS 4
BP 742
EP 752
DI 10.1109/TRO.2007.900612
PG 11
WC Robotics
SC Robotics
GA 199BN
UT WOS:000248671300009
DA 2018-01-22
ER

PT J
AU Neo, ES
Yokoi, K
Kajita, S
Tanie, K
AF Neo, Ee Sian
Yokoi, Kazuhito
Kajita, Shuuji
Tanie, Kazuo
TI Whole-body motion generation integrating operator's intention and
robot's autonomy in controlling humanoid robots
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article; Proceedings Paper
CT IEEE International Conference on Robotics and Automation
CY APR 26-MAY 01, 2004
CL New Orleans, LA
SP IEEE Robot & Automat Soc, IEEE
DE humanoid robot; motion control; teleoperation
ID MANIPULATORS
AB This paper introduces a framework for whole-body motion generation integrating
operator's control and robot's autonomous functions during online control of
humanoid robots. Humanoid robots are biped machines that usually possess multiple
degrees of freedom (DOF). The complexity of their structure and the difficulty in
maintaining postural stability make the whole-body control of humanoid robots
fundamentally different from fixed-base manipulators. Getting hints from human
conscious and subconscious motion generations, the authors propose a method of
generating whole-body motions that integrates the operator's command input and the
robot's autonomous functions. Instead of giving commands to all the joints all the
time, the operator selects only the necessary points of the humanoid robot's body
for manipulation. This paper first explains the concept of the system and the
framework for integrating operator's command and autonomous functions in whole-body
motion generation. Using the framework, autonomous functions were constructed for
maintaining the postural stability constraint while satisfying the desired
trajectories of the operation points, including the feet, during interaction with
the environment.
Finally, this paper reports on the implementation of the proposed method to
teleoperate two 30-DOF humanoid robots, HRP-1S and HRP-2, by using only two 3-DOF
joysticks. Experiments teleoperating the two robots are reported to verify the
effectiveness of the proposed method.
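
A minimal sketch of the integration idea, assuming stacked task Jacobians: the
operator commands a velocity for a selected operation point (e.g., a hand) while an
autonomous CoM task corrects balance drift, and joint velocities are obtained by
solving both tasks in a least-squares sense. The Jacobians and gains are toy
stand-ins, not the HRP-1S/HRP-2 controllers of the paper.

import numpy as np

rng = np.random.default_rng(4)
n_joints = 30
J_hand = rng.normal(size=(3, n_joints))   # operator-selected operation point (toy Jacobian)
J_com  = rng.normal(size=(2, n_joints))   # horizontal CoM Jacobian (autonomous stability task)

def whole_body_qdot(v_hand_cmd, com_error, gain=1.0):
    # Stack the operator task and the stability task and solve for joint velocities.
    J = np.vstack([J_hand, J_com])
    v = np.concatenate([v_hand_cmd, -gain * com_error])
    return np.linalg.lstsq(J, v, rcond=None)[0]

qdot = whole_body_qdot(np.array([0.05, 0.0, 0.0]), np.array([0.01, 0.0]))
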
C1 Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst, Tsukuba 3058568,
Japan.
Tokyo Metropolitan Univ, Fac Syst Design, Tokyo 1910065, Japan.
RP Neo, ES (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res
Inst, Tsukuba 3058568, Japan.
EM rio.neo@aist.go.jp; Kazuhito.Yokoi@aist.go.jp; s.kajita@aist.go.jp;
ktanie@cc.tmit.ac.jp
RI Yokoi, Kazuhito/K-2046-2012; Kajita, Shuuji/M-5010-2016
OI Yokoi, Kazuhito/0000-0003-3942-2027; Kajita, Shuuji/0000-0001-8188-2209
CR Baerlocher P, 2004, VISUAL COMPUT, V20, P402, DOI 10.1007/s00371-004-0244-4
BOULIC R, 1997, P INT C ADV ROB ICAR, P589
Brooks R., 2004, INT J HUM ROBOT, V1, P1
Brooks VB, 1986, NEURAL BASIS MOTOR C
Chaumette F, 2001, IEEE T ROBOTIC AUTOM, V17, P719, DOI 10.1109/70.964671
Choi SI, 2000, ROBOTICA, V18, P143, DOI 10.1017/S0263574799001861
Dillmann R, 2004, ROBOT AUTON SYST, V47, P109, DOI 10.1016/j.robot.2004.03.005
Gienger M., 2005, P IEEE RAS INT C HUM, P238
HASUNUMA H, 2005, P IEEE RAS RSJ INT C, P245
INOUE K, 2002, P IEEE INT C ROB AUT, V3, P2259
Ishiwata Y., 1998, P 16 ANN C ROB SOC J, P355
Kagami S., 2000, P 4 INT WORKSH ALG F, P329
KAJITA F, 2003, P IEEE RSJ INT C INT, V2, P1644
Kajita S., 2002, P IEEE INT C ROB AUT, V1, P31
KANEKO K, 2004, P IEEE 2004 INT C RO, V2, P1083
KAWAUCHI N, 2003, P IEEE INT C ROB AUT, V3, P2973
KHATIB O, 1987, IEEE T ROBOTIC AUTOM, V3, P43, DOI 10.1109/JRA.1987.1087068
Khatib O., 2004, INT J HUM ROBOT, V1, P29, DOI [10.1142/S0219843604000058, DOI
10.1142/S0219843604000058]
KURIHARA K, 2002, P ANN C ROB MECH
Kuroki Y., 2003, P IEEE RSJ INT C INT, V2, P1394
MACIEJEWSKI AA, 1990, IEEE COMPUT GRAPH, V10, P63, DOI 10.1109/38.55154
Marchand E., 1996, P IEEE RSJ INT C INT, V3, P1083
Nakamura Y, 1991, ADV ROBOTICS REDUNDA
NAKAOKA S, 2005, P INT C INT ROB SYST, P3157
Neo ES, 2005, IEEE-ASME T MECH, V10, P546, DOI 10.1109/TMECH.2005.856112
Neo E.S., 2006, P IEEE RAS INT C HUM, P327
NISHIWAKI K, 2001, P IEEE INT C ROB AUT, V4, P4110
NISHIWAKI K, 2004, P IEEE RAS RSJ INT C, V2, P672
NISHIYAMA T, 2003, P IEEE INT C ROB AUT, V3, P2979
OKADA K, 2005, P IEEE INT C MECH AU, V4, P1772
Pollard N S, 2002, P IEEE INT C ROB AUT, V2, P1390
Sakagami Y., 2002, P IEEE RSJ INT C INT, V3, p2478 , DOI DOI
10.1109/IRDS.2002.1041641
SAWASAKI N, 2003, P IEEE INT C ROB AUT, V3, P2992
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
Sentis L., 2004, P 4 IEEE RAS INT C H, V2, P764, DOI DOI
10.1109/ICHR.2004.1442684
STILMAN M, 2006, P IEEE RSJ INT C INT, P820
SUGIHARA T, 2002, P IEEE RSJ INT C INT, V3, P2575
Tachi S, 2003, ADV ROBOTICS, V17, P199, DOI 10.1163/156855303764018468
TAKUBO T, 2004, P IEEE RSJ INT C INT, V1, P509
TEVATIA G, 2000, P IEEE INT C ROB AUT, V1, P294, DOI DOI
10.1109/ROBOT.2000.844073
UDE A, 2000, P IEEE RAS RSJ INT C
VUKOBRAT.M, 1969, IEEE T BIO-MED ENG, VBM16, P1, DOI 10.1109/TBME.1969.4502596
WILLIAMS RL, 1994, P IEEE INT C ROB AUT, V2, P992
Yamane K, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P855, DOI 10.1109/ROBOT.2002.1013464
Yokoi K., 2001, P IEEE RAS INT C HUM, P259
YOKOI K, 2006, P INT S EXPT ROB
NR 46
TC 29
Z9 32
U1 1
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD AUG
PY 2007
VL 23
IS 4
BP 763
EP 775
DI 10.1109/TRO.2007.903818
PG 13
WC Robotics
SC Robotics
GA 199BN
UT WOS:000248671300011
DA 2018-01-22
ER

PT J
AU Nakaoka, S
Nakazawa, A
Kanehiro, F
Kaneko, K
Morisawa, M
Hirukawa, H
Ikeuchi, K
AF Nakaoka, Shin'ichiro
Nakazawa, Atsushi
Kanehiro, Fumio
Kaneko, Kenji
Morisawa, Mitsuharu
Hirukawa, Hirohisa
Ikeuchi, Katsushi
TI Learning from observation paradigm: Leg task models for enabling a biped
humanoid robot to imitate human dances
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article
DE learning from observation; imitation; biped humanoid robot; motion
capture; entertainment robotics
AB This paper proposes a framework that achieves the Learning from Observation
paradigm for learning dance motions. The framework enables a humanoid robot to
imitate dance motions captured from human demonstrations. This study especially
focuses on leg motions to achieve a novel attempt in which a biped-type robot
imitates not only upper body motions but also leg motions including steps. Body
differences between the robot and the original dancer make the problem difficult
because the differences prevent the robot from straightforwardly following the
original motions and they also change the dynamic body balance. We propose leg task
models, which play a key role in solving the problem. Low-level tasks in leg motion
are modelled so that they clearly provide essential information required for
keeping dynamic stability and important motion characteristics. The models divide
the problem of adapting motions into the problem of recognizing a sequence of the
tasks and the problem of executing the task sequence. We have developed a method
for recognizing the tasks from captured motion data and a method for generating the
motions of the tasks that can be executed by existing robots including HRP-2. HRP-2
successfully performed the generated motions, which imitated a traditional folk
dance performed by human dancers.
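
The task-recognition step described above can be caricatured as labelling each
captured frame as a stance or stepping task from simple foot-height and foot-speed
thresholds, as in the hypothetical sketch below; the paper's leg task models are
considerably richer.

import numpy as np

def recognise_leg_tasks(foot_height, foot_speed, h_thresh=0.02, v_thresh=0.05):
    # Label each frame: STEP when the foot is lifted or moving fast, STAND otherwise.
    tasks = []
    for h, v in zip(foot_height, foot_speed):
        tasks.append("STEP" if (h > h_thresh or v > v_thresh) else "STAND")
    return tasks

t = np.linspace(0, 2 * np.pi, 60)
height = np.clip(0.05 * np.sin(t), 0.0, None)   # toy foot-height profile from motion capture
speed = np.abs(np.gradient(height, t))
print(recognise_leg_tasks(height, speed)[:10])
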
C1 Univ Tokyo, Inst Ind Sci, Meguro Ku, Tokyo 1538505, Japan.
Osaka Univ, Cybermedia Ctr, Osaka 5600043, Japan.
Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst, Tsukuba, Ibaraki
3058568, Japan.
RP Nakaoka, S (reprint author), Univ Tokyo, Inst Ind Sci, Meguro Ku, 4-6-1 Komaba,
Tokyo 1538505, Japan.
EM nakaoka@cvl.iis.u-tokyo.ac.jp; nakazawa@cmc.osaka-u.ac.jp;
f-kanehiro@aist.go.jp; k.kaneko@aist.go.jp; m.morisawa@aist.go.jp;
hiro.hirukawa@aist.go.jp; ki@cvl.iis.u-tokyo.ac.jp
RI Morisawa, Mitsuharu/M-3327-2016; KANEKO, Kenji/M-5360-2016; Hirukawa,
Hirohisa/B-4209-2017; Kanehiro, Fumio/L-8660-2016; Nakaoka,
Shin'ichiro/M-5396-2016
OI Morisawa, Mitsuharu/0000-0003-0056-4335; KANEKO,
Kenji/0000-0002-1888-8787; Hirukawa, Hirohisa/0000-0001-5779-011X;
Kanehiro, Fumio/0000-0002-0277-3467; Nakaoka,
Shin'ichiro/0000-0002-2346-1251
CR IKEMOTO L, 2006, P 2006 S INT 3D GRAP, P49
IKEUCHI K, 1994, IEEE T ROBOTIC AUTOM, V10, P368, DOI 10.1109/70.294211
Kagami S., 2000, P 4 INT WORKSH ALG F
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kang SB, 1997, IEEE T ROBOTIC AUTOM, V13, P81, DOI 10.1109/70.554349
Kosuge K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P3459
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
Kuroki Y, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1394
KUROKI Y, 2003, P 2003 IEEE INT C RO, P471
NAKAZAWA A, 2002, P 8 INT C VIRT SYST, P952
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
Pollard NS, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1390, DOI 10.1109/ROBOT.2002.1014737
RILEY M, 2000, P 2000 WORKSH INT RO, P35
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Shin HJ, 2001, ACM T GRAPHIC, V20, P67, DOI 10.1145/502122.502123
Takamatsu J, 2006, IEEE T ROBOT, V22, P65, DOI 10.1109/TRO.2005.855988
Tamiya Y., 1999, J ROBOTICS SOC JAPAN, V17, P268
VUKOBRATOVIC M, 1990, BIPED LOCOMOTION DYN, V7
Yamane K, 2003, IEEE T ROBOTIC AUTOM, V19, P421, DOI 10.1109/TRA.2003.810579
YAMANE K, 2003, P IEEE INT C ROB AUT, P3834
Yokoi K., 2001, P IEEE RAS INT C HUM, P259
NR 23
TC 60
Z9 61
U1 2
U2 10
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD AUG
PY 2007
VL 26
IS 8
BP 829
EP 844
DI 10.1177/0278364907079430
PG 16
WC Robotics
SC Robotics
GA 197GV
UT WOS:000248543500004
DA 2018-01-22
ER

PT J
AU Seyama, J
Nagayama, RS
AF Seyama, Jun'ichiro
Nagayama, Ruth S.
TI The Uncanny Valley: Effect of realism on the impression of artificial
human faces
SO PRESENCE-TELEOPERATORS AND VIRTUAL ENVIRONMENTS
LA English
DT Article
ID HUMAN-ROBOT INTERACTION; HEAD ORIENTATION; GAZE; PERCEPTION; DIRECTION;
ATTENTION
AB Roboticists believe that people will have an unpleasant impression of a humanoid
robot that has an almost, but not perfectly, realistic human appearance. This is
called the uncanny valley, and is not limited to robots, but is also applicable to
any type of human-like object, such as dolls, masks, facial caricatures, avatars
in virtual reality, and characters in computer graphics movies. The present study
investigated the uncanny valley by measuring observers' impressions of facial
images whose degree of realism was manipulated by morphing between artificial and
real human faces. Facial images yielded the most unpleasant impressions when they
were highly realistic, supporting the hypothesis of the uncanny valley. However,
the uncanny valley was confirmed only when morphed faces had abnormal features such
as bizarre eyes. These results suggest that to have an almost perfectly realistic
human appearance is a necessary but not a sufficient condition for the uncanny
valley. The uncanny valley emerges only when there is also an abnormal feature.
C1 Univ Tokyo, Dept Psychol, Fac Letters, Bunkyo Ku, Tokyo 1130033, Japan.
RP Seyama, J (reprint author), Univ Tokyo, Dept Psychol, Fac Letters, Bunkyo Ku, 7-
3-1 Hongo, Tokyo 1130033, Japan.
EM Lseyama@mail.ecc.u-tokyo.ac.jp
RI Seyama, Jun'ichiro/D-2898-2009
CR Arita A, 2005, COGNITION, V95, pB49, DOI 10.1016/j.cognition.2004.08.001
Aylett RS, 2004, LECT NOTES COMPUT SC, V3025, P496
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Canamero L, 2001, IEEE T SYST MAN CY A, V31, P454, DOI 10.1109/3468.952719
Di Salvo C. F., 2002, P 4 C DES INT SYST P, P321
Driver J, 1999, VIS COGN, V6, P509, DOI 10.1080/135062899394920
Ekman P., 1975, UNMASKING FACE GUIDE
Fabri M., 2004, Virtual Reality, V7, P66, DOI 10.1007/s10055-003-0116-7
Fong T, 2003, ROBOT AUTON SYST, V42, P143, DOI 10.1016/S0921-8890(02)00372-X
Friesen CK, 1998, PSYCHON B REV, V5, P490, DOI 10.3758/BF03208827
Garau M, 2005, PRESENCE-TELEOP VIRT, V14, P104, DOI 10.1162/1054746053890242
HARA F, 2004, P 2004 IEEE WORKSH R, P20
Haxby JV, 2000, TRENDS COGN SCI, V4, P223, DOI 10.1016/S1364-6613(00)01482-0
Hietanen JK, 1999, NEUROREPORT, V10, P3443, DOI 10.1097/00001756-199911080-00033
Hinds PJ, 2004, HUM-COMPUT INTERACT, V19, P151, DOI 10.1207/s15327051hci1901&2_7
Hodgins JK, 1998, IEEE T VIS COMPUT GR, V4, P307, DOI 10.1109/2945.765325
Kobayashi H, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1123
Kontsevich LL, 2004, VISION RES, V44, P1493, DOI 10.1016/j.visres.2003.11.027
Langton SRH, 2000, Q J EXP PSYCHOL-A, V53, P825, DOI 10.1080/027249800410562
Minato T, 2004, LECT NOTES COMPUT SC, V3029, P424
MINATO T, 2004, P 2 INT WORKSH MAN M, P373
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Murray J. E., 2003, PERCEPTION FACES OBJ, P75
Norman D. A, 2004, EMOTIONAL DESIGN WHY
REICHARDT J, 1978, ROBOTS FACT FICTION
Rhodes G, 2001, PERCEPTION, V30, P611, DOI 10.1068/p3123
Rhodes G., 2002, FACIAL ATTRACTIVENES
Robert F., 2000, FACES
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
Seyama J, 2005, VIS COGN, V12, P103, DOI 10.1080/13506280444000111
Seyama J, 2002, PERCEPTION, V31, P1153, DOI 10.1068/p3317
Shinozawa K, 2005, INT J HUM-COMPUT ST, V62, P267, DOI
10.1016/j.ijhcs.2004.11.003
Turati C, 2004, CURR DIR PSYCHOL SCI, V13, P5, DOI 10.1111/j.0963-
7214.2004.01301002.x
Wages R, 2004, LECT NOTES COMPUT SC, V3166, P216
Walker PM, 2003, PERCEPTION, V32, P1117, DOI 10.1068/p5098
Wilson HR, 2002, VISION RES, V42, P2909, DOI 10.1016/S0042-6989(02)00362-0
WOODS S, 2004, P 2004 IEEE INT WORK, P20
YIN RK, 1969, J EXP PSYCHOL, V81, P141, DOI 10.1037/h0027474
ZEIT U, 1992, KUNSTLER MACHEN PUPP
NR 39
TC 124
Z9 125
U1 5
U2 38
PU MIT PRESS
PI CAMBRIDGE
PA ONE ROGERS ST, CAMBRIDGE, MA 02142-1209 USA
SN 1054-7460
EI 1531-3263
J9 PRESENCE-TELEOP VIRT
JI Presence-Teleoper. Virtual Env.
PD AUG
PY 2007
VL 16
IS 4
BP 337
EP 351
DI 10.1162/pres.16.4.337
PG 15
WC Computer Science, Cybernetics; Computer Science, Software Engineering
SC Computer Science
GA 197YF
UT WOS:000248592600001
DA 2018-01-22
ER

PT J
AU An, M
Taura, T
Shiose, T
AF An, Min
Taura, Toshiharu
Shiose, Takayuki
TI A study on acquiring underlying behavioral criteria for manipulator
motion by focusing on learning efficiency
SO IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND
HUMANS
LA English
DT Article
DE behavioral criteria; genetic algorithm (GA); learning system; modeling;
robot programming
AB Conventional humanoid robotic behaviors are directly programmed depending on the
programmer's personal experience. With this method, the behaviors usually appear
unnatural. It is believed that a humanoid robot can acquire new adaptive behaviors
from a human, if the robot has the criteria underlying such behaviors. The aim of
this paper is to establish a method of acquiring human behavioral criteria. The
advantage of acquiring behavioral criteria is that the humanoid robots can then
autonomously produce behaviors for similar tasks with the same behavioral criteria
but without transforming data obtained from morphologically different humans every
time for every task. In this paper, a manipulator robot learns a model behavior,
and another robot is created to perform the model behavior instead of being
performed by a person. The model robot is given some behavioral criteria, but
the learning manipulator robot does not know them and tries to infer them. In
addition, because of the difference between human and robot bodies, the body sizes
of the learning robot and the model robot are also made different. The method of
obtaining behavioral criteria is realized by comparing the efficiencies with which
the learning robot learns the model behaviors. Results from the simulation have
demonstrated that the proposed method is effective for obtaining behavioral
criteria. The proposed method, the details regarding the simulation, and the
results are presented in this paper.
C1 Kobe Univ, Grad Sch Sci & Technol, Kobe, Hyogo 6578501, Japan.
Kyoto Univ, Grad Sch Informat, Kyoto 6068501, Japan.
RP An, M (reprint author), Kobe Univ, Grad Sch Sci & Technol, Kobe, Hyogo 6578501,
Japan.
EM anwang@ma-5.scitec.kobe-u.ac.jp; taura@mech.kobe-u.ac.jp;
shiose@i.kyoto-u.ac.jp
RI Taura, Toshiharu/M-4858-2016; Totsukawa, Nobuhisa/D-2028-2017
CR AN M, 2003, P 12 IASTED INT C AP, P163
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
BAJRACHARYA M, 2001, AAAI SPRING S ROBOT
Biles J., 1994, P 1994 INT COMP MUS, P131
BILLARD A, 2000, P SOC INT AG HUM LOO, P9
Dautenhahn K, 1999, APPL ARTIF INTELL, V13, P211, DOI 10.1080/088395199117397
Donald M., 1991, ORIGINS MODERN MIND
Goldberg D. E, 1989, GENETIC ALGORITHMS S
ISHIDA R, 2001, P 28 SICE S INT SYST, P147
KINNEAR JKE, 1994, ADV GENETIC PROGRAMM, P21
Kuniyoshi Y., 1989, P INT S IND ROB, P119
Miyamoto H, 1998, NEURAL NETWORKS, V11, P1331, DOI 10.1016/S0893-6080(98)00062-8
Navon R, 1997, J COMPUT CIVIL ENG, V11, P175, DOI 10.1061/(ASCE)0887-
3801(1997)11:3(175)
SAMUEL A, 1959, IBM J RES DEV, V3, P535
Schaal S., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P288, DOI 10.1109/ROBOT.2000.844072
SHIOSE T, 2002, P SEAL SING NOV 18 2
TEVATIA G, 2000, P IEEE INT C ROB AUT, P3847
NR 17
TC 6
Z9 6
U1 0
U2 1
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855 USA
SN 1083-4427
J9 IEEE T SYST MAN CY A
JI IEEE Trans. Syst. Man Cybern. Part A-Syst. Hum.
PD JUL
PY 2007
VL 37
IS 4
BP 445
EP 455
DI 10.1109/TSMCA.2006.886352
PG 11
WC Computer Science, Cybernetics; Computer Science, Theory & Methods
SC Computer Science
GA 180UB
UT WOS:000247390500001
DA 2018-01-22
ER

PT J
AU Yamaoka, F
Kanda, T
Ishiguro, H
Hagita, N
AF Yamaoka, F.
Kanda, T.
Ishiguro, H.
Hagita, N.
TI How contingent should a lifelike robot be? The relationship between
contingency and complexity
SO CONNECTION SCIENCE
LA English
DT Article
DE human-robot interface; human-robot interaction; android science;
contingency; complexity
ID INTERACTIVE HUMANOID ROBOTS; INFANTS
AB We believe that lifelikeness is important to make it possible for a
communication robot to interact naturally with people. In this research, the
relationship between contingency and complexity in a lifelike robot was
investigated. We have developed a robot control system to experimentally control
contingency and complexity in interaction by combining a humanoid robot with a
motion-capturing system. Two independent experiments were conducted with different
levels of interaction complexity: simple interaction and complex interaction.
Experiments in the simple interaction situation indicated that subjects felt that
the robot behaved autonomously when contingency was low, and there was no
significant relationship between contingency and a lifelike impression. On the
other hand, experiments in the complex interaction situation showed that the robot
gave the subjects the impression of being more autonomous and lifelike when
contingency was high.
C1 ATR Intelligent Robot & Commun Labs, Kyoto 6190288, Japan.
RP Yamaoka, F (reprint author), ATR Intelligent Robot & Commun Labs, 2-2-2
Hikaridai,Keihanna Sci City, Kyoto 6190288, Japan.
EM yamaoka@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR Arita A, 2005, COGNITION, V95, pB49, DOI 10.1016/j.cognition.2004.08.001
BRADLEY RA, 1952, BIOMETRIKA, V39, P324, DOI 10.2307/2334029
GOETZ J, 2003, P 12 IEEE WORKSH ROB
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Hirai M, 2005, COGNITIVE BRAIN RES, V22, P301, DOI
10.1016/j.cogbrainres.2004.08.008
Ishiguro H., 2005, COGSCI 2005 WORKSH, P1
KAMASIMA M, 2004, IEEE RSJ INT C INT R, P2506
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T., 2003, INT JOINT C ART INT, P177
Kanda T., 2005, IEEE RSJ INT C INT R, P62
Kidd C, 2004, P IEEE RSJ INT C INT
Lakin JL, 2003, J NONVERBAL BEHAV, V27, P145, DOI 10.1023/A:1025389814290
MAGYAR J, 1998, INT C INF STUD
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Nakadai K., 2001, P INT JOINT C ART IN, P1425
Rakison DH, 2001, PSYCHOL BULL, V127, P209, DOI 10.1037//0033-2909.127.2.209
Sakamoto D, 2005, INT J HUM-COMPUT ST, V62, P247, DOI
10.1016/j.ijhcs.2004.11.001
SHIOMI M, 2004, IEEE RSJ INT C INT R, P1340
TRAFTON JG, 2005, IEEE T SYST MAN CYB, V25, P460
Walters M.L., 2005, P IEEE RAS INT C HUM, P450
YAMAOKA F, 2005, IEEE INT C HUM ROB H, P406
NR 21
TC 11
Z9 11
U1 0
U2 2
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0954-0091
J9 CONNECT SCI
JI Connect. Sci.
PD JUN
PY 2007
VL 19
IS 2
BP 143
EP 162
DI 10.1080/09540090701371519
PG 20
WC Computer Science, Artificial Intelligence; Computer Science, Theory &
Methods
SC Computer Science
GA 194HQ
UT WOS:000248335700003
DA 2018-01-22
ER

PT J
AU Kajita, S
Nagasaki, T
Kaneko, K
Hirukawa, H
AF Kajita, Shuuji
Nagasaki, Takashi
Kaneko, Kenji
Hirukawa, Hirohisa
TI ZMP-based biped running control - The HRP-2LR humanoid biped robot
SO IEEE ROBOTICS & AUTOMATION MAGAZINE
LA English
DT Article
DE biped robot; running control; ZMP
C1 Tokyo Inst Technol, Natl Inst Adv Ind Sci & Technol, Minist Int Trade & Ind,
CALTECH, Tokyo 152, Japan.
Univ Tsukuba, Human Resocia Co Ltd, Tsukuba, Ibaraki 305, Japan.
Kobe Univ, Kobe, Hyogo 657, Japan.
Natl Inst Adv Ind Sci & Technol, Electrotech Lab, Minist Econ Trade & Ind,
Tsukuba, Ibaraki, Japan.
Stanford Univ, Intelligent Syst Res Inst, Stanford, CA 94305 USA.
RP Kajita, S (reprint author), 1-1-1,Umezono, Tsukuba, Ibaraki 305, Japan.
EM s.kajita@aist.go.jp
RI Hirukawa, Hirohisa/B-4209-2017; KANEKO, Kenji/M-5360-2016; Kajita,
Shuuji/M-5010-2016
OI Hirukawa, Hirohisa/0000-0001-5779-011X; KANEKO,
Kenji/0000-0002-1888-8787; Kajita, Shuuji/0000-0001-8188-2209
CR AHMADI M, P 1999 ICRA, P1689
Chevallereau C, 2005, INT J ROBOT RES, V24, P431, DOI 10.1177/0278364905054929
GIENGER M, P 2001 ICRA, P4140
HIRAI K, P 1998 ICRA, P1321
HODGINS JK, P 1996 ICRA, P3271
INOUE H, 2001, P INT S ROB, P1478
Kagami S, 2002, AUTON ROBOT, V12, P71, DOI 10.1023/A:1013210909840
KAJITA S, P 2003 IROS, P1644
KAJITA S, P 2005 ICRA, P618
KAJITA S, P 2001 ICRA, P2299
KANEKO K, P 2002 ICRA, P38
KIM J, P 2004 ICRA, P623
NAGASAKA K, P 2004 ICRA, P3189
Nagasaki T.;, 2004, P IEEE RSJ INT C INT, P136
Nishiwaki K, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P1559, DOI 10.1109/IROS.2000.895195
PLAYTER RR, 1992, P IFTOMM JC INT S TH, P669
Raibert M.H., 1986, LEGGED ROBOTS BALANC
TEUKOLSKEY SA, 1993, NUMERICAL RECIPES C
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
YAMAGUCHI J, P 1999 ICRA, P368
NR 20
TC 55
Z9 55
U1 1
U2 11
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1070-9932
EI 1558-223X
J9 IEEE ROBOT AUTOM MAG
JI IEEE Robot. Autom. Mag.
PD JUN
PY 2007
VL 14
IS 2
BP 63
EP 72
DI 10.1109/MRA.2007.380655

PG 10
WC Automation & Control Systems; Robotics
SC Automation & Control Systems; Robotics
GA 187FR
UT WOS:000247833500010
DA 2018-01-22
ER

PT J
AU Ohashi, E
Aiko, T
Tsuji, T
Nishi, H
Ohnishi, K
AF Ohashi, Eijiro
Aiko, Takahiro
Tsuji, Toshiaki
Nishi, Hiroaki
Ohnishi, Kouhei
TI Collision avoidance method of humanoid robot with arm force
SO IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
LA English
DT Article
DE arm force; biped robot; collision avoidance; humanoid robot; pushing
motion; trajectory planning; walking robot
ID BIPED ROBOT
AB This paper describes a collision avoidance method for a biped robot with an
upper body. We propose a method wherein the robot stops in front of an obstacle by
generating arm force. When the robot detects the obstacle with the arm tip, it should
stop short of the obstacle to avoid a collision. Hence, we propose trajectory planning in
consideration of the pushing force of the arm. The arm force is controlled to be
generated as a function of the distance from the robot body to the obstacle. The
closer the robot approaches the obstacle, the larger the arm force becomes. As a
result, the robot can stop by utilizing the arm force. In case the obstacle is
unmovable, the robot can stop by exerting arm force. If it is movable, the robot
can continue walking by pushing it. In this paper, the linear inverted pendulum
mode (LIPM) and the idea of orbital energy are introduced, and then we extend the LIPM
and its orbital energy to account for the dynamics of the arm force. The extended
orbital energy is used to discriminate whether the robot can stop and to modify the
robot's trajectory to avoid collision.
C1 Keio Univ, Fac Sci & Technol, Dept Syst Design Engn, Yokohama, Kanagawa 2238522,
Japan.
RP Ohashi, E (reprint author), Canon Inc, Tokyo 1468501, Japan.
EM ei2ro@sum.sd.keio.ac.jp; aiko@sum.sd.keio.ac.jp;
tsuji@sum.sd.keio.ac.jp; west@sd.keio.ac.jp; ohnishi@sd.keio.ac.jp
RI Tsuji, Toshiaki/I-7784-2012; Ohnishi, Kouhei/A-6543-2011; Nishi,
Hiroaki/D-7998-2014
OI Tsuji, Toshiaki/0000-0002-4532-4514;
CR HARADA K, 2004, P IEEE INT C ROB AUT, V1, P616
Harada K., 2003, P IEEE INT C ROB AUT, V2, P1627
Harada K., 2003, P IEEE RSJ INT C INT, V1, P75
HWANG Y, 2003, P IEEE RSJ INT C INT, V2, P1901
KAJITA S, 1992, IEEE T ROBOTIC AUTOM, V8, P431, DOI 10.1109/70.149940
KAJITA S, 2004, P IEEE ICRA, V1, P629
Kajita S., 2002, P IEEE ICRA, V3, P2755
Loffler K, 2004, IEEE T IND ELECTRON, V51, P972, DOI 10.1109/TIE.2004.834948
MORISAWA M, 2001, P IEEE INT C IND EL, V3, P2184
MURAKAMI T, 1993, IEEE T IND ELECTRON, V40, P259, DOI 10.1109/41.222648
NAGASAKI T, 2004, P 2004 IEEE RSJ INT, V1, P136
Pfeiffer F., 2002, P IEEE INT C ROB AUT, VVol.3, P3129
Tan KC, 2005, IEEE T IND ELECTRON, V52, P906, DOI 10.1109/TIE.2005.847577
TERADA K, 2003, P IEEE INT C INT ROB, V2, P1382
YAMAMOTO T, 2002, P IEEE INT C INT ROB, V3, P2467
YOSHIDA H, 2001, P IEEE RSJ INT C INT, V1, P266
YOSHIDA H, 2000, P IEEE RSJ INT C INT, V3, P1924
NR 17
TC 32
Z9 32
U1 1
U2 9
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855 USA
SN 0278-0046
J9 IEEE T IND ELECTRON
JI IEEE Trans. Ind. Electron.
PD JUN
PY 2007
VL 54
IS 3
BP 1632
EP 1641
DI 10.1109/TIE.2007.894728
PG 10
WC Automation & Control Systems; Engineering, Electrical & Electronic;
Instruments & Instrumentation
SC Automation & Control Systems; Engineering; Instruments & Instrumentation
GA 178EC
UT WOS:000247203000036
DA 2018-01-22
ER
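The stop/continue criterion described in the abstract above rests on the orbital energy of the linear inverted pendulum mode (LIPM). As a rough illustration only (not the paper's exact formulation, and the arm-force extension is only sketched), for a point-mass robot with constant CoM height z_c, horizontal CoM position x measured from the support point, and gravitational acceleration g, the orbital energy

    E = \tfrac{1}{2}\dot{x}^2 - \frac{g}{2 z_c}\,x^2

is conserved during a support phase of the unperturbed LIPM: if E < 0 the CoM decelerates and reverses before passing over the support point, so the robot can come to a stop, whereas E > 0 means it passes over and keeps walking. A horizontal arm-reaction force f acting on a robot of mass m changes this energy at the rate

    \frac{\mathrm{d}E}{\mathrm{d}t} = \frac{f}{m}\,\dot{x},

so the work done by the reaction force over a step can be folded into an extended orbital energy and tested against zero to decide whether the robot can stop in front of the obstacle.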

PT J
AU Davison, AJ
Reid, ID
Molton, ND
Stasse, O
AF Davison, Andrew J.
Reid, Ian D.
Molton, Nicholas D.
Stasse, Olivier
TI MonoSLAM: Real-time single camera SLAM
SO IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
LA English
DT Article
DE autonomous vehicles; 3D/stereo scene analysis; tracking
AB We present a real-time algorithm which can recover the 3D trajectory of a
monocular camera, moving rapidly through a previously unknown scene. Our system,
which we dub MonoSLAM, is the first successful application of the SLAM methodology
from mobile robotics to the "pure vision" domain of a single uncontrolled camera,
achieving real time but drift-free performance inaccessible to Structure from
Motion approaches. The core of the approach is the online creation of a sparse but
persistent map of natural landmarks within a probabilistic framework. Our key novel
contributions include an active approach to mapping and measurement, the use of a
general motion model for smooth camera movement, and solutions for monocular
feature initialization and feature orientation estimation. Together, these add up
to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC
and camera hardware. This work extends the range of robotic systems in which SLAM
can be usefully applied, but also opens up new areas. We present applications of
MonoSLAM to real-time 3D localization and mapping for a high-performance full-size
humanoid robot and live augmented reality with a hand-held camera.
C1 Univ London Imperial Coll Sci Technol & Med, Dept Comp, London SW7 2AZ, England.
Univ Oxford, Dept Engn Sci, Robot Res Grp, Oxford OX1 3PJ, England.
Imagineer Syst Ltd, Surrey Technol Ctr, Guildford GU2 7YG, Surrey, England.
CNRS, AIST Cent 2, Japanese French Robot Lab, JRL, Tsukuba, Ibaraki 3058568,
Japan.
RP Davison, AJ (reprint author), Univ London Imperial Coll Sci Technol & Med, Dept
Comp, 180 Queens Gate, London SW7 2AZ, England.
EM ajd@doc.ic.ac.uk; ian@robots.ox.ac.uk; ndm@imagineersystems.com;
olivier.stasse@aist.go.jp
RI Stasse, Olivier/E-6220-2010
CR Ayache N., 1991, ARTIFICIAL VISION MO
Baker S, 2004, INT J COMPUT VISION, V56, P221, DOI
10.1023/B:VISI.0000011205.11775.fd
BEARDSLEY PA, 1995, FIFTH INTERNATIONAL CONFERENCE ON COMPUTER VISION,
PROCEEDINGS, P58, DOI 10.1109/ICCV.1995.466806
BETGEBREZETZ S, 1996, P IEEE INT C ROB AUT
Bosse M., 2002, P IEEE INT C IM PROC
BOSSE M, 2003, P IEEE INT C ROB AUT
BURSCHKA D, 2004, P IEEE INT C ROB AUT
CASTELLANOS JA, 1998, THESIS U ZARAGOZA SP
CHIUSO A, 2000, P 6 EUR C COMP VIS
CSORBA M, 1997, THESIS U OXFORD
Davison A., 2001, P IEEE C COMP VIS PA
Davison AJ, 2002, IEEE T PATTERN ANAL, V24, P865, DOI 10.1109/TPAMI.2002.1017615
DAVISON AJ, 1998, THESIS U OXFORD
Davison A. J., 2004, P IFAC S INT AUT VEH
Davison A. J., 1998, P 5 EUR C COMP VIS F, P809
DAVISON AJ, 2005, P 10 INT C COMP VIS
Davison A.J., 2003, P 9 INT C COMP VIS
Eade E., 2006, P IEEE C COMP VIS PA
Eustice R., 2005, P ROB SCI SYST
Fitzgibbon A. W, 1998, P EUR C COMP VIS, P311
FOXLIN E, 2002, P IEEE RSJ C INT ROB
Harris C., 1987, P 3 ALV VIS C, P233
Jin HL, 2003, VISUAL COMPUT, V19, P377, DOI 10.1007/s00371-003-0202-6
JUNG I, 2003, P 9 INT C COMP VIS
Kaneko K., 2004, P IEEE INT C ROB AUT
Karlsson N., 2005, P IEEE INT C ROB AUT
KIM JH, 2003, P IEEE INT C ROB AUT, P406
KONOLIGE K, 1999, P INT S COMP INT ROB
LEONARD JJ, 1990, THESIS U OXFORD
Lepetit Vincent, 2005, Foundations and Trends in Computer Graphics and Vision,
V1, P1, DOI 10.1561/0600000001
Lowe D. G., 1999, P INT C COMP VIS, V2, P1150, DOI DOI 10.1109/ICCV.1999.790410
MANYIKA J, 1993, THESIS U OXFORD
MCLAUCHLAN PF, 1995, P 5 INT C COMP VIS
Molton N., 2004, P 15 BRIT MACH VIS C
MOLTON ND, 2004, P AS C COMP VIS
Montemerlo M., 2002, P AAAI NAT C ART INT
Montiel J., 2006, P ROBOTICS SCI SYSTE
MOUTARLIER P, 1989, P INT S ROB RES
NEIRA J, 1997, P INT S INT ROB S
Newman P, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1802, DOI 10.1109/ROBOT.2002.1014803
Newman P., 2005, P IEEE INT C ROB AUT
NEWMAN PM, 2003, P INT JOINT C ART IN
Newman P. M., 1999, THESIS U SYDNEY
NISTER D, 2004, P IEEE C COMP VIS PA
Pollefeys M, 1998, P 6 INT C COMP VIS B, P90
Rahimi A, 2001, EIGHTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOL I,
PROCEEDINGS, P315, DOI 10.1109/ICCV.2001.937535
Sabe K., 2004, P IEEE INT C ROB AUT
Shi J., 1994, P IEEE C COMP VIS PA, P593, DOI DOI 10.1109/CVPR.1994.323794
SIM R, 2005, P IJCAI WORKSH REAS
Smith R., 1987, P 4 INT S ROB RES
SOLA J, 2005, P IEEE RSJ C INT ROB
Swaminathan R, 2000, IEEE T PATTERN ANAL, V22, P1172, DOI 10.1109/34.879797
TAKAOKA Y, 2004, P IEEE INT C SYST MA, P4444
Thrun S, 2000, P IEEE INT C ROB AUT
NR 54
TC 1013
Z9 1062
U1 55
U2 388
PU IEEE COMPUTER SOC
PI LOS ALAMITOS
PA 10662 LOS VAQUEROS CIRCLE, PO BOX 3014, LOS ALAMITOS, CA 90720-1314 USA
SN 0162-8828
EI 1939-3539
J9 IEEE T PATTERN ANAL
JI IEEE Trans. Pattern Anal. Mach. Intell.
PD JUN
PY 2007
VL 29
IS 6
BP 1052
EP 1067
DI 10.1109/TPAMI.2007.1049
PG 16
WC Computer Science, Artificial Intelligence; Engineering, Electrical &
Electronic
SC Computer Science; Engineering
GA 155TJ
UT WOS:000245600800010
PM 17431302
DA 2018-01-22
ER
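MonoSLAM keeps the camera pose and a sparse set of landmark positions in one joint state estimated with an extended Kalman filter. The snippet below is only a generic EKF predict/update skeleton, not the authors' implementation; the motion model f, measurement model h, their Jacobian functions F_jac and H_jac, and the noise covariances Q and R are placeholders to be supplied by the caller.

    import numpy as np

    def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
        """One generic EKF cycle: predict with a motion model, correct with a measurement.

        x, P : state mean and covariance (camera pose plus landmarks in a SLAM filter)
        u, z : motion input and measurement vector for this frame
        f, h : motion and measurement models; F_jac, H_jac return their Jacobians
        Q, R : process and measurement noise covariances
        """
        # Prediction: propagate the mean through the motion model and inflate the covariance.
        x_pred = f(x, u)
        F = F_jac(x, u)
        P_pred = F @ P @ F.T + Q

        # Update: fuse the measurement through the Kalman gain.
        H = H_jac(x_pred)
        y = z - h(x_pred)                      # innovation
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

In a filter of this kind the covariance P couples the camera with every mapped landmark, and the update cost grows quickly with map size, which is one reason the abstract stresses a sparse but persistent map.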

PT J
AU Motoi, N
Ikebe, M
Ohnishi, K
AF Motoi, Naoki
Ikebe, Motomi
Ohnishi, Kouhei
TI Real-time gait planning for pushing motion of humanoid robot
SO IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
LA English
DT Article
DE biped robot; humanoid robot; pushing motion; reaction force; real-time
gait planning; unknown object; walking robot; zero moment point (ZMP)
ID MANIPULATION
AB This paper describes real-time gait planning for pushing motion of humanoid
robots. This method deals with an object whose mass is not known. In order that a
humanoid robot pushes an unknown object in both single support phase and double
support phase, real-time gait planning for pushing the unknown object is proposed.
Real-time gait planning consists of zero moment point (ZMP) modification and cycle
time modification. ZMP modification compensates for the influence of the reaction
force on the ZMP. Cycle time modification adjusts the duration of the double support
phase to prevent the robot from tipping over. These modifications are calculated from
the reaction force on the arms in every cycle. With these methods, trajectory planning
for pushing an unknown object in both the single and double support phases is
calculated. Even if the parameters of the object and the friction coefficient of the
floor vary, the robot keeps walking while pushing the object. The effectiveness
of the proposed method is confirmed by a simulation and an experiment.
C1 Keio Univ, Fac Sci & Technol, Dept Syst Design Engn, Yokohama, Kanagawa 223,
Japan.
RP Motoi, N (reprint author), Keio Univ, Fac Sci & Technol, Dept Syst Design Engn,
Yokohama, Kanagawa 223, Japan.
EM motoi@sum.sd.keio.ac.jp; ikebe@sum.sd.keio.ac.jp; ohnishi@sd.keio.ac.jp
RI Ohnishi, Kouhei/A-6543-2011; Motoi, Noriko/A-9718-2009
OI Motoi, Noriko/0000-0001-7098-3311
CR HARADA K, 2004, P IEEE INT C ROB AUT, V1, P616
HARADA K, 2005, P IEEE INT C ROB AUT, P1712
HARADA K, 2004, J ROBOTICS SOC JAPAN, V22, P392
HARADA K, 2005, J ROBOTICS SOC JAPAN, V23, P752
Harada K, 2006, IEEE T ROBOT, V22, P568, DOI 10.1109/TRO.2006.870649
KAJITA S, 2005, P IEEE INT C ROB AUT, V3, P616
Kajita S., 2001, P IEEE RSJ INT C INT, V1, P239
Kaneko K, 2005, P IEEE RSJ INT C INT, P634
KONNO A, 2005, P IEEE INT C INT ROB, P2518
MORISAWA M, 2000, P 6 INT WORKSH ADV M, P537
MURAKAMI T, 1993, J ROBOTICA SOC JAPAN, V11, P131
NAGASAKI T, 2004, P 2004 IEEE RSJ INT, V1, P136
NISHIKAWA N, 1999, IEEJ T IND APPL, V119, P1507
Ohnishi K, 1996, IEEE-ASME T MECH, V1, P56, DOI 10.1109/3516.491410
Pfeiffer F., 2002, P IEEE INT C ROB AUT, VVol.3, P3129
Takubo T., 2006, J ROBOT SOC JPN, V24, P614
TERADA K, 2003, P IEEE INT C INT ROB, V2, P1382
Tsuji T., 2002, P 7 INT WORKSH ADV M, P478
YAMAMOTO T, 2002, P IEEE INT C INT ROB, V3, P2467
Yoshida E, 2005, IEEE INT CONF ROBOT, P1040
YOSHIDA H, 2001, P IEEE RSJ INT C INT, V1, P266
YOSHIDA H, 2000, P IEEE RSJ INT C INT, V3, P1924
NR 22
TC 18
Z9 19
U1 0
U2 6
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855 USA
SN 1551-3203
J9 IEEE T IND INFORM
JI IEEE Trans. Ind. Inform.
PD MAY
PY 2007
VL 3
IS 2
BP 154
EP 163
DI 10.1109/TII.2007.898469
PG 10
WC Automation & Control Systems; Computer Science, Interdisciplinary
Applications; Engineering, Industrial
SC Automation & Control Systems; Computer Science; Engineering
GA 172RN
UT WOS:000246821800008
DA 2018-01-22
ER
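The ZMP modification described in the record above accounts for the reaction force from the pushed object. As a rough cart-table illustration only (assumed here; the paper's derivation may differ), for a robot of mass m with CoM at height z_c and horizontal position x_c, pushing against a backward reaction force of magnitude F_r applied to the arms at height h, the zero moment point is approximately

    p_x \approx x_c - \frac{z_c}{g}\,\ddot{x}_c - \frac{F_r\,h}{m\,g},

so a large or varying F_r shifts the ZMP backward; the planner must then shift the CoM/ZMP reference or adjust the double-support cycle time so that p_x stays inside the support polygon.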

PT J
AU He, XY
Kojima, R
Hasegawa, O
AF He, Xiaoyuan
Kojima, Ryo
Hasegawa, Osamu
TI Developmental word grounding through a growing neural network with a
humanoid robot
SO IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS
LA English
DT Article
DE interactive learning; language; mental models; robot; word grounding
AB This paper presents an unsupervised approach to integrating speech and visual
information without using any prepared data. The approach enables a humanoid robot,
Incremental Knowledge Robot 1 (IKR1), to learn word meanings. The approach is
different from most existing approaches in that the robot learns online from audio-
visual input, rather than from stationary data provided in advance. In addition,
the robot is capable of learning incrementally, which is considered to be
indispensable to lifelong learning. A noise-robust self-organized growing neural
network is developed to represent the topological structure of unsupervised online
data. We are also developing an active-learning mechanism, called "desire for
knowledge," to let the robot select the object for which it possesses the least
information for subsequent learning. Experimental results show that the approach
raises the efficiency of the learning process. Based on audio and visual data, they
construct a mental model for the robot, which forms a basis for constructing IKR1's
inner world and builds a bridge connecting the learned concepts with current and
past scenes.
C1 Tokyo Inst Technol, Dept Computat & Intelligence & Syst Sci, Yokohama, Kanagawa
2268503, Japan.
Japan Sci & Technol Agcy, PRESTO, Saitama 3320012, Japan.
RP He, XY (reprint author), Tokyo Inst Technol, Dept Computat & Intelligence & Syst
Sci, Yokohama, Kanagawa 2268503, Japan.
EM gashouen@isl.titech.ac.jp; ryo@hi.pi.titech.ac.jp; oh@ieee.org
CR ELMAN JL, 1993, COGNITION, V48, P71, DOI 10.1016/0010-0277(93)90058-4
Gersho A., 1992, VECTOR QUANTIZATION
GORIN AL, 1999, P IEEE WORKSH SPEECH, P293
Harnad S, 1990, PHYSICA D, V42, P335, DOI DOI 10.1016/0167-2789(90)90087-6
Hsiao K. Y., 2003, P IEEE RSJ INT C INT, P928
IMAI S, 2002, SPEECH SIGNAL PROCES
Iwahashi N, 2003, INFORM SCIENCES, V156, P109, DOI [10.1016/S0020-0255(03)00167-
1, 10.1016/S0020-0255(03)00167-0]
Iwahashi N., 2004, P 13 IEEE WORKSH ROB, P437
MYERS CS, 1981, AT&T TECH J, V60, P1389, DOI 10.1002/j.1538-7305.1981.tb00272.x
Oates T., 2000, Proceedings of the Fourth International Conference on Autonomous
Agents, P227, DOI 10.1145/336595.337389
Piaget J., 1956, CHILDS CONCEPTION SP
Regier T., 1996, HUMAN SEMANTIC POTEN
Roy D, 2004, IEEE T SYST MAN CY B, V34, P1374, DOI 10.1109/TSMCB.2004.823327
Roy DK, 2002, COGNITIVE SCI, V26, P113, DOI 10.1016/S0364-0213(01)00061-1
Shen FR, 2006, NEURAL NETWORKS, V19, P90, DOI 10.1016/j.neunet.2005.04.006
SISKIND J, 2000, MODELS LANGUAGE ACQU, P121
Siskind JM, 1996, COGNITION, V61, P39, DOI 10.1016/S0010-0277(96)00728-7
Steels L, 2003, ROBOT AUTON SYST, V43, P163, DOI 10.1016/S0921-8890(02)00357-3
Steels L., 1997, P 4 EUR C ART LIF, P474
Steels L., 2002, TRANSITION LANGUAGE, P252
THRUN S, 1995, P INT JOINT C ART IN, P1217
Vogt P, 2005, ARTIF INTELL, V167, P206, DOI 10.1016/j.artint.2005.04.010
WACHSMUTH S, 2000, VIDERE J COMPUT VIS, V1, P61
Weng JY, 2001, SCIENCE, V291, P599, DOI 10.1126/science.291.5504.599
Yu C, 2004, PROCEEDING OF THE NINETEENTH NATIONAL CONFERENCE ON ARTIFICIAL
INTELLIGENCE AND THE SIXTEENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL
INTELLIGENCE, P488
NR 25
TC 4
Z9 5
U1 0
U2 2
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855 USA
SN 1083-4419
J9 IEEE T SYST MAN CY B
JI IEEE Trans. Syst. Man Cybern. Part B-Cybern.
PD APR
PY 2007
VL 37
IS 2
BP 451
EP 462
DI 10.1109/TSMCB.2006.885309
PG 12
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Computer Science, Cybernetics
SC Automation & Control Systems; Computer Science
GA 148WY
UT WOS:000245109300018
PM 17416171
DA 2018-01-22
ER
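The record above describes a noise-robust self-organized growing neural network that represents the topology of unsupervised online data. The following is only a minimal sketch of the generic grow-or-adapt rule used by such incremental networks; the fixed novelty threshold is assumed here purely for illustration, and the authors' network uses a more elaborate, noise-robust criterion.

    import numpy as np

    class GrowingNetwork:
        """Minimal online 'grow or adapt' rule: novel inputs become new prototype nodes,
        familiar inputs pull their nearest node slightly closer (simplified sketch)."""

        def __init__(self, threshold=1.0, learning_rate=0.05):
            self.threshold = threshold          # assumed fixed novelty threshold
            self.learning_rate = learning_rate
            self.nodes = []                     # list of prototype feature vectors

        def update(self, x):
            x = np.asarray(x, dtype=float)
            if len(self.nodes) < 2:
                self.nodes.append(x.copy())     # bootstrap with the first two inputs
                return
            distances = [np.linalg.norm(x - node) for node in self.nodes]
            winner = int(np.argmin(distances))
            if distances[winner] > self.threshold:
                self.nodes.append(x.copy())     # novel input: insert a new node
            else:
                # familiar input: move the winning node toward the input
                self.nodes[winner] += self.learning_rate * (x - self.nodes[winner])

Each prototype node can then be associated with co-occurring speech labels, which is in the spirit of the audio-visual association the word-grounding experiments build on.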

PT J
AU Zhang, T
Ueno, H
AF Zhang, Tao
Ueno, Haruki
TI Knowledge model-based heterogeneous multi-robot system implemented by a
software platform
SO KNOWLEDGE-BASED SYSTEMS
LA English
DT Article
DE knowledge model; heterogeneous multi-robot system; software platform
ID ARCHITECTURE; COORDINATION; ROBOTS
AB In order to build a heterogeneous multi-robot system that can be regarded as a
primitive prototype of a future symbiotic autonomous human-robot system, this paper
presents a knowledge model-based heterogeneous multi-robot system implemented by a
software platform. Using frame-based knowledge representation, a knowledge
model is constructed to describe the features of heterogeneous robots as well as
their behaviors according to human requests. The required knowledge for
constructing a heterogeneous multi-robot system can therefore be integrated in a
single model and shared by robots and users. Based on the knowledge model, the
heterogeneous multi-robot system is defined in the Software Platform for Agents and
Knowledge Management (SPAK) in XML format. With SPAK, the cooperative
operation of the heterogeneous multi-robot system can be flexibly carried
out. The proposed system not only can integrate heterogeneous robots and various
techniques for robots, but also can automatically perform human-robot interaction
and plan robot behaviors according to human requests, taking into account the robots'
different levels of intelligence. In this paper, an actual heterogeneous multi-robot
system comprising humanoid robots (Robovie, PINO), a mobile robot (Scout), and an
entertainment robot dog (AIBO) is built and the effectiveness of the proposed
system is verified by experiment. (c) 2006 Elsevier B.V. All rights reserved.
C1 Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China.
Natl Inst Informat, Intelligent Syst Res Div, Tokyo 1018430, Japan.
RP Zhang, T (reprint author), Tsinghua Univ, Dept Automat, Beijing 100084, Peoples
R China.
EM taozhang@tsinghua.edu.cn; ueno@nii.ac.jp
CR Ampornaramveth V, 2004, IEICE T INF SYST, VE87D, P886
Breazeal C, 2003, ROBOT AUTON SYST, V42, P167, DOI 10.1016/S0921-8890(02)00373-1
BRUCE B, 1975, ARTIF INTELL, V6, P327, DOI 10.1016/0004-3702(75)90020-X
Castelpietra C, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT
ROBOTS AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P1385, DOI
10.1109/IROS.2000.893214
HASANUZZAMAN M, 2004, P INT C SYST MAN CYB, P2883
Huntsberger T, 2003, IEEE T SYST MAN CY A, V33, P550, DOI
10.1109/TSMCA.2003.817398
Iocchi L, 2003, AUTON ROBOT, V15, P155, DOI 10.1023/A:1025589008533
Koller D, 1998, FIFTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-
98) AND TENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICAL INTELLIGENCE
(IAAI-98) - PROCEEDINGS, P580
Minsky Marvin, 1974, 306 MIT AI LAB
Parker LE, 1998, IEEE T ROBOTIC AUTOM, V14, P220, DOI 10.1109/70.681242
Rowley HA, 1998, IEEE T PATTERN ANAL, V20, P23, DOI 10.1109/34.655647
TAIRYOU G, 1998, THESIS TOKYO DENKI U
Turk M. A., 1991, Proceedings 1991 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition (91CH2983-5), P586, DOI 10.1109/CVPR.1991.139758
Ueno H, 2002, IEICE T INF SYST, VE85D, P657
Wang J, 1997, IEE P-CONTR THEOR AP, V144, P73, DOI 10.1049/ip-cta:19970869
Yoshida T, 2003, IEEE ASME INT C ADV, P223, DOI 10.1109/AIM.2003.1225099
YOSHIMI T, 1993, P IEEE RSJ INT C INT, P1237
NR 17
TC 6
Z9 7
U1 1
U2 13
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0950-7051
EI 1872-7409
J9 KNOWL-BASED SYST
JI Knowledge-Based Syst.
PD APR
PY 2007
VL 20
IS 3
BP 310
EP 319
DI 10.1016/j.knosys.2006.04.019
PG 10
WC Computer Science, Artificial Intelligence
SC Computer Science
GA 158HW
UT WOS:000245783400012
DA 2018-01-22
ER

PT J
AU Nishimura, Y
Kushida, K
Dohi, H
Ishizuka, M
Takeuchi, J
Nakano, M
Tsujino, H
AF Nishimura, Yoshitaka
Kushida, Kazutaka
Dohi, Hiroshi
Ishizuka, Mitsuru
Takeuchi, Johane
Nakano, Mikio
Tsujino, Hiroshi
TI Development of multimodal presentation markup language MPML-HR for
humanoid robots and its psychological evaluation
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE multimodal presentation; humanoid robot; script language; SD method
AB Animated agents that act and speak as attendants to guests on shopping web sites
are becoming increasingly popular. Inspired by this development, we propose a new
method of presentation using a humanoid robot. Humanoid presentations are effective
in a real environment because the robot can move and look around at the audience
similarly to a human presenter. We developed a simple script language for multimodal
presentations by a humanoid robot called MPML-HR, which is a descendant of the
Multimodal Presentation Markup Language (MPML) originally developed for animated
agents. MPML-HR allows many non-specialists to easily write multimodal
presentations for a humanoid robot. We further evaluated humanoid robots'
presentation ability using MPML-HR to find the difference in audience impressions
between the humanoid robot and the animated agent. Psychological evaluation was
conducted to compare the impressions of a humanoid robot's presentation with an
animated agent's presentation. Using the Semantic Differential (SD) method and
direct questioning, we measured the difference in audience impressions between an
animated agent and a humanoid robot.
C1 [Nishimura, Yoshitaka; Kushida, Kazutaka; Dohi, Hiroshi; Ishizuka, Mitsuru] Univ
Tokyo, Grad Sch Informat Sci & Technol, Bunkyo Ku, Tokyo 1138656, Japan.
[Takeuchi, Johane; Nakano, Mikio; Tsujino, Hiroshi] Honda Res Inst Japan Co Ltd,
Wako, Saitama 3510114, Japan.
RP Nishimura, Y (reprint author), Univ Tokyo, Grad Sch Informat Sci & Technol,
Bunkyo Ku, 7-3-1 Hongo, Tokyo 1138656, Japan.
EM johane.takeuchi@jp.honda-ri.com; nakano@jp.honda-ri.com;
tsujino@jp.honda-ri.com
RI Tsujino, Hiroshi/A-1198-2009
OI Tsujino, Hiroshi/0000-0001-8042-2796
CR ARAFA Y, 2004, LIFE LIKE CHARACTERS, P39
CAROLIS BD, 2004, LIFE LIKE CHARACTERS, P51
Kanda T, 2001, IEEE INT CONF ROBOT, P4166, DOI 10.1109/ROBOT.2001.933269
KANDA T, 2002, IEEE INT C ROB AUT I, P1848
Kushida Kazutaka, 2005, P 4 INT JOINT C AUT, P23
MARRIOTT A, VHML VIRTUAL HUMAN M
NAKATA T, 1997, J JAPAN ROBOTICS SOC, V15, P1068
NISHIMURA Y, 2005, P 5 IEEE RAS INT C H, P393
NOZAWA Y, 2004, P 13 IEEE INT WORKSH
OGATA T, 2000, J I SYSTEMS CONTROL, V13, P566
OKAZAKI N, 2002, P VIRT CONV CHAR APP
Ortony A., 1988, COGNITIVE STRUCTURE
PRENDINGER H, 2004, LIFE LIKE CHARACTERS, P213
Shinozawa K, 2005, INT J HUM-COMPUT ST, V62, P267, DOI
10.1016/j.ijhcs.2004.11.003
Snider J. G., 1969, SEMANTIC DIFFERENTIA
SUGANO S, 1997, J ROBOTICS SOC JAPAN, V15, P975
YAMATO J, 2003, HUMAN ROBOT DYNAMIC, V1, P37
Zong Y, 2000, INTERNATIONAL SYMPOSIUM ON MULTIMEDIA SOFTWARE ENGINEERING,
PROCEEDINGS, P359, DOI 10.1109/MMSE.2000.897236
NR 18
TC 2
Z9 2
U1 0
U2 4
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2007
VL 4
IS 1
BP 1
EP 20
DI 10.1142/S0219843607000947

PG 20
WC Robotics
SC Robotics
GA 214SY
UT WOS:000249759600001
DA 2018-01-22
ER

PT J
AU Or, J
Takanishi, A
AF Or, Jimmy
Takanishi, Atsuo
TI Effect of a flexible spine emotional belly dancing robot on human
perceptions
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE emotional belly dancing; full-body; flexible spine humanoid robot;
human-robot interaction; body postures; body language
ID POINT-LIGHT DISPLAYS; MOVEMENT
AB Recently, there has been a growing interest in human-robot interaction.
Researchers in artificial intelligence and robotics have built various types of
social robots which can express emotions through speech, facial expressions and
hand gestures. Although some of these robots are able to interact with humans in
interesting ways, they cannot move as naturally as we do because of the limited
number of degrees of freedom in their body torsos (some of them do not even have a
torso). Since we often express and perceive each other's emotions and motives at a
distance using body language alone, it would be good for the next generation of
humanoid robots to possess similar capabilities. As a first step towards this goal,
we developed a 28-DOF full-body humanoid robot as an experimental platform. Unlike
the current generation of humanoid robots, our robot has a flexible spine. This
feature is very important because counterbalancing movements of the spine are
required to maintain dynamic stability in humans and humanoid robots. Our robot can
belly dance and communicate affective motions via full-body movements. Using a
Central Pattern Generator (CPG) based controller, we generated rhythmic motions for
the arms, upper and lower bodies. We then conducted psychological experiments using
both the robot and a human actors. Statistical analyses were carried out to test
our hypotheses on human perception of affective movements. Experimental results
show that human subjects were able to perceive emotions from the robot based only
on its body motions, sometimes as reliably as from the same movements performed
by the human actor. Our robot can be used to examine the relationship between the
movement of the spine, shoulders, arms, neck and head when attempting to reproduce
affective movements. Psychologists, actors, dancers and animators can benefit from
this line of research by learning how emotions can be conveyed through body motions
and knowing how body part movements combine to communicate emotional expressions.
C1 RIKEN, Inst Phys & Chem Res, Wako, Saitama 3510198, Japan.
Waseda Univ, Humanoid Robot Inst, Shinjyuku Ku, Tokyo 1698555, Japan.
RP Or, J (reprint author), RIKEN, Inst Phys & Chem Res, 2-1 Hirosawa, Wako, Saitama
3510198, Japan.
EM drjimmyor@yahoo.ca
CR AYAMA K, 1996, P GRAPH INT 96, P222
Badler NI, 1999, COMP ANIM CONF PROC, P128, DOI 10.1109/CA.1999.781206
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Breazeal C, 2003, ADV ROBOTICS, V17, P97, DOI 10.1163/156855303321165079
Breazeal C, 2002, DESIGNING SOCIABLE R
BREAZEL C, 2004, WHO NEEDS EMOTIONS B
Brownlow S, 1997, PSYCHOL REC, V47, P411, DOI 10.1007/BF03395235
Dittrich WH, 1996, PERCEPTION, V25, P727, DOI 10.1068/p250727
EKEBERG O, 1993, BIOL CYBERN, V69, P363, DOI 10.1007/BF01185408
Ekman Paul, 1984, APPROACHES EMOTION, P319
Fitt S. S., 1988, DANCE KINESIOLOGY
ITOH K, 2004, P IEEE INT WORKSH RO, P347
KAPLAN F, 2004, INT J HUM ROBOT, P465
Kobayashi H, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P2215, DOI
10.1109/IROS.2001.976399
LIDA F, 1998, 7 IEEE INT WORKSH RO, P481
Mizuuchi I, 2003, ADV ROBOTICS, V17, P179, DOI 10.1163/156855303321165123
Mizuuchi I., 2002, P IEEE RSJ INT C INT, V3, P2527
Mizuuchi Ikuo, 2005, P 2005 IEEE RSJ INT, P692
MIZUUCHI I, 2001, P 2001 IEEE RSJ INT, V3, P2099
OR J, 2005, INT J HUM ROBOT, V2, P81
PEARSON H, 2004, ROBOT BELLY DANCER S
Pollick FE, 2001, COGNITION, V82, pB51, DOI 10.1016/S0010-0277(01)00147-0
ROLLS ET, 1990, COGNITION EMOTION, V4, P161, DOI 10.1080/02699939008410795
Russell J. A., 1997, PSYCHOL FACIAL EXPRE, P295, DOI DOI
10.1017/CBO9780511659911.015
Shibata S, 1998, INT J IND ERGONOM, V21, P483, DOI 10.1016/S0169-8141(97)00004-8
SOGON S, 1989, PSYCHOL REP, V65, P35, DOI 10.2466/pr0.1989.65.1.35
WALK RD, 1984, B PSYCHONOMIC SOC, V22, P437
WALLBOTT HG, 1991, EUR J SOC PSYCHOL, V21, P89, DOI 10.1002/ejsp.2420210107
NR 28
TC 5
Z9 5
U1 1
U2 7
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2007
VL 4
IS 1
BP 21
EP 48
DI 10.1142/S0219843607000935
PG 28
WC Robotics
SC Robotics
GA 214SY
UT WOS:000249759600002
DA 2018-01-22
ER
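The rhythmic arm, upper-body and lower-body motions mentioned in the abstract above are produced by a Central Pattern Generator (CPG) based controller. As a generic illustration only (coupled phase oscillators with assumed gains and offsets, not the authors' controller), one CPG layer can be sketched as follows.

    import numpy as np

    def cpg_step(phases, dt, omega, coupling, offsets):
        """Advance a set of coupled phase oscillators by one time step.

        phases   : current phase of each oscillator (rad)
        omega    : intrinsic frequency of each oscillator (rad/s)
        coupling : gain pulling every oscillator toward its desired offset from oscillator 0
        offsets  : desired phase offset of each oscillator relative to oscillator 0 (rad)
        """
        phases = np.asarray(phases, dtype=float)
        dphi = np.asarray(omega, dtype=float).copy()
        for i in range(len(phases)):
            # phase coupling keeps the oscillators locked at the desired offsets
            dphi[i] += coupling * np.sin(phases[0] + offsets[i] - phases[i])
        new_phases = phases + dt * dphi
        joint_setpoints = np.sin(new_phases)    # rhythmic commands for the driven joints
        return new_phases, joint_setpoints

Choosing different amplitudes and phase offsets for the spine, shoulder and arm oscillators then yields coordinated whole-body rhythms of the kind used for the dance motions.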

PT J
AU Ayaz, Y
Munawar, K
Malik, MB
Konno, A
Uchiyama, M
AF Ayaz, Yasar
Munawar, Khalid
Malik, Mohammad Bilal
Konno, Atsushi
Uchiyama, Masaru
TI Human-like approach to footstep planning among obstacles for humanoid
robots
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE footstep planning; robot motion planning; machine intelligence
AB Unlike wheeled robots, humanoid robots are able to cross obstacles by stepping
over or upon them. Conventional 2D methods for robot navigation fail to exploit
this unique ability of humanoids and thus design trajectories only by circumventing
obstacles. Recently, global algorithms have been presented that take into account
this feature of humanoids. However, due to high computational complexity, most of
them are very time consuming. In this paper, we present a new approach to footstep
planning in obstacle cluttered environments that employs a human-like strategy to
terrain traversal. Simulation results of its implementation on a model of the
Saika-3 humanoid robot are also presented. The algorithm, being reactive in
nature, refutes previous claims that reactive algorithms fail to find successful
paths in complex, obstacle-cluttered environments.
C1 Natl Univ Sci & Technol, Coll Elect & Mech Engn, Dept Elect Engn, Rawalpindi
46000, Pakistan.
Tohoku Univ, Dept Aerosp Engn, Sendai, Miyagi 9808579, Japan.
RP Ayaz, Y (reprint author), Natl Univ Sci & Technol, Coll Elect & Mech Engn, Dept
Elect Engn, Peshawar Rd, Rawalpindi 46000, Pakistan.
EM yasar@ceme.edu.pk; yasar@space.mech.tohoku.ac.jp; mbmalik@ceme.edu.pk;
konno@space.mech.tohoku.ac.jp; uchiyama@space.mech.tohoku.ac.jp
RI Munawar, Khalid/L-4828-2013
OI Malik, Mohammad Bilal/0000-0003-2850-7412
CR Ayaz Y, 2004, ROMOCO' 04: PROCEEDINGS OF THE FOURTH INTERNATIONAL WORKSHOP ON
ROBOT MOTION AND CONTROL, P73
AYAZ Y, 2005, AUTONOMOUS FOOTSTEP
AYAZ Y, 2006, P IEEE RSJ INT C INT, P5490
CHESTNUTT J, 2004, P IEEE INT C HUM ROB
Chestnutt J., 2003, P IEEE INT C HUM ROB
Chestnutt J., 2005, P IEEE INT C ROB AUT, P631
Cormen T., 1994, INTRO ALGORITHMS
CRAIG JJ, 1999, INTRO ROBOTICS
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
KAGAMI S, 2000, 7 INT S EXPT ROB ISE, P41
Konno A, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P1565, DOI 10.1109/IROS.2000.895196
Kuffner J, 2005, SPR TRA ADV ROBOT, V15, P365
Kuffner JJ, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P500, DOI
10.1109/IROS.2001.973406
Latombe J.-C., 1991, ROBOT MOTION PLANNIN
MCGHEE RB, 1979, IEEE T SYST MAN CYB, V9, P176, DOI 10.1109/TSMC.1979.4310180
NAGASAKA K, 1999, P IEEE INT C SYST MA, P6
Nishiwaki K., 2002, P INT WORKSH HUM HUM, P2
Vukobratovic M., 2004, INT J HUM ROBOT, V1, P157, DOI DOI
10.1142/S0219843604000083
Yagi M, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P375, DOI 10.1109/ROBOT.1999.770007
NR 19
TC 9
Z9 9
U1 0
U2 2
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2007
VL 4
IS 1
BP 125
EP 149
DI 10.1142/S0219843607000960
PG 25
WC Robotics
SC Robotics
GA 214SY
UT WOS:000249759600005
DA 2018-01-22
ER

PT J
AU Roccella, S
Carrozza, MC
Cappiello, G
Cabibihan, JJ
Laschi, C
Dario, P
AF Roccella, Stefano
Carrozza, Maria Chiara
Cappiello, Giovanni
Cabibihan, John-John
Laschi, Cecilia
Dario, Paolo
TI Design and development of five-fingered hands for a humanoid emotion
expression robot
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE humanoid robotics; personal robotics; robotic hand; human-robot
interaction; emotion expression
ID PROSTHESIS
AB Among social infrastructure technologies, Robot Technology (RT) is expected to
play an important role in solving the problems of the aging society. New
generations of personal robots are expected to be capable of assisting humans in a
variety of contexts and thus of interacting and communicating with them effectively
and even in a friendly and natural way. Expressing human-like emotions is an
important capability to this aim. The objectives of this work are the design and
development of two five-fingered robotic hands for a humanoid upper body able to
generate and express emotions. The specific design goals have been grasping and
expression of emotions through hand gestures, as a complement to facial expression
of emotions. The paper presents the design process of the robotic hands, named RCH-
1 (Robocasa Hand No. 1), starting from the requirements deriving from their use in
grasping and gestures. The resulting robotic hands are described in detail,
together with the hand sensory systems. Experimental trials are then presented,
aimed at assessing the hand performance and at validating their effectiveness in
grasping and emotion expression, when mounted on the emotion expression humanoid
robot WE-4R (Waseda Eye No. 4 Refined).
C1 Scuola Super Sant Anna, ARTS Lab, I-56127 Pisa, Italy.
Kogakuin Univ, Dept Mech Syst Engn, Shinjuku Ku, Tokyo, Japan.
Waseda Univ, Takanishi Lab, Shinjuku Ku, Tokyo 1698555, Japan.
RP Roccella, S (reprint author), Scuola Super Sant Anna, ARTS Lab, Piazza Martiri
Liberta 33, I-56127 Pisa, Italy.
EM s.roccella@arts.sssup.it; c.carrozza@arts.sssup.it;
g.cappiello@arts.sssup.it; j.cabibihan@arts.sssup.it;
c.laschi@arts.sssup.it; p.dario@arts.sssup.it
OI Cabibihan, John-John/0000-0001-5892-743X; Laschi,
Cecilia/0000-0001-5248-1043
CR Adams B, 2000, IEEE INTELL SYST APP, V15, P25, DOI 10.1109/5254.867909
AGNEW PJ, 1981, PROSTHET ORTHOT INT, V5, P92
BEKEY GA, 1990, DEXTROUS ROBOT HANDS, P136
Bicchi A, 2000, IEEE T ROBOTIC AUTOM, V16, P652, DOI 10.1109/70.897777
Breazeal C, 2002, DESIGNING SOCIABLE R
Butterfass J, 2001, IEEE INT CONF ROBOT, P109, DOI 10.1109/ROBOT.2001.932538
CAFFAZ A, 1998, P INT C ROB AUT, V3, P16
Carrozza MC, 2002, IEEE-ASME T MECH, V7, P108, DOI 10.1109/TMECH.2002.1011247
Chao E., 1989, BIOMECHANICS HAND
Cupo ME, 1998, J REHABIL RES DEV, V35, P431
CUTKOSKYY MR, 1985, ROBOTIC GRASPING FIN
DARIO P, 2003, IEEE INT C HUM GERM
Ekman P, 2003, ANN NY ACAD SCI, V1000, P1, DOI 10.1196/annals.1280.002
Ekman P., 1999, HDB COGNITION EMOTIO, P301, DOI DOI 10.1002/0470013494.CH16
Ekman P., 1999, HDB COGNITION EMOTIO, P45, DOI DOI 10.1002/0470013494.CH3
*EU DG EMP SOC AFF, 2004, SOC SIT EUR UN 2004
HWANG BW, 2005, P 1 INT C INT COMP H, P611
IBERALL T, 1993, P 1993 IEEE RSJ INT, P824
Infantino I, 2005, IEEE T SYST MAN CY C, V35, P42, DOI 10.1109/TSMCC.2004.840043
JACOBSEN SC, 1984, INT J ROBOT RES, V3, P21, DOI 10.1177/027836498400300402
JAFFE DL, 1994, J REHABIL RES DEV, V31, P236
JOHANSSON RS, 1980, BRAIN RES, V184, P353, DOI 10.1016/0006-8993(80)90804-5
KJAPANDJI A, 1982, PHYSL JOINTS UPPER L, V1
Kyberd P. J., 1995, IEEE Transactions on Rehabilitation Engineering, V3, P70,
DOI 10.1109/86.372895
LOVCHIK CS, 1999, P IEEE INT C ROB AUT, P10
Massa B, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P3374, DOI 10.1109/ROBOT.2002.1014232
Massa B., 2002, IEEE INT C ROB AUT I, P3374
METTA G, 2000, SAB 2000, P11
Miwa H, 2003, IEEE INT CONF ROBOT, P3588
MIWA H, 2004, IEEE RSJ INT C INT R
Miwa H., 2002, IEEE RSJ INT C INT R, P2443
National Institute of Population and Social Security Research Department of
Information Collection and Analysis, 2003, POP STAT JAP 2003
Park Ill-Woo, 2005, P IEEE RAS INT C HUM, P321
Sakagami Y., 2002, IEEE RSJ INT C INT R, P2478, DOI DOI
10.1109/IRDS.2002.1041641
Salisbury J., 1982, INT J ROBOT RES, V1, P4, DOI [10.1177/027836498200100102,
DOI 10.1177/027836498200100102]
SILCOX DH, 1993, J BONE JOINT SURG AM, V75A, P1781, DOI 10.2106/00004623-
199312000-00007
TAKANISHI A, 2004, INT ROBOT FAIR 2004
TAKANISHI A, 2000, IEEE INT C ROB AUT 2, P2243
Zecca M, 2002, Crit Rev Biomed Eng, V30, P459, DOI
10.1615/CritRevBiomedEng.v30.i456.80
ZECCA M, 2003, INT J HUMAN FRIENDLY, V4, P2
GOW DJ, Patent No. 5888246
NR 41
TC 16
Z9 16
U1 1
U2 6
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2007
VL 4
IS 1
BP 181
EP 206
DI 10.1142/S0219843607000996
PG 26
WC Robotics
SC Robotics
GA 214SY
UT WOS:000249759600007
DA 2018-01-22
ER

PT J
AU Harada, K
Kajita, S
Kanehiro, F
Fujiwara, K
Kaneko, K
Yokoi, K
Hirukawa, H
AF Harada, Kensuke
Kajita, Shuuji
Kanehiro, Fumio
Fujiwara, Kiyoshi
Kaneko, Kenji
Yokoi, Kazuhito
Hirukawa, Hirohisa
TI Real-time planning of humanoid robot's gait for force-controlled
manipulation
SO IEEE-ASME TRANSACTIONS ON MECHATRONICS
LA English
DT Article
DE biped gait; force control; gait planning; humanoid robot; manipulation;
pushing; real time
AB This paper proposes a new style of manipulation by a humanoid robot. Focusing on
the task of pushing an object, the foot placement is planned in real time according
to the result of manipulation of the object. By using the impedance control of the
arms, the humanoid robot can stably push the object regardless of the mass of the
object. If the object is heavy, the humanoid robot pushes it by walking slowly and
vice versa. Also, for planning the gait in real time, we propose a new analytical
method where a newly calculated trajectory of the robot motion is smoothly
connected to the current one. The effectiveness of the proposed method is confirmed
by simulation and experiment.
C1 AIST, Tsukuba, Ibaraki 3058568, Japan.
RP Harada, K (reprint author), AIST, Tsukuba, Ibaraki 3058568, Japan.
EM kensuke.harada@aist.go.jp; s.kajita@aist.go.jp; f-kanehiro@aist.go.jp;
k-fujiwara@aist.go.jp; k.kaneko@aist.go.jp; kazuhito.yokoi@aist.go.jp;
hiro.hirukawa@aist.go.jp
RI Yokoi, Kazuhito/K-2046-2012; Hirukawa, Hirohisa/B-4209-2017; KANEKO,
Kenji/M-5360-2016; Kajita, Shuuji/M-5010-2016; FUJIWARA,
Kiyoshi/N-7532-2016; Kanehiro, Fumio/L-8660-2016
OI Yokoi, Kazuhito/0000-0003-3942-2027; Hirukawa,
Hirohisa/0000-0001-5779-011X; KANEKO, Kenji/0000-0002-1888-8787; Kajita,
Shuuji/0000-0001-8188-2209; FUJIWARA, Kiyoshi/0000-0003-2535-7922;
Kanehiro, Fumio/0000-0002-0277-3467
CR Bernheisel JD, 2004, IEEE T AUTOM SCI ENG, V1, P163, DOI
10.1109/TASE.2004.835575
Bernheisel JD, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P3180
Harada K, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P2485, DOI 10.1109/ROBOT.2002.1013605
HARADA K, 2003, P IEEE RSJ INT C INT, P73
HARADA K, 2003, P IEEE INT C ROB AUT, P1627
Hwang Y, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1901
INOUE H, 2000, P 1 IEEE RAS INT C H
Inoue K., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE International
Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065),
P2217, DOI 10.1109/ROBOT.2000.846357
Jia YB, 1996, IEEE INT CONF ROBOT, P165, DOI 10.1109/ROBOT.1996.503590
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
Kajita S, 2001, IEEE INT CONF ROBOT, P2299, DOI 10.1109/ROBOT.2001.932965
KAJITA S, 1992, IEEE T ROBOTIC AUTOM, V8, P431, DOI 10.1109/70.149940
KANEHIRO F, 2001, P INT S ROB RES LORN
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
KURAZUME R, 2003, P IEEE INT C ROB AUT, P925
LIM H, 2002, P IEEE INT C ROB AUT, P3111
Lynch KM, 1996, INT J ROBOT RES, V15, P533, DOI 10.1177/027836499601500602
LYNCH KM, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P2269, DOI 10.1109/ROBOT.1992.219921
Mason MT, 1985, ROBOT HANDS MECH MAN
MASON MT, 1986, P 6 S THEOR PRACT RO, P321
Nishihama Y, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1914
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
PESHKIN MA, 1988, IEEE T ROBOTIC AUTOM, V4, P569, DOI 10.1109/56.9297
Yokokohji Y, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1856, DOI 10.1109/ROBOT.2002.1014811
Yokoyama K., 2003, P IEEE INT C ROB AUT, P2985
NR 25
TC 36
Z9 38
U1 0
U2 7
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1083-4435
EI 1941-014X
J9 IEEE-ASME T MECH
JI IEEE-ASME Trans. Mechatron.
PD FEB
PY 2007
VL 12
IS 1
BP 53
EP 62
DI 10.1109/TMECH.2006.886254
PG 10
WC Automation & Control Systems; Engineering, Manufacturing; Engineering,
Electrical & Electronic; Engineering, Mechanical
SC Automation & Control Systems; Engineering
GA 136IR
UT WOS:000244216300006
DA 2018-01-22
ER
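The pushing behaviour in the record above relies on impedance control of the arms, so that the contact force stays bounded regardless of the object's mass. The following one-axis, discrete-time sketch uses assumed virtual mass, damping and stiffness values and is not the paper's controller; it only illustrates how a measured contact force moves the commanded hand position away from its reference.

    def impedance_update(x_cmd, v_cmd, x_ref, f_ext, dt, m_d=2.0, d_d=40.0, k_d=200.0):
        """One step of a 1-DOF impedance law:
            m_d * a + d_d * v_cmd + k_d * (x_cmd - x_ref) = f_ext

        x_cmd, v_cmd : current compliant position command and its velocity
        x_ref        : planned reference position of the hand along the pushing axis
        f_ext        : measured external force on the hand (negative when the object
                       pushes back against the robot's forward push)
        m_d, d_d, k_d: assumed virtual mass, damping and stiffness
        """
        a = (f_ext - d_d * v_cmd - k_d * (x_cmd - x_ref)) / m_d   # virtual-mass acceleration
        v_cmd = v_cmd + a * dt
        x_cmd = x_cmd + v_cmd * dt
        return x_cmd, v_cmd

A heavier object produces a larger opposing force, so the compliant hand command lags further behind the reference and the gait planner correspondingly slows the walk, consistent with the behaviour described in the abstract.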

PT J
AU Hirose, M
Ogawa, K
AF Hirose, Masato
Ogawa, Kenichi
TI Honda humanoid robots development
SO PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL
AND ENGINEERING SCIENCES
LA English
DT Article
DE humanoid robots; biped walking; applications
AB Honda has been doing research on robotics since 1986 with a focus upon bipedal
walking technology. The research started with straight and static walking of the
first prototype two-legged robot. Now, the continuous transition from walking in a
straight line to making a turn has been achieved with the latest humanoid robot
ASIMO. ASIMO is the most advanced robot of Honda so far in the mechanism and the
control system. ASIMO's configuration allows it to operate freely in the human
living space. It could be of practical help to humans with its five-fingered arms as
well as its walking function. The target of further development of
ASIMO is to develop a robot to improve life in human society. Much development work
will be continued both mechanically and electronically, staying true to Honda's
'challenging spirit'.
C1 Honda Res & Dev Co Ltd, Wako Res Ctr, Wako, Saitama 3510114, Japan.
Honda Engn Co Ltd, Wako Res Ctr, Wako, Saitama 3510114, Japan.
RP Hirose, M (reprint author), Honda Res & Dev Co Ltd, Wako Res Ctr, 8-1 Honcho,
Wako, Saitama 3510114, Japan.
EM masato_hirose@n.w.rd.honda.co.jp
NR 0
TC 134
Z9 139
U1 4
U2 55
PU ROYAL SOC
PI LONDON
PA 6-9 CARLTON HOUSE TERRACE, LONDON SW1Y 5AG, ENGLAND
SN 1364-503X
J9 PHILOS T R SOC A
JI Philos. Trans. R. Soc. A-Math. Phys. Eng. Sci.
PD JAN 15
PY 2007
VL 365
IS 1850
BP 11
EP 19
DI 10.1098/rsta.2006.1917
PG 9
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 115ZG
UT WOS:000242771700002
PM 17148047
OA gold
DA 2018-01-22
ER

PT J
AU Fujita, M
AF Fujita, M.
TI How to make an autonomous robot as a partner with humans: design
approach versus emergent approach
SO PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL
AND ENGINEERING SCIENCES
LA English
DT Article
DE entertainment robot; pet-type robot; humanoid; intelligence dynamics
ID BEHAVIOR; REPRESENTATION
AB In this paper, we discuss what factors are important to realize an autonomous
robot as a partner with humans. We believe that it is important to interact with
people without boring them, using verbal and non-verbal communication channels. We
have already developed autonomous robots such as AIBO and QRIO, whose behaviours
are manually programmed and designed. We realized, however, that this design
approach has limitations; therefore we propose a new approach, intelligence
dynamics, where interacting in a real-world environment using embodiment is
considered very important. There are pioneering works related to this approach from
brain science, cognitive science, robotics and artificial intelligence. We assert
that it is important to study the emergence of entire sets of autonomous behaviours
and present our approach towards this goal.
C1 Sony Corp, Informat Technol Labs, Shinagawa Ku, Tokyo 1410001, Japan.
RP Fujita, M (reprint author), Sony Corp, Informat Technol Labs, Shinagawa Ku, 6-7-
35 Kitashinagawa, Tokyo 1410001, Japan.
EM mfujita@pdp.crl.sony.co.jp
CR Arkin RC, 2003, ROBOT AUTON SYST, V42, P191, DOI 10.1016/S0921-8890(02)00375-5
Arkin RC, 1997, J EXP THEOR ARTIF IN, V9, P175, DOI 10.1080/095281397147068
Asada M, 2001, ROBOT AUTON SYST, V37, P185, DOI 10.1016/S0921-8890(01)00157-9
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
BROOKS RA, 1991, ARTIF INTELL, V47, P139, DOI 10.1016/0004-3702(91)90053-M
BROOKS RA, 1986, J ROBOTICS AUTOMATIO, V2, P14
BROOKS RA, 1991, P INT JOINT C ART IN, V91, P569
CHARNIAK E, 1985, INTRO ARTIFICIAL INT
CSIKSZENTMIHALYI M., 1990, FLOW PSYCHOL OPTIMAL
Ekman P., 1994, NATURE EMOTION
Fujita M, 2001, INT J ROBOT RES, V20, P781, DOI 10.1177/02783640122068092
Fujita M, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P960
Fujita M, 2001, ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P383, DOI
10.1109/ROMAN.2001.981934
Fujita M, 1999, ROBOT AUTON SYST, V29, P119, DOI 10.1016/S0921-8890(99)00047-0
Fujita M., 1997, Proceedings of the First International Conference on Autonomous
Agents, P435, DOI 10.1145/267658.267764
Fujita M, 1998, AUTON ROBOT, V5, P7, DOI 10.1023/A:1008856824126
Fujita M, 2003, IEEE ASME INT C ADV, P938
HARNAD S, 1990, PHYSICA D, V42, P335, DOI 10.1016/0167-2789(90)90087-6
Haruno M, 2001, NEURAL COMPUT, V13, P2201, DOI 10.1162/089976601750541778
HORNBY GS, 1999, P GEN EV COMP C, P1297
HOSHINO Y, 2004, P ICRA 2004, P4165
Inamura T, 2004, INT J ROBOT RES, V23, P363, DOI 10.1177/0278364904042199
Ishida T, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1079, DOI
10.1109/IROS.2001.976312
IWAHASHI N, 1999, INF PROCESS SOC JPN
KAPLAN F, 2000, P CELE 20 WORKSH INT
Kawato M, 1999, CURR OPIN NEUROBIOL, V9, P718, DOI 10.1016/S0959-4388(99)00028-8
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
MATARIC MJ, 1992, IEEE T ROBOTIC AUTOM, V8, P304, DOI 10.1109/70.143349
Osgood C., 1957, MEASUREMENT MEANING
Pfeifer R., 1999, UNDERSTANDING INTELL
Reed Edward S, 1997, SOUL MIND EMERGENCE
Rizzolatti G, 1996, COGNITIVE BRAIN RES, V3, P131, DOI 10.1016/0926-
6410(95)00038-0
ROY D, 1998, P INT C SPOK LANG PR
SABE KA, 2005, P INT WORKSH INT DYN
SAWADA T, 2004, P IEEE RSJ INT C INT, P2415
SAWADA T, 2004, P 4 IEEE RAS INT C H, V1, P450
Tanaka F., 2004, P IFIP INT C ENT COM, P499
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
VARELA F., 1991, EMBODIED MIND
NR 39
TC 6
Z9 6
U1 0
U2 3
PU ROYAL SOC
PI LONDON
PA 6-9 CARLTON HOUSE TERRACE, LONDON SW1Y 5AG, ENGLAND
SN 1364-503X
EI 1471-2962
J9 PHILOS T R SOC A
JI Philos. Trans. R. Soc. A-Math. Phys. Eng. Sci.
PD JAN 15
PY 2007
VL 365
IS 1850
BP 21
EP 47
DI 10.1098/rsta.2006.1923
PG 27
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 115ZG
UT WOS:000242771700003
PM 17148048
OA gold
DA 2018-01-22
ER

PT J
AU Lim, HO
Takanishi, A
AF Lim, Hun-Ok
Takanishi, Atsuo
TI Biped walking robots created at Waseda University: WL and WABIAN family
SO PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL
AND ENGINEERING SCIENCES
LA English
DT Article
DE biped walking robot; stability; walking pattern; zero moment point
compensatory motion; parallel mechanism
AB This paper proposes the mechanism and control of the biped humanoid robots
WABIAN-RIV and WL-16. WABIAN-RIV has 43 mechanical degrees of freedom (d.f.): 6
d.f. in each leg, 7 d.f. in each arm, 3 d.f. in each hand, 2 d.f. in each eye, 4
d.f. in the neck and 3 d.f. in the waist. Its height is about 1.89 m and its total
weight is 127 kg. It has a vision system and a voice recognition system to mimic
some of the capabilities of the human senses. WL-16 consists of a pelvis and two
legs having six 1 d.f. active linear actuators. An aluminium chair is mounted on
two sets of its telescopic poles. To reduce the large support forces during the
support phase, a support torque reduction mechanism is developed, which is composed
of two compression gas springs with different stiffness. For the stability of the
robots, a compensatory motion control algorithm is developed. This control
compensates for moments generated by the motion of the lower limbs, using the
motion of the trunk and the waist that is obtained by the zero moment point concept
and fast Fourier transform. WABIAN-RIV is able to walk forwards, backwards and
sideways, dance, carry heavy goods and express emotion, etc. WL-16 can move
forwards, backwards and sideways while carrying an adult weighing up to 60 kg.
C1 Kanagawa Univ, Fac Engn, Dept Mech Engn, Kanagawa Ku, Yokohama, Kanagawa
2218686, Japan.
Waseda Univ, Humanoid Robot Inst, Dept Mech Engn, Shinjuku Ku, Tokyo 1698555,
Japan.
RP Lim, HO (reprint author), Kanagawa Univ, Fac Engn, Dept Mech Engn, Kanagawa Ku,
3-27-1 Rokkakubashi, Yokohama, Kanagawa 2218686, Japan.
EM holim@ieee.org
CR FURUTA T, 2000, CD ROM P IEEE RAS IN
HIROSE M, 2001, P ADV SCI I 2001, P1
Ishida T, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1079, DOI
10.1109/IROS.2001.976312
KAGAMI S, 2001, P 2001 IEEE RAS INT, P253
KANEHIRO F, 2003, P IEEE INT C ROB AUT, P1633
Kato I., 1973, P CISM IFTOMM S THEO, P12
Li Q., 1992, Proceedings. Sixth International Parallel Processing Symposium
(Cat. No.92TH0419-2), P597, DOI 10.1109/IPPS.1992.223000
Lim H, 2000, 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, P1334, DOI 10.1109/IROS.2000.893206
LIM H, 2004, INT J HUM FRIENDL WE, V5, P26
LIM H, 2000, P IEEE RSJ INT C INT, P191
LIM H, 2001, P INT S ROB SEOUL KO, P1551
LIM H, 2002, INT J HUM FRIENDLY W, V5, P26
Lim HO, 2004, ADV ROBOTICS, V18, P415, DOI 10.1163/156855304773822491
Setiawan S, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P361, DOI 10.1109/ROBOT.1999.770005
Sugahara Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2658, DOI 10.1109/IRDS.2002.1041671
Takanishi A., 1985, Proceedings of '85 International Conference on Advanced
Robotics, P459
Takanishi A., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P323, DOI 10.1109/IROS.1990.262408
TAKANISHI A, 1991, RSJ ANN C ROB SOC JA, P321
TAKANISHI A, 1988, P CISM IFTOMM S THEO, P68
YAMAGUCHI J, 1995, IEEE INT CONF ROBOT, P2892, DOI 10.1109/ROBOT.1995.525694
NR 20
TC 27
Z9 30
U1 3
U2 7
PU ROYAL SOC
PI LONDON
PA 6-9 CARLTON HOUSE TERRACE, LONDON SW1Y 5AG, ENGLAND
SN 1364-503X
EI 1471-2962
J9 PHILOS T R SOC A
JI Philos. Trans. R. Soc. A-Math. Phys. Eng. Sci.
PD JAN 15
PY 2007
VL 365
IS 1850
BP 49
EP 64
DI 10.1098/rsta.2006.1920
PG 16
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 115ZG
UT WOS:000242771700004
PM 17148049
OA gold
DA 2018-01-22
ER

PT J
AU Hirukawa, H
AF Hirukawa, Hirohisa
TI Walking biped humanoids that perform manual labour
SO PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL
AND ENGINEERING SCIENCES
LA English
DT Article
DE humanoid robots; biped walking; applications
AB The Humanoid Robotics Project of the Ministry of Economy, Trade and Industry of
Japan realized that biped humanoid robots can perform manual labour. The project
developed humanoid robot platforms, consisting of humanoid robot hardware and a
package of fundamental software, and explored applications of humanoid robots on
them. The applications include maintenance tasks of industrial plants,
teleoperation of industrial vehicles, cooperative tasks with a human, guarding the
home and office and the care of patients in beds.
C1 Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki 3058568, Japan.
RP Hirukawa, H (reprint author), Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki
3058568, Japan.
EM hiro.hirukawa@aist.go.jp
RI Hirukawa, Hirohisa/B-4209-2017
OI Hirukawa, Hirohisa/0000-0001-5779-011X
CR FUJIWARA K, 2002, P IROS
INOUE H, 2001, P 2 IEEE RAS INT C H
INOUE H, 2001, P 32 ISR
INOUE H, 2000, P 1 IEEE RAS INT C H
Ishiwata Y., 1998, P 16 ANN C ROB SOC J, P355
Kajita S, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P239, DOI
10.1109/IROS.2001.973365
KANEHIRO F, 2003, P IEEE INT C ROB AUT
KANEHIRO F, 2002, P IEEE INT C ROB AUT
KANEKO K, 2002, P IEEE INT C ROB AUT
KANEKO K, 2002, P IEEE IROS
Oka T, 1996, IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL CONFERENCE
ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING WITH DYNAMIC
WORLDS, VOLS 1-3, P178, DOI 10.1109/IROS.1996.570657
Stasse O., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P81, DOI 10.1109/ROBOT.2000.844043
Sumi Y, 2002, INT J COMPUT VISION, V46, P5, DOI 10.1023/A:1013240031067
Yamane K., 1999, Proceedings 1999 IEEE International Conference on Robotics and
Automation (Cat. No.99CH36288C), P714, DOI 10.1109/ROBOT.1999.770059
YOKOI K, 2001, P IEEE RAS INT C HUM
NR 15
TC 5
Z9 5
U1 0
U2 1
PU ROYAL SOC
PI LONDON
PA 6-9 CARLTON HOUSE TERRACE, LONDON SW1Y 5AG, ENGLAND
SN 1364-503X
EI 1471-2962
J9 PHILOS T R SOC A
JI Philos. Trans. R. Soc. A-Math. Phys. Eng. Sci.
PD JAN 15
PY 2007
VL 365
IS 1850
BP 65
EP 77
DI 10.1098/rsta.2006.1916
PG 13
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 115ZG
UT WOS:000242771700005
PM 17148050
OA gold
DA 2018-01-22
ER

PT J
AU Nishiwaki, K
Kuffner, J
Kagami, S
Inaba, M
Inoue, H
AF Nishiwaki, Koichi
Kuffner, James
Kagami, Satoshi
Inaba, Masayuki
Inoue, Hirochika
TI The experimental humanoid robot H7: a research platform for autonomous
behaviour
SO PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL
AND ENGINEERING SCIENCES
LA English
DT Article
DE humanoid robot; autonomous behaviour; biped locomotion; motion planning;
vision-based control
ID CONFIGURATION-SPACES; COORDINATION
AB This paper gives an overview of the humanoid robot 'H7', which was developed
over several years as an experimental platform for walking, autonomous behaviour
and human interaction research at the University of Tokyo. H7 was designed to be a
human-sized robot capable of operating autonomously in indoor environments designed
for humans. The hardware is relatively simple to operate and conduct research on,
particularly with respect to the hierarchical design of its control architecture.
We describe the overall design goals and methodology, along with a summary of its
online walking capabilities, autonomous vision-based behaviours and automatic
motion planning. We show experimental results obtained by implementations running
within a simulation environment as well as on the actual robot hardware.
C1 AIST, Digital Human Res Ctr, Koto Ku, Tokyo 1360064, Japan.
Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA.
Univ Tokyo, Grad Sch Informat Sci & Technol, Bunkyo Ku, Tokyo 1138856, Japan.
RP Nishiwaki, K (reprint author), AIST, Digital Human Res Ctr, Koto Ku, 2-41-
6,Aomi, Tokyo 1360064, Japan.
EM k.nishiwaki@aist.go.jp
RI Kagami, Satoshi/A-6841-2013
CR Barabanov M., 1997, THESIS NEW MEXICO I
BARRAQUAND J, 1991, INT J ROBOT RES, V10, P628, DOI 10.1177/027836499101000604
Bohlin R., 2000, P IEEE INT C ROB AUT
BOLLES R, 1993, ROBOTICS RES
Yang L, 2001, P IEEE INT C ROB AUT
Chestnutt J., 2003, P IEEE INT C HUM ROB
Curless B., 1996, ACM T GRAPHIC, V30, P303, DOI DOI 10.1145/237170.237269
Faugeras O., 1993, 2013 INRIA
FLASH T, 1985, J NEUROSCI, V5, P1688
Fua P., 1993, Machine Vision and Applications, V6, P35, DOI 10.1007/BF01212430
HORN BKP, 1987, J OPT SOC AM A, V4, P629, DOI 10.1364/JOSAA.4.000629
HSU D, 1997, INT J COMPUT GEOM AP, V9, P495
HWANG YK, 1992, IEEE T ROBOTIC AUTOM, V8, P23, DOI 10.1109/70.127236
Kagami S., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE
International Conference on Robotics and Automation. Symposia Proceedings (Cat.
No.00CH37065), P1441, DOI 10.1109/ROBOT.2000.844800
Kagami S., 2000, P INT WORKSH ALG FDN
Kavraki LE, 1996, IEEE T ROBOTIC AUTOM, V12, P566, DOI 10.1109/70.508439
Kuffner J, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P2265, DOI 10.1109/ROBOT.2002.1013569
Kuffner Jr J., 2001, P IEEE RSJ INT C INT
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
Kuffner Jr J. J., 2000, P IEEE INT C ROB AUT
KUFFNER JJ, 1999, THESIS STANFORD U ST
Latombe J.-C., 1991, ROBOT MOTION PLANNIN
LAVALLE S, 1999, P IEEE INT C RO AUT
LORENSEN WE, 1987, COMPUT GRAPH, V21, P163, DOI DOI 10.1145/37402.37422
Matsui T., 1990, Journal of Information Processing, V13, P327
Mazer E, 1998, J ARTIF INTELL RES, V9, P295
Mirtich B, 1998, ACM T GRAPHIC, V17, P177, DOI 10.1145/285857.285860
Nishiwaki K, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P2277, DOI 10.1109/ROBOT.2002.1013571
NISHIWAKI K, 2000, P IEEE RSJ INT C INT, V1, P88
Reif J. H., 1979, P IEEE S FDN COMP SC, P421
SAGAWA R, 2000, P IEEE RSJ INT C INT, V1, P88
Sanchez G, 2002, INT J ROBOT RES, V21, P5, DOI 10.1177/027836402320556458
Tsai R. Y., 1986, P IEEE C COMP VIS PA, P364
NR 33
TC 30
Z9 32
U1 0
U2 5
PU ROYAL SOC
PI LONDON
PA 6-9 CARLTON HOUSE TERRACE, LONDON SW1Y 5AG, ENGLAND
SN 1364-503X
EI 1471-2962
J9 PHILOS T R SOC A
JI Philos. Trans. R. Soc. A-Math. Phys. Eng. Sci.
PD JAN 15
PY 2007
VL 365
IS 1850
BP 79
EP 107
DI 10.1098/rsta.2006.1921
PG 29
WC Multidisciplinary Sciences
SC Science & Technology - Other Topics
GA 115ZG
UT WOS:000242771700006
PM 17148051
OA gold
DA 2018-01-22
ER

PT J
AU Cheng, G
Hyon, SH
Morimoto, J
Ude, A
Hale, JG
Colvin, G
Scroggin, W
Jacobsen, SC
AF Cheng, Gordon
Hyon, Sang-Ho
Morimoto, Jun
Ude, Ales
Hale, Joshua G.
Colvin, Glenn
Scroggin, Wayco
Jacobsen, Stephen C.
TI CB: a humanoid research platform for exploring neuroscience
SO ADVANCED ROBOTICS
LA English
DT Article
DE humanoid robotics; biped locomotion; humanoid interaction; vision
attention; biologically motivated vision; control architecture; dynamic
simulation; contact modeling
AB This paper presents a 50-d.o.f. humanoid robot, Computational Brain (CB). CB is
a humanoid robot created for exploring the underlying processing of the human brain
while dealing with the real world. We place our investigations within real-world
contexts, as humans do. In so doing, we focus on utilizing a system that is closer
to humans in sensing, kinematics configuration and performance. We present the
real-time network-based architecture for the control of all 50 d.o.f. The
controller provides full position/velocity/force sensing and control at 1kHz,
allowing us the flexibility in deriving various forms of control. A dynamic
simulator is also presented; the simulator acts as a realistic testbed for our
controllers and acts as a common interface to our humanoid robots. A contact model
developed to allow better validation of our controllers prior to final testing on
the physical robot is also presented. Three aspects of the system are highlighted
in this paper: (i) physical power for walking, (ii) full-body compliant control for
physical interactions and (iii) perception and control for visual ocular-motor
responses.
C1 ATR Computat Neurosci Labs, Dept Humanoid Robot & Computat Neurosci, Kyoto
6190288, Japan.
JST, ICORP Computat Brain Project, Kawaguchi, Saitama 3320012, Japan.
SARCOS Res Corp, Salt Lake City, UT 84108 USA.
RP Cheng, G (reprint author), ATR Computat Neurosci Labs, Dept Humanoid Robot &
Computat Neurosci, Kyoto 6190288, Japan.
EM gordon@atr.jp
OI Cheng, Gordon/0000-0003-0770-8717; Ude, Ales/0000-0003-3677-3972
CR Arimoto S., 1996, CONTROL THEORY NONLI
ASFOUR T, 2005, EXPT ROBOTICS, V9, P259
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
BROOKS RA, 1997, J ROBOTICS SOC JAPAN, V15, P968
Brooks RR, 1998, PLANTS THAT HYPERACCUMULATE HEAVY METALS: THEIR ROLE IN
PHYTOREMEDIATION, MICROBIOLOGY, ARCHAEOLOGY, MINERAL EXPLORATION AND PHYTOMINING,
P1
Cheng GD, 2001, ROBOT AUTON SYST, V37, P161, DOI 10.1016/S0921-8890(01)00156-7
ENDO G, 2005, P 20 NAT C ART INT P
GUMPP T, 2006, P IEEE RAS INT C HUM
HALE JG, 2006, OP EUR ACM SIGGRAPH, P27
Hirukawa H, 2004, ROBOT AUTON SYST, V48, P165, DOI 10.1016/j.robot.2004.07.007
HOHYON S, 2006, P IEEE RAS INT C HUM
ISHIGURO H, 2005, P 36 INT S ROB TOK
ISHIWATA Y, 1999, INTERFACE, P109
Itti L, 2001, NAT REV NEUROSCI, V2, P194, DOI 10.1038/35058500
KUNIYOSHI Y, 1997, P IEEE RSJ INT C INT, V2, P811
LOHMEIER S, 2004, P IEEE INT C ROB AUT, P4222
MATSUBARA T, 2005, P IEEE INT C ROB AUT
Morimoto J, 2006, IEEE INT CONF ROBOT, P1579, DOI 10.1109/ROBOT.2006.1641932
Nagakubo A, 2003, ADV ROBOTICS, V17, P149, DOI 10.1163/156855303321165105
Nagasaka K, 2004, IEEE INT CONF ROBOT, P3189, DOI 10.1109/ROBOT.2004.1308745
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Nishiwaki K., 2005, INT J HUM ROBOT, V2, P437
Ogura Y, 2006, IEEE INT CONF ROBOT, P76
PARK IW, 2005, P IEEE RAS INT C HUM
Park I. W., 2005, INT J HUM ROBOT, V2, P519
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
SANDINI G, 2004, P IEEE INT C HUM ROB
Shimada M., 2006, P IEEE RAS INT C HUM, P157, DOI DOI 10.1109/ICHR.2006.321378
Tilley A.R., 2002, MEASURE MAN WOMAN HU
UDE A, 2005, P IEEE INT C HUM ROB
VANESSEN DC, 1994, CONCURRENT PROCESSIN
Winter DA, 1990, BIOMECHANICS MOTOR C
NR 32
TC 91
Z9 92
U1 0
U2 5
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2007
VL 21
IS 10
BP 1097
EP 1114
DI 10.1163/156855307781389356
PG 18
WC Robotics
SC Robotics
GA 205PV
UT WOS:000249128000002
DA 2018-01-22
ER

PT J
AU Stilman, M
Nishiwaki, K
Kagami, S
Kuffner, JJ
AF Stilman, Mike
Nishiwaki, Koichi
Kagami, Satoshi
Kuffner, James J.
TI Planning and executing navigation among movable obstacles
SO ADVANCED ROBOTICS
LA English
DT Article; Proceedings Paper
CT IEEE/RSJ International Conference on Intelligent Robots and Systems
CY OCT 09-13, 2006
CL Beijing, PEOPLES R CHINA
SP IEEE, RSJ
DE navigation among movable objects; NAMO; humanoid; manipulation; motion
planning; movable obstacles
AB This paper explores autonomous locomotion, reaching, grasping and manipulation
for the domain of navigation among movable obstacles (NAMO). The robot perceives
and constructs a model of an environment filled with various fixed and movable
obstacles, and automatically plans a navigation strategy to reach a desired goal
location. The planned strategy consists of a sequence of walking and compliant
manipulation operations. It is executed by the robot with online feedback. We give
an overview of our NAMO system, as well as provide details of the autonomous
planning, online grasping and compliant hand positioning during dynamically stable
walking. Finally, we present results of a successful implementation running on the
humanoid robot HRP-2.
C1 Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA.
Natl Inst Adv Ind Sci & Technol, Digital Human Res Ctr, Tokyo 1350064, Japan.
RP Stilman, M (reprint author), Carnegie Mellon Univ, Inst Robot, 5000 Forbes Ave,
Pittsburgh, PA 15213 USA.
EM robot@cmu.edu
RI Kagami, Satoshi/A-6841-2013
CR Alami R., 1994, P WORKSH ALG FDN ROB, P109
Ben-Shahar O, 1998, IEEE T ROBOTIC AUTOM, V14, P549, DOI 10.1109/70.704220
BENTLEY JL, 1982, COMMUN ACM, V25, P64, DOI 10.1145/358315.358392
CHEN PC, 1991, 1991 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS 1-3, P444, DOI 10.1109/ROBOT.1991.131618
CHESTNUTT J, 2003, P INT C HUM ROB MUN
FUKUMOTO Y, 2004, P INT C INT ROB SYST, P1186
GUTMANN J, 2005, P INT JOINT C ART IN, V19, P1232
HARADA K, 2003, P IEEE INT C ROB AUT, P1627
HARADA K, 2004, P IEEE INT C ROB AUT, P616
JEANNEROD M, 1995, TRENDS NEUROSCI, V18, P314, DOI 10.1016/0166-2236(95)93921-J
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Khatib O., 2004, INT J HUM ROBOT, V1, P29, DOI [10.1142/S0219843604000058, DOI
10.1142/S0219843604000058]
Kuffner J, 2001, IEEE INT CONF ROBOT, P692, DOI 10.1109/ROBOT.2001.932631
MILLER AT, 2001, THESIS COLUMBIA U
NIEUWENHUISEN D, 2006, P WORKSH ALG FDN ROB
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
OKADA K, 2004, P 2004 IEEE RSJ INT, P1174
OTA J, 2004, P IEEE INT C ROB AUT, P1962
Sciavicco L., 1996, MODELING CONTROL ROB
STILMAN M, 2004, P IEEE INT C HUM ROB, P322
STILMAN M, 2005, CMURITR0555
STILMAN M, 2006, P WORKSH ALG FDN ROB
Stilman M, 2007, IEEE INT CONF ROBOT, P3327, DOI 10.1109/ROBOT.2007.363986
Takubo T, 2005, P 2005 IEEE INT C RO, P1718
WHITNEY DE, 1969, IEEE T MAN MACHINE, VMM10, P47, DOI 10.1109/TMMS.1969.299896
Wilfong G., 1988, Proceedings of the Fourth Annual Symposium on Computational
Geometry, P279, DOI 10.1145/73393.73422
Wolf A., 2003, P IEEE RSJ INT C INT, V3, P2889
YOSHIDA E, 2005, P IEEE INT C ROB AUT, P1052
NR 28
TC 11
Z9 11
U1 0
U2 3
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2007
VL 21
IS 14
BP 1617
EP 1634
DI 10.1163/156855307782227408
PG 18
WC Robotics
SC Robotics
GA 226HD
UT WOS:000250577900003
DA 2018-01-22
ER

PT J
AU Inoue, Y
Tohge, T
Iba, H
AF Inoue, Yutaka
Tohge, Takahiro
Iba, Hitoshi
TI Cooperative transportation system for humanoid robots using
simulation-based learning
SO APPLIED SOFT COMPUTING
LA English
DT Article
DE humanoid robot; cooperative transportation; Q-learning; classifier
system
AB In this paper, an approach to the behavior acquisition required for humanoid
robots to carry out a cooperative transportation task is proposed. In the case of
object transportation involving two humanoid robots, mutual position shifts may
occur due to the body swinging of the robots. Therefore, it is necessary to correct
the position in real-time. Developing the position shift correction system requires
a great deal of effort. A solution to the problem of learning the required behaviors
is obtained by using the Classifier System and Q-Learning. The successful
cooperation of two HOAP-1 humanoid robots in the transportation task has been
confirmed by several experimental results. (C) 2005 Elsevier B.V. All rights
reserved.
C1 Univ Tokyo, Grad Sch Frontier Sci, Dept Frontier Informat, Bunkyo Ku, Tokyo
1138656, Japan.
RP Inoue, Y (reprint author), Univ Tokyo, Grad Sch Frontier Sci, Dept Frontier
Informat, Bunkyo Ku, 7-3-1 Hongo, Tokyo 1138656, Japan.
EM inoue@iba.k.u-tokyo.ac.jp; tohge@iba.k.u-tokyo.ac.jp;
iba@iba.k.u-tokyo.ac.jp
CR AlJarrah OM, 1997, IEEE INT CONF ROBOT, P895, DOI 10.1109/ROBOT.1997.620147
BOOKER LB, 1990, MACHINE LEARNING PAR
INOUE H, 2000, 1 IEEE RAS INT C HUM
Inoue Y., 2003, P HYBR INT SYST HIS2, P1124
Kosuge K, 1996, IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING
WITH DYNAMIC WORLDS, VOLS 1-3, P318, DOI 10.1109/IROS.1996.570694
Miyata N., 1997, Proceedings of the 1997 IEEE/RSJ International Conference on
Intelligent Robot and Systems. Innovative Robotics for Real-World Applications.
IROS '97 (Cat. No.97CH36108), P1754, DOI 10.1109/IROS.1997.656598
OSUMI H, 2000, P INT C MACH AUT ICM, P421
OTA J, 1996, J ROBOTICS SOC JAPAN, V14, P263
RAHMAN MM, 1999, 1999 IEEE INT C SYST, P676
Sutton RS, 1998, REINFORCEMENT LEARNI
WATKINS CJCH, 1992, MACH LEARN, V8, P279, DOI 10.1023/A:1022676722315
YOKOI K, 2002, P IARP INT WORKSH HU, P134
NR 12
TC 8
Z9 9
U1 0
U2 3
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 1568-4946
EI 1872-9681
J9 APPL SOFT COMPUT
JI Appl. Soft. Comput.
PD JAN
PY 2007
VL 7
IS 1
BP 115
EP 125
DI 10.1016/j.asoc.2005.05.001
PG 11
WC Computer Science, Artificial Intelligence; Computer Science,
Interdisciplinary Applications
SC Computer Science
GA 106TF
UT WOS:000242123500009
DA 2018-01-22
ER

PT J
AU Kanda, T
Kamasima, M
Imai, M
Ono, T
Sakamoto, D
Ishiguro, H
Anzai, Y
AF Kanda, Takayuki
Kamasima, Masayuki
Imai, Michita
Ono, Tetsuo
Sakamoto, Daisuke
Ishiguro, Hiroshi
Anzai, Yuichiro
TI A humanoid robot that pretends to listen to route guidance from a human
SO AUTONOMOUS ROBOTS
LA English
DT Article
DE human-robot interaction; embodied communication; cooperative body
movement; humanoid robot; communication robot
ID BEHAVIOR
AB This paper reports the findings for a humanoid robot that expresses its
listening attitude and understanding to humans by effectively using its body
properties in a route guidance situation. A human teaches a route to the robot, and
the developed robot behaves similarly to a human listener by utilizing both temporal
and spatial cooperative behaviors to demonstrate that it is indeed listening to its
human counterpart. The robot's software consists of many communicative units and
rules for selecting appropriate communicative units. A communicative unit realizes
a particular cooperative behavior such as eye-contact and nodding, found through
previous research in HRI. The rules for selecting communicative units were
retrieved through our preliminary experiments with a WOZ method. An experiment was
conducted to verify the effectiveness of the robot, with the results revealing that
a robot displaying cooperative behavior received the highest subjective evaluation,
which is rather similar to a human listener. A detailed analysis showed that this
evaluation was mainly due to body movements as well as utterances. On the other
hand, subjects' utterances to the robot were encouraged by the robot's utterances but
not by its body movements.
C1 ATR, Intelligent Robot & Commun Labs, Kyoto, Japan.
Keio Univ, Dept Informat & Comp Sci, Fac Sci & Technol, Kanagawa, Japan.
Future Univ Hakodate, Dept Media Architecture, Sch Syst Informat Sci, Hakodate,
Hokkaido, Japan.
Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst, Osaka, Japan.
RP Kanda, T (reprint author), ATR, Intelligent Robot & Commun Labs, Kyoto, Japan.
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR Billard A, 2001, ROBOT AUTON SYST, V37, P145, DOI 10.1016/S0921-8890(01)00155-5
Breazeal C., 1999, P 16 INT JOINT C ART, P1146
Cassell J., 1999, P SIGCHI C HUM FACT, P520, DOI DOI 10.1145/302979.303150
Fujita M, 2001, INT J ROBOT RES, V20, P781, DOI 10.1177/02783640122068092
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Imai M, 2003, IEEE T IND ELECTRON, V50, P636, DOI 10.1109/TIE.2003.814769
JEBARA T, 1999, P INT C COMP VIS SYS
KANDA M, 1904, J COLL SCI IMP U TOK, V19, P1
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T., 2003, P INT JOINT C ART IN, P177
Kidd CD, 2004, P IEEE RSJ INT C INT, P3559, DOI DOI 10.1109/IROS.2004.1389967
Kobayashi M, 2001, ADV ROBOTICS, V15, P327, DOI 10.1163/156855301300235931
MAYNARD SK, 1986, LINGUISTICS, V24, P1079, DOI 10.1515/ling.1986.24.6.1079
Miyashita T, 2004, ROBOT AUTON SYST, V48, P203, DOI
[10.1016/j.robot.2004.07.008, 10.1016/j.robots.2004.07.008]
Dunham P., 1995, JOINT ATTENTION ITS
NAKADAI K, 2001, INT JOINT C ART INT, P1425
Nakano Y.I., 2003, P 41 ANN M ASS COMP, P553
Ono T., 2001, P 23 ANN M COGN SCI, P732
Reeves B., 1996, MEDIA EQUATION
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
Sakamoto D, 2005, INT J HUM-COMPUT ST, V62, P247, DOI
10.1016/j.ijhcs.2004.11.001
SCASSELLATI B, 2000, BIOROBOTICS
Trafton JG, 2005, IEEE T SYST MAN CY A, V35, P460, DOI 10.1109/TSMCA.2005.850592
TSUKAHARA S, 1997, P 61 ANN C JAP PSYCH, P134
Whittaker S, 1997, VIDEO MEDIATED COMMU, P23
NR 25
TC 31
Z9 31
U1 1
U2 8
PU SPRINGER
PI DORDRECHT
PA VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
SN 0929-5593
EI 1573-7527
J9 AUTON ROBOT
JI Auton. Robot.
PD JAN
PY 2007
VL 22
IS 1
BP 87
EP 100
DI 10.1007/s10514-006-9007-6
PG 14
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA 124DP
UT WOS:000243348000006
DA 2018-01-22
ER

PT J
AU Ishiguro, H
AF Ishiguro, Hiroshi
TI Scientific issues concerning androids
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article; Proceedings Paper
CT 12th International Symposium on Robotics Research (ISRR)
CY OCT 12-15, 2005
CL San Francisco, CA
DE android; android science; humanoid; appearance; cognitive study; uncanny
valley; synergy effect; total Turing test
AB In the development of humanoids, both the appearance and behavior of the robots
are significant issues. However, designing the robot's appearance, especially to
give it a humanoid one, was always a role of industrial designers. To tackle the
problem of appearance and behavior, two approaches are necessary: one from robotics
and the other from cognitive science. The approach from robotics tries to build
very humanlike robots based on knowledge from cognitive science. The approach from
cognitive science uses the robot to verify hypotheses for understanding humans.
This cross-interdisciplinary framework is called android science. This conceptual
paper introduces developed androids and states key issues in android science.
C1 Osaka Univ, Dept Adapt Machine Syst, Suita, Osaka 565, Japan.
RP Ishiguro, H (reprint author), Osaka Univ, Dept Adapt Machine Syst, Suita, Osaka
565, Japan.
EM ishiguro@ams.eng.osaka-u.ac.jp
CR Breazeal C., 1999, P 16 INT JOINT C ART, P1146
BROOKS RA, 1991, ARTIF INTELL, V47, P139, DOI 10.1016/0004-3702(91)90053-M
Chaminade T, 2001, BEHAV BRAIN SCI, V24, P879
Fujita M, 2001, INT J ROBOT RES, V20, P781, DOI 10.1177/02783640122068092
Harnad S, 1990, PHYSICA D, V42, P335, DOI DOI 10.1016/0167-2789(90)90087-6
Harnad S., 1991, Minds and Machines, V1, P43
Harnad S., 2000, Journal of Logic, Language and Information, V9, P425, DOI
10.1023/A:1008315308862
HASHIMOTO T, 2006, P IEEE INT C MECH AU
Hollan J., 2000, ACM Transactions on Computer-Human Interaction, V7, P174, DOI
10.1145/353485.353487
IKEDA T, 2004, P 13 IEEE INT WORKSH, P77
Ishiguro H., 2002, Proceedings of the First International Joint Conference on
Autonomous Agents and Multiagent Systems, P621, DOI 10.1145/544862.544863
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
ISHIGURO H, 2001, P INT JOINT C ART IN, P1375
ISHIGURO H, 1997, P INT JOINT C ART IN, P36
Itakura S, 2004, JPN PSYCHOL RES, V46, P216, DOI 10.1111/j.1468-
5584.2004.00253.x
ITAKURA S, 2004, INT C DEV LEARN
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T, 2001, IEEE INT CONF ROBOT, P4166, DOI 10.1109/ROBOT.2001.933269
Kanda T., 2003, P INT JOINT C ART IN, P177
Matsusaka Y., 1999, P WORLD MULT SYST CY, V7, P450
MCCARTHY A, 2001, P ANN C AM PSYCH SOC
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
Nakadai K., 2001, P INT JOINT C ART IN, P1425
*NEC CO, PERS ROB PAPERO
Ono T., 2001, P 23 ANN M COGN SCI, P732
PERLIN K, 1995, IEEE T VIS COMPUT GR, V1, P5, DOI 10.1109/2945.468392
Sakagami Y, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2478, DOI 10.1109/IRDS.2002.1041641
Turing AM, 1950, MIND, V49, P433, DOI DOI 10.1093/MIND/LIX.236.433
WADA, 2002, P IEEE RSJ INT C INT, V2, P1152
NR 29
TC 38
Z9 38
U1 0
U2 8
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD JAN
PY 2007
VL 26
IS 1
BP 105
EP 117
DI 10.1177/0278364907074474
PG 13
WC Robotics
SC Robotics
GA 129LT
UT WOS:000243731700008
DA 2018-01-22
ER

PT J
AU Or, J
AF Or, J
TI A control system for a flexible spine belly-dancing humanoid
SO ARTIFICIAL LIFE
LA English
DT Article
DE next generation high-degree-of-freedom flexible spine humanoid robot;
belly dancing; lamprey central pattern generator; neural networks; robot
controller; motor primitives
ID BEHAVIOR; LOCOMOTION; ROBOTS; SNAKE
AB Recently, there has been a lot of interest in building anthropomorphic robots.
Research on humanoid robotics has focused on the control of manipulators and
walking machines. The contributions of the torso towards ordinary movements (such
as walking, dancing, attracting mates, and maintaining balance) have been neglected
by almost all humanoid robotic researchers. We believe that the next generation of
humanoid robots will incorporate a flexible spine in the torso. To meet the
challenge of controlling this kind of high-degree-of-freedom robot, a new control
architecture is necessary. Inspired by the rhythmic movements commonly exhibited in
lamprey locomotion as well as belly dancing, we designed a controller for a
simulated belly-dancing robot using the lamprey central pattern generator.
Experimental results show that the proposed lamprey central pattern generator
module could potentially generate plausible output patterns, which could be used
for all the possible spine motions with minimized control parameters. For instance,
in the case of planar spine motions, only three input parameters are required.
Using our controller, the simulated robot is able to perform complex torso
movements commonly seen in belly dancing as well. Our work suggests that the
proposed controller can potentially be a suitable controller for a high-degree-of-
freedom, flexible spine humanoid robot. Furthermore, it allows us to gain a better
understanding of belly dancing by synthesis.
C1 Waseda Univ, Humanoid Robot Inst, Takanishi Lab, Shinjuku Ku, Tokyo 1698555,
Japan.
RP Or, J (reprint author), Waseda Univ, Humanoid Robot Inst, Takanishi Lab,
Shinjuku Ku, 59-308,3-4-1 Ookubo, Tokyo 1698555, Japan.
EM jimmyor@kurenai.waseda.jp
CR *ATEA, 2000, WHAT IS BELL DANC WH
Atkeson CG, 2000, IEEE INTELL SYST APP, V15, P46, DOI 10.1109/5254.867912
BILLARD A, 2001, ROBOTICS AUTONOMOUS, V941, P1
Breazeal C, 2002, DESIGNING SOCIABLE R
Brooks R. A., 1999, Computation for metaphors, analogy, and agents, P52, DOI
10.1007/3-540-48834-0_5
BURDICK JW, 1995, ADV ROBOTICS, V9, P195
DELCOMYN F, 1980, SCIENCE, V210, P492, DOI 10.1126/science.7423199
Delcomyn F., 1998, FDN NEUROBIOLOGY
EKEBERG O, 1993, BIOL CYBERN, V69, P363, DOI 10.1007/BF01185408
Fitt S. S., 1988, DANCE KINESIOLOGY
Grillner S, 1996, SCI AM, V274, P64, DOI 10.1038/scientificamerican0196-64
GRILLNER S, 1991, ANNU REV NEUROSCI, V14, P169, DOI
10.1146/annurev.neuro.14.1.169
GRILLNER S, 1991, NEUROBIOLOGICAL BASI, P77
Grillner S., 1981, HDB PHYSL 1, VI, P1179
HASHIMOTO S, 2000, WASEDA U HUMANOID RO
IIDA E, 1998, P 7 IEEE INT WORKSH, P481
Ijspeert AJ, 2001, BIOL CYBERN, V84, P331, DOI 10.1007/s004220000211
Ijspeert AJ, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS I-IV, PROCEEDINGS, P1398, DOI 10.1109/ROBOT.2002.1014739
Jenkins OC, 2002, P IEEE RSJ INT C INT, V3, P2551
Kandel E. R., 1985, PRINCIPLES NEURAL SC
Kyriakopoulos KJ, 1999, J ROBOTIC SYST, V16, P37, DOI 10.1002/(SICI)1097-
4563(199901)16:1<37::AID-ROB4>3.0.CO;2-V
Ma SG, 2001, ADV ROBOTICS, V15, P205
MATARIC MJ, 2002, IMITATION ANIMALS AR, P392
Miwa H, 2002, P 2002 IEEE RSJ INT, V2, P2443
Mizuuchi I., 2002, P IEEE RSJ INT C INT, V3, P2527
MIZUUCHI I, 2001, P 2001 IEEE RSJ INT, V3, P2099
Nakazawa A., 2002, P IEEE RSJ INT C INT, V3, P2539
NUGENT M, 2000, BELLYDANCE MOVEMENT
OR J, 2002, THESIS U EDINBURGH E
Sakagami Y., 2002, P IEEE RSJ INT C INT, V3, p2478 , DOI DOI
10.1109/IRDS.2002.1041641
Shan J., 2002, 20 ANN C ROB SOC JAP
Yamasaki F., 2002, AI MAG, V23, P60
NR 32
TC 9
Z9 9
U1 0
U2 3
PU MIT PRESS
PI CAMBRIDGE
PA ONE ROGERS ST, CAMBRIDGE, MA 02142-1209 USA
SN 1064-5462
EI 1530-9185
J9 ARTIF LIFE
JI Artif. Life
PD WIN
PY 2006
VL 12
IS 1
BP 63
EP 87
DI 10.1162/106454606775186464
PG 25
WC Computer Science, Artificial Intelligence; Computer Science, Theory &
Methods
SC Computer Science
GA 000TB
UT WOS:000234484200004
PM 16393451
DA 2018-01-22
ER

PT J
AU Sugiyama, O
Kanda, T
Imai, M
Ishiguro, H
Hagita, N
Anzai, Y
AF Sugiyama, Osamu
Kanda, Takayuki
Imai, Michita
Ishiguro, Hiroshi
Hagita, Norihiro
Anzai, Yuichiro
TI Humanlike conversation with gestures and verbal cues based on a
three-layer attention-drawing model
SO CONNECTION SCIENCE
LA English
DT Article
DE human-robot interface; human-robot interaction; deictic gestures
ID ROBOTS
AB When describing a physical object, we indicate which object by pointing and
using reference terms, such as 'this' and 'that', to inform the listener quickly of
an indicated object's location. Therefore, this research proposes using a three-
layer attention-drawing model for humanoid robots that incorporates such gestures
and verbal cues. The proposed three-layer model consists of three sub-models: the
Reference Term Model (RTM); the Limit Distance Model (LDM); and the Object Property
Model (OPM). The RTM selects an appropriate reference term for distance, based on a
quantitative analysis of human behaviour. The LDM decides whether to use a property
of the object, such as colour, as an additional term for distinguishing the object
from its neighbours. The OPM determines which property should be used for this
additional reference. Based on this concept, an attention-drawing system was
developed for a communication robot named 'Robovie', and its effectiveness was
tested.
C1 ATR Intelligent Robot & Commun Labs, Keihanna Sci City, Kyoto 6190288, Japan.
RP Sugiyama, O (reprint author), ATR Intelligent Robot & Commun Labs, 2-2-2
Hikaridai, Keihanna Sci City, Kyoto 6190288, Japan.
EM sugiyama@atr.jp; kanda@atr.jp
RI Kanda, Takayuki/I-5843-2016
OI Kanda, Takayuki/0000-0002-9546-5825
CR Breazeal C, 2000, ADAPT BEHAV, V8, P49, DOI 10.1177/105971230000800104
Breazeal C., 2005, P IROS, P383, DOI DOI 10.1109/IR0S.2005.1545011
GOETZ J, 2003, P 12 IEEE WORKSH ROB
HAASCH A, 2005, P IEEE RSJ INT C INT, P1499
HANAFIAH ZM, 2004, P 17 INT C PATT REC
Imai M, 2003, IEEE T IND ELECTRON, V50, P636, DOI 10.1109/TIE.2003.814769
IMAI M, 1999, P 16 INT JOINT C ART, V2, P1124
Inamura T., 2004, Systems and Computers in Japan, V35, P98, DOI
10.1002/scj.10034
Ishiguro H., 2005, COGSCI 2005 WORKSH, P1
Kamashima Masayuki, 2004, P IROS 04, P2506
Kanda T, 2004, P IEEE, V92, P1839, DOI 10.1109/JPROC.2004.835359
Kanda T., 2005, P IEEE RSJ INT C INT, P62
Kidd C, 2004, P IEEE RSJ INT C INT
Kozima H, 2001, ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P377, DOI
10.1109/ROMAN.2001.981933
MacDorman KF, 2006, INTERACT STUD, V7, P361, DOI 10.1075/is.7.3.10mac
MacDorman KF, 2006, INTERACT STUD, V7, P297, DOI 10.1075/is.7.3.03mac
MacDorman K, 2006, P ICCS COGSCI 2006 L, P26
McNeill D., 1987, PSYCHOLINGUISTICS NE
MIZUNO T, 2003, IEEJ T EIS, V123, P2142
Dunham P., 1995, JOINT ATTENTION ITS
Nagai Y., 2005, P 2005 IEEE INT WORK, P217
Nakadai K., 2001, P INT JOINT C ART IN, P1425
Sakamoto D, 2005, INT J HUM-COMPUT ST, V62, P247, DOI
10.1016/j.ijhcs.2004.11.001
SCASSELLATI B, 2001, BIOROBOTICS
Shiomi M., 2006, P 1 ACM SIGCHI SIGAR, P305, DOI DOI 10.1145/1121241.1121293
Trafton JG, 2005, IEEE T SYST MAN CY A, V35, P460, DOI 10.1109/TSMCA.2005.850592
NR 26
TC 34
Z9 34
U1 2
U2 3
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OX14 4RN, OXON, ENGLAND
SN 0954-0091
EI 1360-0494
J9 CONNECT SCI
JI Connect. Sci.
PD DEC
PY 2006
VL 18
IS 4
BP 379
EP 402
DI 10.1080/09540090600890254
PG 24
WC Computer Science, Artificial Intelligence; Computer Science, Theory &
Methods
SC Computer Science
GA 117ZY
UT WOS:000242913000006
DA 2018-01-22
ER

PT J
AU Capi, G
Yokota, M
Mitobe, K
AF Capi, Genci
Yokota, Masao
Mitobe, Kazuhisa
TI Optimal multi-criteria humanoid robot gait synthesis - An evolutionary
approach
SO INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL
LA English
DT Article
DE humanoid robot; multiobjective evolutionary algorithm; gait synthesis
ID TORQUE-CHANGE MODEL; GENETIC ALGORITHMS; OPTIMIZATION; WALKING
AB Humanoid robots operating in everyday life environments must generate the gait
based on the environmental conditions. Often the gait has to satisfy different
objectives. In this paper, we present a new method for humanoid robot gait
generation based on multiobjective evolutionary algorithms. In our method, we
consider two different conflicting objectives for the humanoid robot gait
generation: minimum energy and minimum torque change. In contrast to a single-objective
genetic algorithm, the multiobjective evolutionary algorithm converges to a set of
nondominated Pareto-optimal gaits. Based on the environmental conditions
and the user requirements, the appropriate humanoid robot gait can be selected.
Simulation and experimental results using the "Bonten-Maru" humanoid robot show the
good performance of the proposed method.
C1 Fukuoka Inst Technol, Fac Informat Engn, Higashi Ku, Fukuoka 8110195, Japan.
Yamagata Univ, Dept Mech Syst Engn, Yamagata 9928510, Japan.
RP Capi, G (reprint author), Fukuoka Inst Technol, Fac Informat Engn, Higashi Ku,
3-30-1 Wajiro Higashi, Fukuoka 8110195, Japan.
EM capi@fit.ac.jp
CR Capi G, 2003, ROBOT AUTON SYST, V42, P107, DOI 10.1016/S0921-8890(02)00351-2
Capi G, 2001, ADV ROBOTICS, V15, P675, DOI 10.1163/156855301317035197
Capi G., 2002, INFORMATION PROCESSI, V43, P1039
Channon PH, 1996, P I MECH ENG C-J MEC, V210, P177, DOI
10.1243/PIME_PROC_1996_210_184_02
Coello C. A. C., 1999, Knowledge and Information Systems, V1, P269
Herrera F, 1998, ARTIF INTELL REV, V12, P265, DOI 10.1023/A:1006504901164
Nakano E, 1999, J NEUROPHYSIOL, V81, P2140
ODAGIRI R, 1998, P IEEE INT C EV COMP, P348
Roussel L, 1998, IEEE INT CONF ROBOT, P2036, DOI 10.1109/ROBOT.1998.680615
Silva FM, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P59, DOI 10.1109/ROBOT.1999.769931
Takeda K, 2001, IND ROBOT, V28, P242, DOI 10.1108/01439910110389407
UNO Y, 1989, BIOL CYBERN, V61, P89
Vukobratovic M, 1990, BIPED LOCOMOTION DYN
NR 13
TC 18
Z9 20
U1 0
U2 1
PU ICIC INTERNATIONAL
PI KUMAMOTO
PA TOKAI UNIV, 9-1-1, TOROKU, KUMAMOTO, 862-8652, JAPAN
SN 1349-4198
EI 1349-418X
J9 INT J INNOV COMPUT I
JI Int. J. Innov. Comp. Inf. Control
PD DEC
PY 2006
VL 2
IS 6
BP 1249
EP 1258
PG 10
WC Computer Science, Artificial Intelligence
SC Computer Science
GA 116YY
UT WOS:000242840900005
DA 2018-01-22
ER

PT J
AU Matsubara, T
Morimoto, J
Nakanishi, J
Sato, MA
Doya, K
AF Matsubara, Takamitsu
Morimoto, Jun
Nakanishi, Jun
Sato, Masa-aki
Doya, Kenji
TI Learning CPG-based biped locomotion with a policy gradient method
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE reinforcement learning; policy gradient; biped locomotion; central
pattern generator
ID HUMANOID ROBOT; WALKING; ALGORITHMS
AB In this paper, we propose a learning framework for CPG-based biped locomotion
with a policy gradient method. We demonstrate that appropriate sensory feedback to
adjust the rhythm of the CPG (Central Pattern Generator) can be learned using the
proposed method within a few hundred trials in simulations. We investigate linear
stability of a periodic orbit of the acquired walking pattern considering its
approximated return map. Furthermore, we apply the controllers acquired in
numerical simulations to our physical 5-link biped robot in order to empirically
evaluate the robustness of walking in the real environment. Experimental results
demonstrate that the robot was able to successfully walk using the acquired
controllers even in the cases of an environmental change by placing a seesaw-like
metal sheet on the ground and a parametric change of the robot dynamics with an
additional weight on a shank, which was not modeled in the numerical simulations.
(C) 2006 Elsevier B.V. All rights reserved.
C1 Nara Inst Sci & Technol, Nara 6300101, Japan.
JST, ICORP, Seika, Kyoto 6190288, Japan.
CNS, ATR, Seika, Kyoto 6190288, Japan.
Okinawa Inst Sci & Technol, Initial Res Project, Neural Computat Unit, Okinawa
9042234, Japan.
RP Matsubara, T (reprint author), Nara Inst Sci & Technol, 8916-5 Takayam Cho, Nara
6300101, Japan.
EM takam-m@atr.jp
RI Doya, Kenji/B-5841-2015
OI Doya, Kenji/0000-0002-2446-6820
CR ABERDEEN D, 2002, ICML, P3
Baxter J, 2001, J ARTIF INTELL RES, V15, P319
Doya K, 2000, NEURAL COMPUT, V12, P219, DOI 10.1162/089976600300015961
Endo G, 2004, IEEE INT CONF ROBOT, P3036, DOI 10.1109/ROBOT.2004.1307523
Fukuoka Y, 2003, INT J ROBOT RES, V22, P187, DOI 10.1177/0278364903022003004
Hase K, 1998, ANTHROPOL SCI, V106, P327, DOI 10.1537/ase.106.327
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Kagami S, 2002, AUTON ROBOT, V12, P71, DOI 10.1023/A:1013210909840
Kagami S, 2001, ALGORITHMIC AND COMPUTATIONAL ROBOTICS: NEW DIRECTIONS, P329
Kimura H, 2001, IEEE DECIS CONTR P, P411, DOI 10.1109/CDC.2001.980135
Kimura H, 1998, INT C MACH LEARN, P278
Konda VR, 2003, SIAM J CONTROL OPTIM, V42, P1143, DOI 10.1137/S0363012901385691
MATARIC MJ, 1994, MACH LEARN, P181
MATSUOKA K, 1985, BIOL CYBERN, V52, P367, DOI 10.1007/BF00449593
Morimoto J, 2001, ROBOT AUTON SYST, V36, P37, DOI 10.1016/S0921-8890(01)00113-0
Morimoto J, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1927
SATO M, 2002, INT C ART NEUR NETW, P777
SINGH S, 1994, MACH LEARN, P284
Strogatz S.H., 1994, NONLINEAR DYNAMICS C
Sutton RS, 2000, ADV NEUR IN, V12, P1057
Sutton RS, 1998, REINFORCEMENT LEARNI
TAGA G, 1991, BIOL CYBERN, V65, P147, DOI 10.1007/BF00198086
Tedrake R., 2004, P IEEE INT C INT ROB, P2849
TESAURO G, 1994, NEURAL COMPUT, V6, P215, DOI 10.1162/neco.1994.6.2.215
TUCHIYA C, 2004, 8 C INT AUT SYST, P281
WILLIAMS RJ, 1992, MACH LEARN, V8, P229, DOI 10.1007/BF00992696
NR 26
TC 41
Z9 41
U1 1
U2 15
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD NOV 30
PY 2006
VL 54
IS 11
SI SI
BP 911
EP 920
DI 10.1016/j.robot.2006.05.012
PG 10
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 110DS
UT WOS:000242359100004
DA 2018-01-22
ER

PT J
AU Nakanishi, M
Nomura, T
Sato, S
AF Nakanishi, Masao
Nomura, Taishin
Sato, Shunsuke
TI Stumbling with optimal phase reset during gait can prevent a humanoid
from falling
SO BIOLOGICAL CYBERNETICS
LA English
DT Article
ID LOCOMOTION; WALKING; STRATEGIES; OBSTACLES; MODEL
AB Human biped walking shows phase-dependent transient changes in gait
trajectory in response to external brief force perturbations. Such responses,
referred to as the stumbling reactions, are usually accompanied with phase reset of
the walking rhythm. Our previous studies provided evidence, based on a human gait
experiment and analyses of mathematical models of gait in the sagittal plane, that
an appropriate amount of phase reset in response to a perturbation depended on the
gait phase at the perturbation and could play an important role for preventing the
walker from a fall, thus increasing gait stability. In this paper, we provide a
further material that supports this evidence by a gait experiment on a biped
humanoid. In the experiment, the impulsive force perturbations were applied using
push-impacts by a pendulum-like hammer to the back of the robot during gait. The
responses of the external perturbations were managed by resetting the gait phase
with different delays or advancements. The results showed that appropriate amounts
of phase resetting contributed to the avoidance of falling against the perturbation
during the three-dimensional robot gait. A parallelism with human gait stumbling
reactions was discussed.
C1 Osaka Univ, Grad Sch Engn Sci, Div Bioengn, Toyonaka, Osaka 5608531, Japan.
Aino Univ, Osaka 5670012, Japan.
RP Nomura, T (reprint author), Osaka Univ, Grad Sch Engn Sci, Div Bioengn, 1-3
Machikaneyama, Toyonaka, Osaka 5608531, Japan.
EM taishin@bpe.es.osaka-u.ac.jp
RI Nomura, Taishin/J-8408-2012
OI Nomura, Taishin/0000-0002-6545-2097
CR Collins S, 2005, SCIENCE, V307, P1082, DOI 10.1126/science.1107799
Cordero AF, 2004, BIOL CYBERN, V91, P212, DOI 10.1007/s00422-004-0508-0
Dimitrijevic MR, 1998, ANN NY ACAD SCI, V860, P360, DOI 10.1111/j.1749-
6632.1998.tb09062.x
Forner Cordero A, 2003, GAIT POSTURE, V18, P47, DOI 10.1016/S0966-6362(02)00160-
1
FORSSBERG H, 1979, J NEUROPHYSIOL, V42, P936
Grillner S., 1981, HDB PHYSL 1, VI, P1179
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
KAWATO M, 1981, J MATH BIOL, V12, P13
KOBAYASHI M, 2000, T JPN SOC MED BIOL E, V38, P20
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
Nakanishi J, 2004, ROBOT AUTON SYST, V47, P79, DOI 10.1016/j.robot.2004.03.003
Schillings AM, 1999, BRAIN RES, V816, P480, DOI 10.1016/S0006-8993(98)01198-6
Schillings AM, 2000, J NEUROPHYSIOL, V83, P2093
TAGA G, 1995, BIOL CYBERN, V73, P97, DOI 10.1007/BF00204048
Tsuchiya K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1745
van der Linde RQ, 1999, BIOL CYBERN, V81, P227, DOI 10.1007/s004220050558
VUKOBRATOVIC M, 1990, BIPED LOCOMOTION
WINFREE A. T., 1980, GEOMETRY BIOL TIME
Yamasaki T, 2003, BIOSYSTEMS, V71, P221, DOI 10.1016/S0303-2647(03)00118-7
Yamasaki T, 2003, BIOL CYBERN, V88, P468, DOI 10.1007/s00422-003-0402-1
NR 20
TC 21
Z9 21
U1 0
U2 6
PU SPRINGER
PI NEW YORK
PA 233 SPRING ST, NEW YORK, NY 10013 USA
SN 0340-1200
EI 1432-0770
J9 BIOL CYBERN
JI Biol. Cybern.
PD NOV
PY 2006
VL 95
IS 5
BP 503
EP 515
DI 10.1007/s00422-006-0102-8
PG 13
WC Computer Science, Cybernetics; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA 094RW
UT WOS:000241257700006
PM 16969676
DA 2018-01-22
ER

PT J
AU Hoshino, K
Kawabuchi, I
AF Hoshino, Kiyoshi
Kawabuchi, Ichiro
TI Mechanism of humanoid robot arm with 7 DOFs having pneumatic actuators
SO IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND
COMPUTER SCIENCES
LA English
DT Article
DE endoskeleton structure; humanoid robot arm; pneumatic actuator;
double-acting air cylinder
AB Pneumatic pressure, which is easier to handle than hydraulic pressure and offers
high safety, is a suitable power source for a robot arm that works alongside human
beings on various types of tasks. However, pneumatic pressure is so much lower than
hydraulic pressure that an air cylinder with a sufficiently large diameter and a
sufficiently long stroke is required to obtain a large output power. In this study,
therefore, the layout of the air cylinders and the mechanisms that transmit their
power to the driven joints were investigated, followed by the development of a new
humanoid robot arm with seven degrees of freedom in which the air cylinders are
compactly incorporated. Specifically, taking advantage of the high strength of the
cylinder itself, an endoskeleton structure was devised in which the air cylinders are
built into the axes of the upper-arm and forearm joints so that almost all of the
structural material of the individual arm joints is shared with the cylinders.
Evaluation experiments in which the above robot arm was driven under I-PD control
were conducted. The results suggested that the mechanism of the robot with seven
degrees of freedom having pneumatic actuators proposed in this study is useful as a
humanoid robot arm. Quick and accurate motions were achieved with I-PD control,
which is relatively easy to handle but not inherently suited to a non-linear
actuator system.
C1 Univ Tsukuba, Tsukuba, Ibaraki 3058573, Japan.
Tech Experts Inc, Tokyo 1430015, Japan.
RP Hoshino, K (reprint author), Univ Tsukuba, Tsukuba, Ibaraki 3058573, Japan.
EM hoshino@esys.tsukuba.ac.jp
CR HOSHINO K, 2005, P 36 INT S ROB, V36, P1
JACOBSEN S, 1986, P 1986 IEEE INT C RO, P96
JACOBSEN SC, 1984, INT J ROBOT RES, V3, P21, DOI 10.1177/027836498400300402
Kawashima K., 2004, J ROBOTICS MECHATRON, V16, P8
SASAKI D, 2003, J ROBOTICS MECHATRON, V15, P534
NR 5
TC 7
Z9 7
U1 0
U2 5
PU IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG
PI TOKYO
PA KIKAI-SHINKO-KAIKAN BLDG, 3-5-8, SHIBA-KOEN, MINATO-KU, TOKYO, 105-0011,
JAPAN
SN 0916-8508
EI 1745-1337
J9 IEICE T FUND ELECTR
JI IEICE Trans. Fundam. Electron. Commun. Comput. Sci.
PD NOV
PY 2006
VL E89A
IS 11
BP 3290
EP 3297
DI 10.1093/ietfec/e89-a.11.3290
PG 8
WC Computer Science, Hardware & Architecture; Computer Science, Information
Systems; Engineering, Electrical & Electronic
SC Computer Science; Engineering
GA 112EA
UT WOS:000242507100052
DA 2018-01-22
ER

PT J
AU Guan, YH
Neo, ES
Yokoi, K
Tanie, K
AF Guan, Yisheng
Neo, Ee Sian
Yokoi, Kazuhito
Tanie, Kazuo
TI Stepping over obstacles with humanoid robots
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article; Proceedings Paper
CT IEEE/RSJ International Conference on Intelligent Robots and Systems
CY AUG 02-06, 2005
CL Edmonton, CANADA
SP IEEE, Robot Soc Japan
DE feasibility analysis; humanoid robot; motion planning; obstacle
negotiation; stepping over obstacles
AB The wide potential applications of humanoid robots require that the robots can
walk in complex environments and overcome various obstacles. To this end, we
address the problem of humanoid robots stepping over obstacles in this paper. We
focus on two aspects, which are feasibility analysis and motion planning. The
former determines whether a robot can step over a given obstacle, and the latter
discusses how to step over, if feasible, by planning appropriate motions for the
robot. We systematically examine both of these aspects. In the feasibility
analysis, using an optimization technique, we cast the problem into global
optimization models with nonlinear constraints, including collision-free and
balance constraints. The solutions to the optimization models yield answers to the
possibility of stepping over obstacles under some assumptions. The presented
approach for feasibility provides not only a priori knowledge and a database to
implement stepping over obstacles, but also a tool to evaluate and compare the
mobility of humanoid robots. In motion planning, we present an algorithm to
generate suitable trajectories of the feet and the waist of the robot using
heuristic methodology, based on the results of the feasibility analysis. We
decompose the body motion of the robot into two parts, corresponding to the lower
body and upper body of the robot, to meet the collision-free and balance
constraints. This novel planning method is adaptive to obstacle sizes, and is,
hence, oriented to autonomous stepping over by humanoid robots guided by vision or
other range finders. Its effectiveness is verified by simulations and experiments
on our humanoid platform HRP-2.
C1 Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst, Joint Japanese
French Robot Lab, Tsukuba 3058568, Japan.
Tokyo Metropolitan Univ, Dept Syst Design, Tokyo 1910065, Japan.
RP Guan, YH (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res
Inst, Joint Japanese French Robot Lab, Tsukuba 3058568, Japan.
EM y.guan@aist.go.jp; rio.neo@aist.go.jp; kazuhito.yokoi@aist.go.jp;
ktanie@cc.tmit.ac.jp
RI Yokoi, Kazuhito/K-2046-2012
OI Yokoi, Kazuhito/0000-0003-3942-2027
CR Bartels R. H., 1998, INTRO SPLINES USE CO
CHANNON PH, 1992, ROBOTICA, V10, P165, DOI 10.1017/S026357470000758X
Chestnutt J., 2003, P IEEE RAS INT C HUM
CUPEC R, 2003, P 12 INT WORKSH ROB
DENK J, 2003, P IEEE INT C ROB AUT, P1343
Gorce P, 1997, J INTELL ROBOT SYST, V18, P127, DOI 10.1023/A:1007990219631
Gorce P, 1998, 1998 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS - PROCEEDINGS, VOLS 1-3, P64, DOI 10.1109/IROS.1998.724598
Guan Y., 2005, P IEEE INT C ROB AUT, P1066
Guan Y., 2005, P IEEE INT C ROB BIO, P111
GUAN Y, 2000, P IEEE INT C ROB AUT, P3591
Guan YS, 2003, IEEE T ROBOTIC AUTOM, V19, P507, DOI 10.1109/TRA.2003.810235
HOLMSTROM K, 2004, USERS GUIDE TOMLAB 4
Huang Q, 2001, IEEE T ROBOTIC AUTOM, V17, P280, DOI 10.1109/70.938385
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
Kanehiro F, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P24, DOI 10.1109/ROBOT.2002.1013334
KANEKO K, 2004, P IEEE INT C ROB AUT, P1083
Kuffner JJ, 2002, AUTON ROBOT, V12, P105, DOI 10.1023/A:1013219111657
Lorch O, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2484, DOI 10.1109/IRDS.2002.1041642
NEO ES, 2003, P IEEE INT C ROB AUT, P1613
O'Rourke J., 1993, COMPUTATIONAL GEOMET
Roussel L, 1998, IEEE INT CONF ROBOT, P2036, DOI 10.1109/ROBOT.1998.680615
SABE K, 2004, P IEEE INT C ROB AUT, P592
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
Yamane K., 2004, ACM T GRAPH P SIGGRA, P532
NR 25
TC 21
Z9 24
U1 1
U2 4
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD OCT
PY 2006
VL 22
IS 5
BP 958
EP 973
DI 10.1109/TRO.2006.878962
PG 16
WC Robotics
SC Robotics
GA 092ET
UT WOS:000241080500008
DA 2018-01-22
ER

PT J
AU Takenaka, T
AF Takenaka, Toru
TI The control system for the Honda humanoid robot
SO AGE AND AGEING
LA English
DT Article; Proceedings Paper
CT International Symposium on Preventing Falls and Fractures in Older
People
CY JUN, 2004
CL Yokohama, JAPAN
SP Int Soc Fracture Repair
DE humanoid; robot; walk; balance; control
AB To avoid tipping over either during walking or on standing up, humans will first
push down hard on the ground with a part of the sole of the foot. Then, when the
tipping force can no longer be resisted, a change in body position or an extra step
(stepping out) may be required to stabilise the posture. Our biped robot's control
system attempts to reproduce and execute the same postural control operations
carried out by humans. In this article, we present the history of robot development
at Honda, fundamental dynamics for robots and the principles of posture control.
C1 Honda Res & Dev Co Ltd, Wako Res Ctr, Wako, Saitama 3510114, Japan.
RP Takenaka, T (reprint author), Honda Res & Dev Co Ltd, Wako Res Ctr, 8-1 Honcho,
Wako, Saitama 3510114, Japan.
EM toru_takenaka@n.f.rd.honda.co.jp
CR HIRAI K, 1998, DEV HONDA HUMANOID R
VUKOBRATOVIC M, 1975, WALKING ROBOT HUMAN
VUKOBRATOVIC M, 1972, MATH BIOSCI, P15
NR 3
TC 11
Z9 12
U1 0
U2 5
PU OXFORD UNIV PRESS
PI OXFORD
PA GREAT CLARENDON ST, OXFORD OX2 6DP, ENGLAND
SN 0002-0729
J9 AGE AGEING
JI Age Ageing
PD SEP
PY 2006
VL 35
SU 2
BP 24
EP 26
DI 10.1093/ageing/afl080
PG 3
WC Geriatrics & Gerontology
SC Geriatrics & Gerontology
GA 085SQ
UT WOS:000240625100005
PM 16926199
OA gold
DA 2018-01-22
ER

PT J
AU Baek, SM
Tachibana, D
Arai, F
Fukuda, T
Matsuno, T
AF Baek, Seung-Min
Tachibana, Daisuke
Arai, Fumihito
Fukuda, Toshio
Matsuno, Takayuki
TI The task selection mechanism for interactive robots: Application to the
intelligent life supporting system
SO INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS
LA English
DT Article
ID HUMANOID ROBOTS; COMMUNICATION; IMITATION
AB The essential challenge in future ubiquitous networks is to make information
available to people not only at any time, in any place, and in any form, but also
as the right thing, at the right time, and in the right way, by inferring the users'
situations. Several psychological experiments show that there are some associations
between each user's situations including the user's emotions and each user's task
selection. Utilizing those results, this article presents a situation-based task
selection mechanism that enables a life-supporting robot system to perform tasks
based on the user's situation. Stimulated by interactions between the robot and the
user, this mechanism constructs and updates the association between the user's
situation and tasks so that the robot can adapt to the user's behaviors related to
the robot's tasks effectively. For the user adaptation, Radial Basis Function
Networks (RBFNs) and associative learning algorithms are used. The proposed
mechanism is applied to the CRF3 (Character robot face 3) system to prove its
feasibility and effectiveness. (c) 2006 Wiley Periodicals, Inc.
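A minimal sketch of an RBFN-based mapping from a situation feature vector to task activations, with a simple associative update; the centers, weights, and feature encoding are illustrative placeholders, not the CRF3 system's actual values:

# Gaussian RBF network mapping a situation vector to task scores (illustrative).
import numpy as np

def rbf_activations(x, centers, width):
    # Gaussian radial basis functions centered on prototype situations.
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
centers = rng.random((5, 3))           # 5 prototype situations, 3 features
weights = rng.random((5, 4))           # map RBF units to 4 candidate tasks
situation = np.array([0.2, 0.8, 0.5])  # e.g., time of day, user emotion, location (assumed)

task_scores = rbf_activations(situation, centers, width=0.5) @ weights
selected_task = int(np.argmax(task_scores))
print("selected task index:", selected_task)

# A simple associative (Hebbian-like) update strengthens the link between the
# current situation and a task the user accepted.
weights[:, selected_task] += 0.1 * rbf_activations(situation, centers, 0.5)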
C1 Sungkyunkwan Univ, Suwon 440746, South Korea.
Toyoda Motor Corp, Toyota, Aichi 4718571, Japan.
Nagoya Univ, Chikusa Ku, Nagoya, Aichi 4648603, Japan.
RP Baek, SM (reprint author), Sungkyunkwan Univ, 300 Chunchun Dong, Suwon 440746,
South Korea.
EM smbaek@ece.skku.ac.kr; tatibana@robo.mein.nagoya-u.ac.jp;
arai@mein.nagoya-u.ac.jp; fukuda@mein.nagoya-u.ac.jp;
matsuno@robo.mein.nagoya-u.ac.jp
CR *AIST, 2006, ROB TECHN AIST
Andry P, 2001, IEEE T SYST MAN CY A, V31, P431, DOI 10.1109/3468.952717
ARAI F, 2003, P 2003 IEEE INT WORK
ASOH H, 1997, P INT JOINT C ART IN, P880
Atkeson C. G., 1997, P 14 INT C MACH LEAR, P12
Breazeal C, 2003, INT J HUM-COMPUT ST, V59, P119, DOI 10.1016/S1071-
5819(03)00018-1
Burgard W, 1999, ARTIF INTELL, V114, P3, DOI 10.1016/S0004-3702(99)00070-3
Dey A, 1999, GITGVU9922
Dey AK, 2001, PERS UBIQUIT COMPUT, V5, P4, DOI 10.1007/s007790170019
Edwards W. K., 2001, LNCS, V2201, P256
Ekman P., 1982, EMOTION HUMAN FACE, P178
Endres H, 1998, IEEE INT CONF ROBOT, P1779, DOI 10.1109/ROBOT.1998.677424
Fukuda T, 2004, P IEEE, V92, P1851, DOI 10.1109/JPROC.2004.835355
Hagan M. T., 1995, NEURAL NETWORK DESIG
*HOND MOT CO INC, 2005, ASIMO
JUNG MJ, 2003, P IEEE INT C ROB AUT, P250
Kanda T, 2004, HUM-COMPUT INTERACT, V19, P61, DOI 10.1207/s15327051hci1901&2_4
KANDA T, 2003, P 3 IEEE INT C HUM R
Kidd C. D., 1999, P 2 INT WORKSH COOP, P191, DOI DOI 10.1007/10705432_17
King S, 1990, P SPIE C MOB ROB BOS, V2352, P190
Kobayashi H, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P2215, DOI
10.1109/IROS.2001.976399
KOBAYASHI T, 2002, P IROS2002 SWISS FED, V3, P2286
KOZIMA H, 2002, SOCIALLY INTELLIGENT, P157
KUNIYOSHI Y, 1994, IEEE T ROBOTIC AUTOM, V10, P799, DOI 10.1109/70.338535
*MAN SCI TECHN CTR, 2003, HUM ROB PROJ
Miwa H, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2443, DOI 10.1109/IRDS.2002.1041634
Moody J, 1989, NEURAL COMPUT, V1, P281, DOI 10.1162/neco.1989.1.2.281
*NASA JOHNS SPAC C, 2006, ROBONAUT
NGAI Y, 2003, P IEEE RSJ INT C INT, P168
Nicolescu MN, 2001, IEEE T SYST MAN CY A, V31, P419, DOI 10.1109/3468.952716
Okuno HG, 2001, IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1402, DOI
10.1109/IROS.2001.977177
Pollack ME, 2003, ROBOT AUTON SYST, V44, P273, DOI 10.1016/S0921-8890(03)00077-0
POLLACK ME, 2002, P 7 INT C INT AUT SY
Pollack M., 2002, P AAAI WORKSH AUT EL, P85
SAHA D, 2003, IEEE COMPUT, V36, P25
Sakamura K., 1994, Proceedings of the 11th TRON Project International Symposium
(Cat. No.94TH8027), P146, DOI 10.1109/TRON.1994.378606
Satyanarayanan M, 2001, IEEE PERS COMMUN, V8, P10, DOI 10.1109/98.943998
SCASSELLATI B, 2001, THESIS MIT CAMBRIDGE
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schilit BN, 1994, IEEE WORKSH MOB COMP, P85
Schmidt A., 1999, P 1 INT S HANDH UB C, DOI [10.1007/3-540-48157-5_10, DOI
10.1007/3-540-48157-5_10]
Shibata T, 2001, IEEE INT CONF ROBOT, P2572, DOI 10.1109/ROBOT.2001.933010
SIMMONS R, 1998, P IROS 98 WORKSH WEB, P43
*SON CO, 2006, AIBO
Suchman L, 1987, PLANS SITUATED ACTIO
Terada K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1382
*U BONN COMP SCI D, 2006, RHINO
Want R, 1995, IEEE PERS COMMUN, V2, P28, DOI 10.1109/98.475986
WANT R, 1992, ACM T INFORM SYST, V10, P91, DOI 10.1145/128756.128759
NR 49
TC 0
Z9 0
U1 0
U2 4
PU WILEY-BLACKWELL
PI HOBOKEN
PA 111 RIVER ST, HOBOKEN 07030-5774, NJ USA
SN 0884-8173
EI 1098-111X
J9 INT J INTELL SYST
JI Int. J. Intell. Syst.
PD SEP
PY 2006
VL 21
IS 9
BP 973
EP 1004
DI 10.1002/int.20172
PG 32
WC Computer Science, Artificial Intelligence
SC Computer Science
GA 073WR
UT WOS:000239773900007
DA 2018-01-22
ER

PT J
AU Hotate, K
AF Hotate, Kazuo
TI Fiber sensor technology today
SO JAPANESE JOURNAL OF APPLIED PHYSICS PART 1-REGULAR PAPERS BRIEF
COMMUNICATIONS & REVIEW PAPERS
LA English
DT Article; Proceedings Paper
CT 11th Microoptics Conference
CY OCT 30-NOV 02, 2005
CL Tokyo, JAPAN
SP Japan Soc Appl Phys
DE fiber optic sensors; fiber-optic gyroscopes; distributed sensing;
fiber-optic nerve systems
ID OPTICAL COHERENCE FUNCTION; STRESS-LOCATION MEASUREMENT; CONTINUOUS-WAVE
TECHNIQUE; CORRELATION DOMAIN ANALYSIS; SPATIAL-RESOLUTION;
STRAIN-MEASUREMENT; MULTIPLEXING TECHNIQUE; MEASUREMENT RANGE; SENSING
SYSTEM; WAVELENGTH
AB Fiber sensor technologies are overviewed. Since the early 1970s, this field has
been developed, on the basis of the same devices and photonic principles as fiber
communication technologies. Besides simple configurations, in which the fiber acts
only as a data transmission line, sophisticated configurations have also been
developed, in which the fiber is used as a device to realize unique sensing
mechanisms. The fiber optic gyroscope (FOG) is a good example, and has been
developed, for example, for navigation and/or attitude control applications. Compared
with traditional absolute rotation sensors, for example spinning-mass gyroscopes,
the FOG has advantages such as a short warm-up time, light weight, and easy
handling. A Japanese satellite, which was launched in August 2005 with a mission to
observe the aurora, is controlled with a FOG. The FOG has also been used in
consumer applications, such as the camera stabilizer, radio-controlled (RC)
helicopter navigation, and the control of humanoid robots. Recently, distributed
and multiplexed sensing schemes, in particular, have been studied and developed, in
which a long fiber acts like a "nerve" for feeling the strain and/or the
temperature distribution along the fiber. Performances of artificial nerve systems
have markedly improved within the last couple of years, in spatial resolution and
measurement speed. By embedding the "fiber-optic nerve system" in aircraft wings,
bridges and tall buildings, these materials and structures can sense damage to
prevent disasters.
C1 Univ Tokyo, Sch Engn, Dept Elect Engn, Bunkyo Ku, Tokyo 1138656, Japan.
RP Hotate, K (reprint author), Univ Tokyo, Sch Engn, Dept Elect Engn, Bunkyo Ku, 7-
3-1 Hongo, Tokyo 1138656, Japan.
EM hotate@sagnac.t.u-tokyo.ac.jp
CR Alahbabi MN, 2005, J OPT SOC AM B, V22, P1321, DOI 10.1364/JOSAB.22.001321
BAO X, 1993, OPT LETT, V18, P1561, DOI 10.1364/OL.18.001561
Childers B. A., 2002, Proceedings of the SPIE - The International Society for
Optical Engineering, V4578, P19, DOI 10.1117/12.456077
DAKIN J, 1997, OPTICAL FIBER SENSOR, V4
Enyama M, 2004, P SOC PHOTO-OPT INS, V5589, P144, DOI 10.1117/12.571357
Enyama M, 2005, MEAS SCI TECHNOL, V16, P977, DOI 10.1088/0957-0233/16/4/009
HE Z, 2005, P SOC PHOTO-OPT INS, V6004, P65
He ZY, 2002, J LIGHTWAVE TECHNOL, V20, P1715, DOI 10.1109/JLT.2002.802205
HORIGUCHI T, 1997, OPTICAL FIBER SENSOR, V4, P309
Hotate K, 2000, IEICE T ELECTRON, VE83C, P405
HOTATE K, 1980, ELECTRON LETT, V16, P941, DOI 10.1049/el:19800670
Hotate K, 2005, IEEE/LEOS Optical MEMs 2005: International Conference on Optical
MEMs and Their Applications, P5, DOI 10.1109/OMEMS.2005.1540048
Hotate K, 2005, P SOC PHOTO-OPT INS, V5855, P591, DOI 10.1117/12.624279
Hotate K, 2005, P SOC PHOTO-OPT INS, V5855, P62, DOI 10.1117/12.623429
Hotate K, 2005, JPN J APPL PHYS 2, V44, pL1030, DOI 10.1143/JJAP.44.L1030
Hotate K, 2003, P SOC PHOTO-OPT INS, V5272, P157, DOI 10.1117/12.514997
Hotate K, 1997, OPT FIBER TECHNOL, V3, P356, DOI 10.1006/ofte.1997.0230
Hotate K, 2004, IEEE PHOTONIC TECH L, V16, P578, DOI 10.1109/LPT.2003.821235
Hotate K, 2002, MEAS SCI TECHNOL, V13, P1746, DOI 10.1088/0957-0233/13/11/311
Hotate K, 2004, MEAS SCI TECHNOL, V15, P148, DOI 10.1088/0957-0233/15/1/021
Hotate K, 2003, IEEE PHOTONIC TECH L, V15, P272, DOI 10.1109/LPT.2002.806107
Hotate K, 2002, IEEE PHOTONIC TECH L, V14, P179, DOI 10.1109/68.980502
Hotate K, 2001, IEICE T ELECTRON, VE84C, P1823
Hotate K, 2001, IEEE PHOTONIC TECH L, V13, P233, DOI 10.1109/68.914331
HOTATE K, 2000, SENSORS UPDATE, V4, P131
Hotate K., 2005, P 17 INT C OPT FIB S, P184
HOTATE K, 2005, 2005 OPT AMPL THEIR
Hotate K., 1997, OPTICAL FIBER SENSOR, P167
HOTATE K, 2000, TRENDS OPTICAL NONDI, P487
Ishii H, 1996, IEEE J QUANTUM ELECT, V32, P433, DOI 10.1109/3.485394
JAROSZEWICZ LR, 2005, P 17 INT C OPT FIB S, P194
Kannou M., 2003, P 16 INT C OPT FIB S, P454
KERSEY AD, 1997, OPTICAL FIBER SENSOR, V4, P369
KUMAGAI T, 2002, P 15 INT C OPT FIB S, P35
Kurosawa K, 1997, OPT REV, V4, P38
Lefevre H., 1993, FIBER OPTIC GYROSCOP
NASU J, 2002, P 15 INT C OPT FIB S, P15
Nikles M, 1996, OPT LETT, V21, P758, DOI 10.1364/OL.21.000758
Ong S. S. L., 2003, P CLEO PAC RIM 2003
ONG SSL, 2003, 16 INT C OPT FIB SEN, P462
ONG SSL, 2003, 16 INT C OPT FIB SEN, P458
Saida T, 1997, IEEE PHOTONIC TECH L, V9, P484, DOI 10.1109/68.559396
Sanders G.A., 2002, P 15 OPT FIB SENS C, P31
Sanders S. J., 2002, P 15 INT C OPT FIB S, P5
Shlyagin M. G., 2002, Proceedings of the SPIE - The International Society for
Optical Engineering, V4578, P8, DOI 10.1117/12.456068
Song KY, 2006, IEEE PHOTONIC TECH L, V18, P499, DOI 10.1109/LPT.2005.863624
SONG KY, 2006, 2006 OFC NFOEC M AN
SONG KY, 2006, CLEO QELS 2006 LONG
Tanaka M, 2002, IEEE PHOTONIC TECH L, V14, P675, DOI 10.1109/68.998722
THEVENAZ L, 2000, TRENDS OPTICAL NONDE, P447
USAI R, 2002, P 15 INT C OPT FIB S, P11
Valente LCG, 2003, IEEE SENS J, V3, P31, DOI 10.1109/JSEN.2003.810106
2005, P 17 INT C OPT FIB S
2003, P 16 INT C OPT FIB S
2002, P 15 INT C OPT FIB S, P1
NR 55
TC 15
Z9 15
U1 0
U2 5
PU JAPAN SOC APPLIED PHYSICS
PI TOKYO
PA KUDAN-KITA BUILDING 5TH FLOOR, 1-12-3 KUDAN-KITA, CHIYODA-KU, TOKYO,
102-0073, JAPAN
SN 0021-4922
J9 JPN J APPL PHYS 1
JI Jpn. J. Appl. Phys. Part 1 - Regul. Pap. Brief Commun. Rev. Pap.
PD AUG
PY 2006
VL 45
IS 8B
BP 6616
EP 6625
DI 10.1143/JJAP.45.6616
PG 10
WC Physics, Applied
SC Physics
GA 081CQ
UT WOS:000240295200010
DA 2018-01-22
ER

PT J
AU Ogino, M
Kikuchi, M
Asada, M
AF Ogino, Masaki
Kikuchi, Masaaki
Asada, Minoru
TI Visuo-motor learning for face-to-face pass between heterogeneous
humanoids
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article
DE humanoid; optic flow; face-to-face pass; sensorimotor map
AB Humanoid behavior generation is one of the most formidable issues due to its
complicated structure with many degrees of freedom. This paper proposes a
controller for a humanoid to cope with this issue. A given task is decomposed into
a sequence of modules first, each of which consists of a set of module primitives
that have control parameters to realize the appropriate primitive motions. Then,
these parameters are learned by sensorimotor maps between visual information (flow)
and motor commands. The controller accomplishes a given task by selecting a module,
a module primitive in the selected module, and its appropriate control parameters
learned in advance. A face-to-face ball pass in a RoboCup context is chosen as an
example task. (To the best of our knowledge, this is the first trial.) The
corresponding modules are approaching a ball, kicking a ball to the opponent, and
trapping a ball coming to the player. In order to show the validity, the method is
applied to two different humanoids, independently, and they succeed in realizing
the face-to-face pass for more than three rounds. (C) 2006 Elsevier B.V. All rights
reserved.
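A minimal sketch of a sensorimotor map of the kind described above, assuming (optic-flow feature, motor parameter) pairs stored during learning and recalled by nearest neighbor; the feature vectors and motor parameters are hypothetical placeholders, not the paper's module primitives:

import numpy as np

class SensorimotorMap:
    def __init__(self):
        self.flows, self.commands = [], []

    def add(self, flow, command):
        # Record an observed flow vector together with the command that produced it.
        self.flows.append(np.asarray(flow, dtype=float))
        self.commands.append(np.asarray(command, dtype=float))

    def recall(self, flow):
        # Return the motor parameters whose stored flow is closest to the query.
        d = [np.linalg.norm(f - flow) for f in self.flows]
        return self.commands[int(np.argmin(d))]

m = SensorimotorMap()
m.add([0.0, 1.0], [10.0, 0.2])   # e.g., ball approaching straight -> trap parameters (assumed)
m.add([0.5, 0.8], [12.0, -0.1])  # e.g., ball drifting right -> adjusted kick parameters (assumed)
print(m.recall([0.4, 0.9]))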
C1 Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst, Osaka, Japan.
Osaka Univ, Grad Sch Engn, HANDAI Frontier Res Ctr, Osaka, Japan.
RP Ogino, M (reprint author), Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst,
2-1 YamadaOka, Osaka, Japan.
EM ogino@er.ams.eng.osaka-u.ac.jp
CR Fitzpatrick P, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P2161
FUJIWARA K, 2004, P 2004 IEEE INT C RO, P1077
GRILLNER S, 1985, SCIENCE, V228, P143, DOI 10.1126/science.3975635
KAJITA S, 2004, P IEEE INT C ROB AUT, P629
Kitano H, 2000, ADV ROBOTICS, V13, P723
Kohonen T., 1989, SELF ORG ASS MEMORY
Kuniyoshi Y., 2004, INT J HUM ROBOT, V1, P497
MacDorman KF, 2001, IEEE INT CONF ROBOT, P1968, DOI 10.1109/ROBOT.2001.932896
MURASE Y, 2001, P 19 C ROB SOC JAP, P789
NAKAMURA T, 1995, P INT JOINT C ART IN, P126
NISHIWAKI K, 2003, P 3 IEEE INT C HUM R
Ogino M, 2004, ADV ROBOTICS, V18, P677, DOI 10.1163/1568553041719519
Seara JF, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P31, DOI 10.1109/IRDS.2002.1041357
Taga G, 1998, BIOL CYBERN, V78, P9, DOI 10.1007/s004220050408
Tsuchiya K., 2001, P S INT AUT VEH, P271
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
NR 16
TC 0
Z9 0
U1 0
U2 1
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD JUN 30
PY 2006
VL 54
IS 6
BP 419
EP 427
DI 10.1016/j.robot.2006.03.001
PG 9
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 056IM
UT WOS:000238515700001
DA 2018-01-22
ER

PT J
AU Harada, K
Kajita, S
Kaneko, K
Hirukawa, H
AF Harada, Kensuke
Kajita, Shuuji
Kaneko, Kenji
Hirukawa, Hirohisa
TI Dynamics and balance of a humanoid robot during manipulation tasks
SO IEEE TRANSACTIONS ON ROBOTICS
LA English
DT Article; Proceedings Paper
CT IEEE/RSJ International Conference on Intelligent Robots and Systems
CY OCT 27-31, 2003
CL LAS VEGAS, NV
SP IEEE Robot & Automat Soc, IEEE Ind Elect Soc, Robot Soc Japan, Soc Instruments &
Control Engineers, New Technol Fdn
DE dynamical balance; humanoid robot; object manipulation; zero-moment
point (ZMP)
ID FORCE-CLOSURE; STABILITY
AB In this paper, we analyze the balance of a humanoid robot during manipulation
tasks. By defining the generalized zero-moment point (GZMP), we obtain the region
of the GZMP for keeping the balance of the robot during manipulation. During
manipulation, the convex hull of the supporting points forms the 3-D convex
polyhedron. The region of the GZMP is obtained by considering the infinitesimal
displacement and the moment about the edges of the convex hull. We show that we can
determine whether or not the robot may keep balance for several styles of
manipulation tasks, such as pushing and pulling an object. The effectiveness of our
proposed method is demonstrated by simulation.
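For background, the conventional ZMP that the GZMP generalizes is commonly written (neglecting the links' angular momentum; this is a standard textbook form, not the paper's GZMP derivation) as

\[
x_{\mathrm{ZMP}} = \frac{\sum_i m_i (\ddot{z}_i + g)\, x_i - \sum_i m_i \ddot{x}_i z_i}{\sum_i m_i (\ddot{z}_i + g)}, \qquad
y_{\mathrm{ZMP}} = \frac{\sum_i m_i (\ddot{z}_i + g)\, y_i - \sum_i m_i \ddot{y}_i z_i}{\sum_i m_i (\ddot{z}_i + g)},
\]

where $m_i$ and $(x_i, y_i, z_i)$ are the mass and position of link $i$ and $g$ is gravitational acceleration. Keeping this point inside the support polygon is the usual balance criterion; the GZMP extends the idea to manipulation, where additional contacts make the support region a 3-D convex polyhedron.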
C1 Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki 3058568, Japan.
RP Harada, K (reprint author), Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki
3058568, Japan.
EM kensuke.harada@aist.go.jp
RI KANEKO, Kenji/M-5360-2016; Hirukawa, Hirohisa/B-4209-2017; Kajita,
Shuuji/M-5010-2016
OI KANEKO, Kenji/0000-0002-1888-8787; Hirukawa,
Hirohisa/0000-0001-5779-011X; Kajita, Shuuji/0000-0001-8188-2209
CR Fujita M, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P960
Goswami A, 1999, INT J ROBOT RES, V18, P523, DOI 10.1177/02783649922066376
Harada K, 2002, J ROBOTIC SYST, V19, P133, DOI 10.1002/rob.10028
Harada K., 2004, P IEEE RSJ INT C INT, P1167
HARADA K, 2003, P IEEE INT C ROB AUT, P1627
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
HWANG Y, 2003, P IEEE RSJ INT C INT, P1901
Inoue K., 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE International
Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065),
P2217, DOI 10.1109/ROBOT.2000.846357
KAGAMI S, 2000, P IEEE RSJ INT C INT, P1559
Kaneko K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2431, DOI 10.1109/IRDS.2002.1041632
KITAGAWA T, 1999, P 17 ANN C RSJ, P1191
LYNCH KM, 1992, 1992 IEEE INTERNATIONAL CONF ON ROBOTICS AND AUTOMATION :
PROCEEDINGS, VOLS 1-3, P2269, DOI 10.1109/ROBOT.1992.219921
MATTIKALLI R, 1995, IEEE T ROBOTIC AUTOM, V11, P374, DOI 10.1109/70.388779
MCGHEE R B, 1968, Mathematical Biosciences, V3, P331, DOI 10.1016/0025-
5564(68)90090-4
Messuri D. A., 1985, IEEE Journal of Robotics and Automation, VRA-1, P132
NGUYEN VD, 1988, INT J ROBOT RES, V7, P3, DOI 10.1177/027836498800700301
Papadopoulos EG, 1996, IEEE INT CONF ROBOT, P3111, DOI 10.1109/ROBOT.1996.509185
Salisbury J., 1982, ASME, V104, P33
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
Yokoyama K., 2003, P IEEE INT C ROB AUT, P2985
Yoneda K, 1996, IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING
WITH DYNAMIC WORLDS, VOLS 1-3, P870, DOI 10.1109/IROS.1996.571067
TAKENAKA T, 1998, Patent No. H10230485
TAKENAKA, 2001, Patent No. 3132156
NR 24
TC 51
Z9 53
U1 0
U2 5
PU IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
PI PISCATAWAY
PA 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
SN 1552-3098
EI 1941-0468
J9 IEEE T ROBOT
JI IEEE Trans. Robot.
PD JUN
PY 2006
VL 22
IS 3
BP 568
EP 575
DI 10.1109/TRO.2006.870649
PG 8
WC Robotics
SC Robotics
GA 054ES
UT WOS:000238360200015
DA 2018-01-22
ER

PT J
AU Solis, J
Chida, K
Suefuji, K
Takanishi, A
AF Solis, Jorge
Chida, Keisuke
Suefuji, Kei
Takanishi, Atsuo
TI The development of the anthropomorphic flutist robot at Waseda
University
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE humanoid robots; human/robot interaction; music; flute
ID WF-4
AB The development of the flutist robot at Waseda University since 1990 has enabled
a better understanding of the motor control functions required for playing the
flute. Moreover, it has introduced novel ways of interaction between human beings
and humanoid robots such as: performing a musical score together in real time and
transferring skills to flutist beginners. In this paper, the development of the
Waseda Flutist Robot No. 4 Refined (WF-4R) is presented. The mechanical design of
the components of the robot and the control architecture are detailed. In order to
efficiently control and coordinate the motion of each of the simulated organs of
the robot, an algorithm was proposed to extract the features required to perform a
score based on human performance. This algorithm was divided into two phases: sound
calibration and music score performance. Finally, an experimental setup was done to
verify the effectiveness of each of the phases by analyzing the time and frequency
domain responses from recordings of the robot performances. The WF-4R is able to
perform from musical scores in a manner quite similar to that of a human player.
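As a hedged illustration of the kind of frequency-domain analysis mentioned above (the record does not give the pitch-extraction details of the sound-calibration phase), a minimal Python sketch of estimating a recorded note's fundamental frequency might look like:

import numpy as np

def fundamental_frequency(samples, sample_rate):
    # Windowed magnitude spectrum; take the strongest non-DC bin as the fundamental.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[int(np.argmax(spectrum[1:])) + 1]

sr = 44100
t = np.arange(0, 0.1, 1.0 / sr)
note = np.sin(2 * np.pi * 440.0 * t)   # synthetic A4 test tone
print(fundamental_frequency(note, sr))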
C1 Waseda Univ, Dept Engn Mech, Tokyo, Japan.
Res Fellow Japanese Soc Promot Sci, Tokyo, Japan.
Waseda Univ, Grad Sch Sci & Engn, Tokyo, Japan.
Waseda Univ, Humaniod Robot Inst, Tokyo, Japan.
RP Solis, J (reprint author), Waseda Univ, Dept Engn Mech, Tokyo, Japan.
EM solis@kurenai.waseda.jp; takanisi@waseda.jp
CR Ando Y., 1970, J ACOUST SOC JAPAN, V26, P297
Chida K, 2004, IEEE INT CONF ROBOT, P152, DOI 10.1109/ROBOT.2004.1307144
GARTNER J, 1980, VIBRATO UNTER BESOND, P168
Isoda S, 2003, IEEE INT CONF ROBOT, P3582
KAPEK K, 1990, RADICAL CTR, P416
KATO J, 1972, P 4 INT S EXP CONTR, P458
ROSHEIM M, 1994, ROBOT EVOLUTION DEV, P27
SHEPARD M, 1980, LOVE YOUR FLUTE, P112
Solis J, 2004, IEEE INT CONF ROBOT, P146, DOI 10.1109/ROBOT.2004.1307143
SOLIS J, 2005, IEEE INT C INT ROB S, P1929
SOLIS J, 2005, INT WORKSH ROB HUM I, P450
Takahashi A, 1996, PROCEEDINGS OF THE ASP-DAC '97 - ASIA AND SOUTH PACIFIC
DESIGN AUTOMATION CONFERENCE 1997, P37, DOI 10.1109/IROS.1996.570624
TAREK MS, 2003, J INTELL ROBOT SYST, V38, P197
VAUCANSON J, 1985, MECANISME FLATEUR AU, P78
NR 14
TC 22
Z9 22
U1 1
U2 5
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD JUN
PY 2006
VL 3
IS 2
BP 127
EP 151
DI 10.1142/S0219843606000709
PG 25
WC Robotics
SC Robotics
GA 214SU
UT WOS:000249759200001
DA 2018-01-22
ER

PT J
AU Ogino, M
Toichi, H
Yoshikawa, Y
Asada, M
AF Ogino, Masaki
Toichi, Hideki
Yoshikawa, Yuichiro
Asada, Minoru
TI Interaction rule learning with a human partner based on an imitation
faculty with a simple visuo-motor mapping
SO ROBOTICS AND AUTONOMOUS SYSTEMS
LA English
DT Article; Proceedings Paper
CT IEEE International Conference on Robotics and Automation (ICRA)
CY APR 18-22, 2005
CL Barcelona, SPAIN
SP IEEE
DE imitation learning; communication; humanoid; ISOMAP
AB Imitation has been receiving increasing attention from the viewpoint of not
simply generating new motions but also the emergence of communication. This paper
proposes a system for a humanoid who obtains new motions through learning the
interaction rules with a human partner based on the assumption of the mirror
system. First, a humanoid learns the correspondence between its own posture and the
partner's one on the ISOMAPs supposing that a human partner imitates the robot
motions. Based on this correspondence, the robot can easily transfer the observed
partner's gestures to its own motion. Then, this correspondence enables a robot to
acquire the new motion primitive for the interaction. Furthermore, through this
process, the humanoid learns an interaction rule that controls gesture turn-taking.
The preliminary results and future issues are given. (c) 2006 Elsevier B.V. All
rights reserved.
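A minimal sketch of an ISOMAP-based posture correspondence of the kind described above, assuming paired (robot posture, observed partner posture) samples collected while the partner imitates the robot; the data, dimensions, and pairing rule are placeholders, not the paper's setup:

import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
robot_postures   = rng.random((100, 12))                                      # joint-angle vectors (assumed)
partner_postures = robot_postures[:, :8] + 0.05 * rng.standard_normal((100, 8))  # observed imitations (assumed)

# Low-dimensional ISOMAP coordinates of the robot's own postures.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(robot_postures)

def correspondence(observed_partner_posture):
    # Nearest paired sample gives both the corresponding robot posture and its
    # ISOMAP coordinate, which can then be used to generate the mirrored motion.
    d = np.linalg.norm(partner_postures - observed_partner_posture, axis=1)
    i = int(np.argmin(d))
    return robot_postures[i], embedding[i]

posture, coord = correspondence(partner_postures[5])
print(coord)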
C1 Osaka Univ, Grad Sch Engn, Dept Adap Machine Syst, Suita, Osaka 5650871, Japan.
Osaka Univ, Grad Sch Engn, HANDAI Frontier Res Ctr, Suita, Osaka 5650871, Japan.
RP Ogino, M (reprint author), Osaka Univ, Grad Sch Engn, Dept Adap Machine Syst, 2-
1 Yamadaoka, Suita, Osaka 5650871, Japan.
EM ogino@er.ams.eng.osaka-u.ac.jp
CR Billard A, 2004, ROBOT AUTON SYST, V47, P65, DOI 10.1016/j.robot.2004.03.001
Bluethmann W, 2003, AUTON ROBOT, V14, P179, DOI 10.1023/A:1022231703061
KUNIYOSHI Y, 2003, P IEEE INT C ROB AUT, P3132
Pavlovic VI, 1997, IEEE T PATTERN ANAL, V19, P677, DOI 10.1109/34.598226
Rizzolatti G, 2004, ANNU REV NEUROSCI, V27, P169, DOI
10.1146/annurev.neuro.27.070203.144230
Suda R, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P1102, DOI 10.1109/IRDS.2002.1043878
Tenenbaum JB, 2000, SCIENCE, V290, P2319, DOI 10.1126/science.290.5500.2319
NR 7
TC 12
Z9 12
U1 0
U2 1
PU ELSEVIER SCIENCE BV
PI AMSTERDAM
PA PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
SN 0921-8890
EI 1872-793X
J9 ROBOT AUTON SYST
JI Robot. Auton. Syst.
PD MAY 31
PY 2006
VL 54
IS 5
BP 414
EP 418
DI 10.1016/j.robot.2006.01.005
PG 5
WC Automation & Control Systems; Computer Science, Artificial Intelligence;
Robotics
SC Automation & Control Systems; Computer Science; Robotics
GA 045WX
UT WOS:000237774500009
DA 2018-01-22
ER

PT J
AU Yokoi, K
Nakashima, K
Kobayashi, M
Mihune, H
Hasunuma, H
Yanagihara, Y
Ueno, T
Gokyuu, T
Endou, K
AF Yokoi, Kazuhito
Nakashima, Katsumi
Kobayashi, Masami
Mihune, Humisato
Hasunuma, Hitoshi
Yanagihara, Yoshitaka
Ueno, Takao
Gokyuu, Takuya
Endou, Ken
TI A tele-operated humanoid operator
SO INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
LA English
DT Article; Proceedings Paper
CT 9th International Symposium on Experimental Robotics (ISER)
CY JUN 18-21, 2004
CL Singapore, SINGAPORE
DE humanoid robot; tele-operation; remote control; industrial vehicle;
backhoe; master arm; master foot
AB This is the first successful trial of remotely controlling a humanoid robot to
drive an industrial vehicle in lieu of a human operator. These results were achieved
through the development of three technologies: 1) remote-control technology to
instruct the humanoid to perform total-body movements under remote control; 2) a
remote-control system to execute tasks, together with protection technology to protect the
humanoid against the shock and vibrations of its operating seat and against
influences of the natural environment, such as rain and dust; and 3) full-body
operation-control technology to autonomously control the humanoid's total-body
movements to prevent the robot from falling over. The humanoid has promising
application potential for restoration work in environments impacted by catastrophes
and in civil-engineering and construction-project sites where it can work safely
and efficiently.
C1 Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst, Tsukuba, Ibaraki
3058568, Japan.
Kawasaki Heavy Ind Co Ltd, Syst Technol Dev Ctr, Akashi, Hyogo 6738666, Japan.
Tokyu Construct Co Ltd, Inst Technol, Kanagawa 2291124, Japan.
RP Yokoi, K (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res
Inst, 1-1-1 Umezono, Tsukuba, Ibaraki 3058568, Japan.
EM Kazuhito.Yokoi@aist.go.jp; nakasima@tech.khi.co.jp;
kobayasi@tech.khi.co.jp; mifune@tech.khi.co.jp; hasunuma@tech.khi.co.jp;
yanagihara.yoshitaka@tokyu-cnst.co.jp; ueno.takao@tokyu-cnst.co.jp;
gokyuu.takuya@tokyu-cnst.co.jp; endou.ken@tokyu-cnst.co.jp
RI Yokoi, Kazuhito/K-2046-2012
OI Yokoi, Kazuhito/0000-0003-3942-2027
CR Ambrose RO, 2000, IEEE INTELL SYST APP, V15, P57, DOI 10.1109/5254.867913
Burdea G. C., 1996, FORCE TOUCH FEEDBACK
FERRELL WR, 1967, IEEE SPECTRUM, V4, P81, DOI 10.1109/MSPEC.1967.5217126
Gambao E, 2002, IEEE ROBOT AUTOM MAG, V9, P4, DOI 10.1109/MRA.2002.993150
Gienger M, 2001, IEEE INT CONF ROBOT, P4140, DOI 10.1109/ROBOT.2001.933265
Hamel WR, 2001, IEEE INT CONF ROBOT, P638, DOI 10.1109/ROBOT.2001.932622
Hasunuma H, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P2246, DOI 10.1109/ROBOT.2002.1013566
HASUNUMA H, 2000, P INT C MACH AUT, P567
Hirai S., 1990, Proceedings. IROS '90. IEEE International Workshop on
Intelligent Robots and Systems '90. Towards a New Frontier of Applications (Cat.
No.90TH0332-7), P349, DOI 10.1109/IROS.1990.262410
HIROSE M, 2001, P ADV SCI I 2001, P1
Hollerbach J.M., 1985, ROBOTICS RES, P215
ISHIDA T, 2001, P 2001 IEEE RSJ INT
JARVIS RA, 1999, P FIELD SERV ROB, P238
Kagami S, 2001, IEEE INT CONF ROBOT, P2431, DOI 10.1109/ROBOT.2001.932986
Kaneko K, 2004, IEEE INT CONF ROBOT, P1083, DOI 10.1109/ROBOT.2004.1307969
Salcudean SE, 1998, IEEE INT CONF ROBOT, P133, DOI 10.1109/ROBOT.1998.676337
Thorpe C. E., 1990, VISION NAVIGATION CA
Yamaguchi J, 1999, ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND
AUTOMATION, VOLS 1-4, PROCEEDINGS, P368, DOI 10.1109/ROBOT.1999.770006
Yokoi K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1117
Yokoi K, 2004, INT J ROBOT RES, V23, P351, DOI 10.1177/0278364904042194
Yokoi K., 2004, INT J HUM ROBOT, V1, P409
NR 21
TC 11
Z9 12
U1 2
U2 8
PU SAGE PUBLICATIONS LTD
PI LONDON
PA 1 OLIVERS YARD, 55 CITY ROAD, LONDON EC1Y 1SP, ENGLAND
SN 0278-3649
EI 1741-3176
J9 INT J ROBOT RES
JI Int. J. Robot. Res.
PD MAY-JUN
PY 2006
VL 25
IS 5-6
BP 593
EP 602
DI 10.1177/0278364906065900
PG 10
WC Robotics
SC Robotics
GA 051FI
UT WOS:000238146600012
DA 2018-01-22
ER

PT J
AU Ito, M
Noda, K
Hoshino, Y
Tani, J
AF Ito, Masato
Noda, Kuniaki
Hoshino, Yukiko
Tani, Jun
TI Dynamic and interactive generation of object handling behaviors by a
small humanoid robot using a dynamic neural network model
SO NEURAL NETWORKS
LA English
DT Article; Proceedings Paper
CT 19th Annual Conference on Neural Information Processing Systems (NIPS
05)
CY DEC, 2005
CL Vancouver, CANADA
DE learning of object handling behavior; dynamical systems approach;
recurrent neural network
ID CHIMPANZEES PAN-TROGLODYTES; MANIPULATORY ACTIONS; IMITATION; SYSTEMS;
TIME; TASK
AB This study presents experiments on the learning of object handling behaviors by
a small humanoid robot using a dynamic neural network model, the recurrent neural
network with parametric bias (RNNPB). The first experiment showed that after the
robot learned different types of ball handling behaviors using human direct
teaching, the robot was able to generate adequate ball handling motor sequences
situated to the relative position between the robot's hands and the ball. The same
scheme was applied to a block handling learning task where it was shown that the
robot can switch among different learned block handling sequences, situated to the
ways of interaction by human supporters. Our analysis showed that entrainment of
the internal memory structures of the RNNPB through the interactions of the objects
and the human supporters is the essential mechanism for those observed situated
behaviors of the robot. (c) 2006 Elsevier Ltd. All rights reserved.
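A minimal sketch of the RNNPB idea, assuming an Elman-style recurrent network whose input is augmented with a fixed parametric-bias (PB) vector so that different PB values select different learned dynamics; the layer sizes and random weights are placeholders, not the trained model:

import numpy as np

rng = np.random.default_rng(0)
IN, HID, OUT, PB = 4, 16, 4, 2
W_in  = rng.standard_normal((HID, IN + PB + HID)) * 0.1
W_out = rng.standard_normal((OUT, HID)) * 0.1

def rollout(x0, pb, steps=20):
    # Closed-loop generation: each prediction is fed back as the next input,
    # while the PB vector stays constant and biases the network's dynamics.
    h, x, outputs = np.zeros(HID), np.asarray(x0, float), []
    for _ in range(steps):
        h = np.tanh(W_in @ np.concatenate([x, pb, h]))
        x = np.tanh(W_out @ h)
        outputs.append(x)
    return np.array(outputs)

seq_a = rollout([0.1, 0.0, 0.0, 0.0], pb=np.array([ 1.0, -1.0]))
seq_b = rollout([0.1, 0.0, 0.0, 0.0], pb=np.array([-1.0,  1.0]))
print(seq_a.shape, seq_b.shape)   # two distinct motor sequences from the same initial state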
C1 Sony Intelligence Dynam Labs Inc, Shinagawa Ku, Tokyo 1410022, Japan.
RIKEN, Brain Sci Inst, Wako, Saitama 3510198, Japan.
RP Ito, M (reprint author), Sony Intelligence Dynam Labs Inc, Shinagawa Ku,
Takanawa Muse Bldg 4F,3-14-13, Tokyo 1410022, Japan.
EM masato@idl.sony.co.jp; noda@idl.sony.co.jp; yukiko@idl.sony.co.jp;
tani@brain.riken.go.jp
CR Andry P, 2001, IEEE T SYST MAN CY A, V31, P431, DOI 10.1109/3468.952717
Baron-Cohen S., 1996, MINDBLINDNESS ESSAY
BEER RD, 1995, ARTIF INTELL, V72, P173, DOI 10.1016/0004-3702(94)00005-L
Bianco R, 2004, ADAPT BEHAV, V12, P37, DOI 10.1177/105971230401200102
Billard A, 2002, FROM ANIM ANIMAT, P281
CALINON S, 2005, P IEEE INT C ROB AUT
Nehaniv C., 2002, IMITATION ANIMALS AR
ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1016/0364-0213(90)90002-E
Ijspeert A. J., 2003, ADV NEURAL INFORM PR, V15, P1547
Ito M, 2004, ADAPT BEHAV, V12, P93, DOI 10.1177/105971230401200202
ITO M, 2004, P 11 INT C NEUR INF, P592
Ito S, 2003, SCIENCE, V302, P120, DOI 10.1126/science.1087847
JORDAN MI, 1992, COGNITIVE SCI, V16, P307, DOI 10.1016/0364-0213(92)90036-T
Kosuge K, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P3459
MILNER B, 1963, ARCH NEUROL-CHICAGO, V9, P90, DOI
10.1001/archneur.1963.00460070100010
Maitland F., 1994, J CONTIN EDUC HEALTH, V14, P4, DOI DOI 10.1002/CHP.4750140102
Myowa-Yamakoshi M, 2000, J COMP PSYCHOL, V114, P381, DOI 10.1037/0735-
7036.114.4.381
Myowa-Yamakoshi M, 1999, J COMP PSYCHOL, V113, P128, DOI 10.1037//0735-
7036.113.2.128
Nakahara K, 2002, SCIENCE, V295, P1532, DOI 10.1126/science.1067653
Nehaniv C, 2001, CYBERNET SYST, V32, P1, DOI 10.1080/019697201300001795
Ogata T, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P162
POLLACK JB, 1991, MACH LEARN, V7, P227, DOI 10.1023/A:1022651113306
Rumelhart D. E., 1986, PARALLEL DISTRIBUTED
Schaal S, 1999, TRENDS COGN SCI, V3, P233, DOI 10.1016/S1364-6613(99)01327-3
Schaal S, 2003, PHILOS T R SOC B, V358, P537, DOI 10.1098/rstb.2002.1258
Schaal S, 1996, J MOTOR BEHAV, V28, P165, DOI 10.1080/00222895.1996.9941743
Sugita Y, 2005, ADAPT BEHAV, V13, P33, DOI 10.1177/105971230501300102
Tani J, 1999, NEURAL NETWORKS, V12, P1131, DOI 10.1016/S0893-6080(99)00060-X
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
TANI J, 1995, BIOL CYBERN, V72, P365, DOI 10.1007/s004220050138
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
Tani J., 2005, P 2 INT WORKSH MAN M, P123
VANGELDER T, 1998, BEHAV BRAIN SCI, V150, P45
Weigend AS, 1995, INT J NEURAL SYST, V6, P373, DOI 10.1142/S0129065795000251
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
Yokoyama K., 2003, P IEEE INT C ROB AUT, P2985
NR 36
TC 52
Z9 52
U1 0
U2 3
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0893-6080
EI 1879-2782
J9 NEURAL NETWORKS
JI Neural Netw.
PD APR
PY 2006
VL 19
IS 3
BP 323
EP 337
DI 10.1016/j.neunet.2006.02.007
PG 15
WC Computer Science, Artificial Intelligence; Neurosciences
SC Computer Science; Neurosciences & Neurology
GA 053HQ
UT WOS:000238296600007
PM 16618536
DA 2018-01-22
ER

PT J
AU Harada, K
Kajita, S
Kaneko, K
Hirukawa, H
AF Harada, Kensuke
Kajita, Shuuji
Kaneko, Kenji
Hirukawa, Hirohisa
TI An analytical method for real-time gait planning for humanoid robots
SO INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS
LA English
DT Article
DE humanoid robot; biped gait; ZMP; real-time; analytical solution
AB This paper studies real-time gait planning for a humanoid robot. By
simultaneously planning the trajectories of the COG (Center of Gravity) and the ZMP
(Zero Moment Point), a fast and smooth change of gait can be realized. The change
of gait is also realized by connecting the newly calculated trajectories to the
current ones. While we propose two methods for connecting two trajectories, i.e.
the real-time method and the quasi-real-time one, we show that a stable change of
gait can be realized by using the quasi-real-time method even if the change of the
step position is significant. The effectiveness of the proposed methods is
confirmed by simulation and experiment.
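For context, analytical COG/ZMP planners of this kind are typically built on the cart-table (linear inverted pendulum) relation; the following standard form is given only as background, not as a restatement of the paper's own derivation:

\[
p(t) = x(t) - \frac{z_c}{g}\,\ddot{x}(t),
\]

where $x$ is the horizontal COG position, $z_c$ the (assumed constant) COG height, $g$ gravitational acceleration, and $p$ the ZMP. For a piecewise-polynomial ZMP reference this linear ODE admits closed-form COG solutions

\[
x(t) = A\cosh(\omega t) + B\sinh(\omega t) + x_p(t), \qquad \omega = \sqrt{g/z_c},
\]

with $x_p$ a particular solution tracking the ZMP reference; connecting a newly calculated trajectory to the current one then amounts to matching $x$ and $\dot{x}$ at the connection time.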
C1 Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst, Humanoid Res Grp,
Tsukuba, Ibaraki 305, Japan.
RP Harada, K (reprint author), Natl Inst Adv Ind Sci & Technol, Intelligent Syst
Res Inst, Humanoid Res Grp, 1-1-1 Umezono, Tsukuba, Ibaraki 305, Japan.
EM kensuke.harada@aist.go.jp
RI KANEKO, Kenji/M-5360-2016; Hirukawa, Hirohisa/B-4209-2017; Kajita,
Shuuji/M-5010-2016
OI KANEKO, Kenji/0000-0002-1888-8787; Hirukawa,
Hirohisa/0000-0001-5779-011X; Kajita, Shuuji/0000-0001-8188-2209
CR HARADA K, 2004, P IEEE INT C ROB AUT
HIRUKAWA H, 2001, P INT S ROB RES VICT
KAGAMI S, 2000, P 2000 IEEE RAS INT
Kajita S, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P1644
KAJITA S, 1992, IEEE T ROBOTIC AUTOM, V8, P431, DOI 10.1109/70.149940
KAJITA S, 2003, P IEEE INT C ROB AUT, P1620
KANEHIRO F, 2001, P IEEE RSJ INT C INT
Kaneko K., 2004, P IEEE INT C ROB AUT
KURAZUME R, 2003, P IEEE INT C ROB AUT, P925
LIM H, 2002, P IEEE INT C ROB AUT, P3111
NAGASAKA K, 1999, P IEEE INT C SYST MA
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
Takanishi A., 1985, Proceedings of '85 International Conference on Advanced
Robotics, P459
VUKOBRAT.M, 1969, IEEE T BIO-MED ENG, VBM16, P1, DOI 10.1109/TBME.1969.4502596
TAKENAKA T, 1998, Patent No. H10217161
NR 15
TC 56
Z9 59
U1 2
U2 8
PU WORLD SCIENTIFIC PUBL CO PTE LTD
PI SINGAPORE
PA 5 TOH TUCK LINK, SINGAPORE 596224, SINGAPORE
SN 0219-8436
EI 1793-6942
J9 INT J HUM ROBOT
JI Int. J. Humanoid Robot.
PD MAR
PY 2006
VL 3
IS 1
BP 1
EP 19
DI 10.1142/S0219843606000643
PG 19
WC Robotics
SC Robotics
GA 213LR
UT WOS:000249669400001
DA 2018-01-22
ER

PT J
AU Lim, HO
Hyon, SH
Setiawan, SA
Takanishi, A
AF Lim, HO
Hyon, SH
Setiawan, SA
Takanishi, A
TI Quasi-human biped walking
SO ROBOTICA
LA English
DT Article
DE interaction; human-follow walking; compensatory motion; walking pattern;
pattern synthesis
ID ROBOT; MANIPULATORS; LOCOMOTION; REDUNDANT; SYSTEMS
AB Our goal is to develop biped humanoid robots capable of working stably in a
human living and working space, with a focus on their physical construction and
motion control. At the first stage, we have developed a human-like biped robot,
WABIAN (WAseda BIped humANoid), which has thirty-five mechanical degrees of
freedom. Its height is 1.66 [m] and its weight is 107.4 [kg]. In this paper, a
moment compensation method is described for stability, which is based on the motion
of its head, legs and arms. Also, a follow walking method is proposed which is
based on a pattern switching technique. By a combination of both methods, the biped
robot is able to perform dynamic stamping and to walk forward and backward
continuously while someone is pushing or pulling its hand. Using
WABIAN, human-follow walking experiments are conducted, and the effectiveness of
the methods is verified.
C1 Kanagawa Univ, Dept Mech Engn, Kanagawa Ku, Yokohama, Kanagawa 2218686, Japan.
Waseda Univ, Humanoid Robot Inst, Tokyo, Japan.
Waseda Univ, Dept Engn Mech, Tokyo, Japan.
RP Lim, HO (reprint author), Kanagawa Univ, Dept Mech Engn, Kanagawa Ku, 3-27-1
Rokkakubashi, Yokohama, Kanagawa 2218686, Japan.
EM holim@ieee.org
CR CHANNON PH, 1992, ROBOTICA, V10, P165, DOI 10.1017/S026357470000758X
FARNSWORTH RL, 1977, IEEE T AUTOMATIC CON, V22, P452
FURUSHO J, 1986, J DYN SYST-T ASME, V108, P111, DOI 10.1115/1.3143752
HEMAMI H, 1979, IEEE T AUTOMAT CONTR, V24, P526, DOI 10.1109/TAC.1979.1102105
Hirai K, 1998, IEEE INT CONF ROBOT, P1321, DOI 10.1109/ROBOT.1998.677288
Kajita S, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P31, DOI 10.1109/ROBOT.2002.1013335
KAJITA S, 1991, P INT C ADV ROB, P741
Kaneko K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2431, DOI 10.1109/IRDS.2002.1041632
KAZEROONI H, 1990, IEEE T SYST MAN CYB, V20, P450, DOI 10.1109/21.52555
LIM HO, 2001, INT S ROB SEOUL KOR, P1551
LIN ZC, 1995, J ROBOTIC SYST, V12, P301, DOI 10.1002/rob.4620120503
Lorch O, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P2484, DOI 10.1109/IRDS.2002.1041642
MASON MT, 1981, IEEE T SYST MAN CYB, V11, P418, DOI 10.1109/TSMC.1981.4308708
MIURA H, 1984, INT J ROBOT RES, V3, P60, DOI 10.1177/027836498400300206
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
OGURA Y, 2002, P IEEE INT WORKSH RO, P253
RAIBERT M, 1986, LEGGED ROBOTS THAT B
Silva FM, 1998, 1998 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS - PROCEEDINGS, VOLS 1-3, P394, DOI 10.1109/IROS.1998.724651
Sugihara T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1404, DOI 10.1109/ROBOT.2002.1014740
Takanishi A., 1985, Proceedings of '85 International Conference on Advanced
Robotics, P459
WALKER ID, 1994, IEEE T ROBOTIC AUTOM, V10, P670, DOI 10.1109/70.326571
Yamaguchi J., 1993, P IEEE RSJ INT C INT, P561
NR 22
TC 5
Z9 6
U1 0
U2 2
PU CAMBRIDGE UNIV PRESS
PI NEW YORK
PA 32 AVENUE OF THE AMERICAS, NEW YORK, NY 10013-2473 USA
SN 0263-5747
EI 1469-8668
J9 ROBOTICA
JI Robotica
PD MAR-APR
PY 2006
VL 24
BP 257
EP 268
DI 10.1017/S0263574705002109
PN 2
PG 12
WC Robotics
SC Robotics
GA 037AI
UT WOS:000237118700013
DA 2018-01-22
ER

PT J
AU Nabeshima, C
Kuniyoshi, Y
Lungarella, M
AF Nabeshima, Cota
Kuniyoshi, Yasuo
Lungarella, Max
TI Adaptive body schema for robotic tool-use
SO ADVANCED ROBOTICS
LA English
DT Article
DE body image; tool-use; adaptation; visuo-tactile integration;
developmental robotics
ID NEURAL BASES; LIMBS
AB The development and expression of many higher-level cognitive functions, such as
imitation, spatial perception and tool-use, rely on a multi-modal representation
of the body known as the body schema. Although many studies support the hypothesis
that the body schema is adaptive and alterable throughout ontogenetic development,
the mechanisms underlying its plasticity have yet to be clarified. Here, we argue
that the temporal integration of multisensory information is a plausible candidate
mechanism to explain how manipulated objects (e.g., tools) can become incorporated
into the body schema. To demonstrate the validity of our idea, we introduce a model
of body schema adaptation instantiated in a small-sized, table-top, tool-using
humanoid. The robot's task is to learn to reach for and touch a visually salient
distant object, first with its 'bare' hand and then - using the acquired know-how -
with a reach-extending tool (a stick). Our experimental results show that in order
to successfully causally relate and integrate vision, touch and proprioception, and
to learn to use the tool, timing is of crucial relevance. On a more general note,
this study also suggests that synthetic modeling might not only be a valid avenue
towards getting a better grasp on results provided by neuropsychology and
neurophysiology, but also a powerful approach for building advanced tool-using
robots.
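A minimal sketch of the temporal-integration idea, assuming the tool is incorporated into the body schema once tactile events and visually observed tool-tip contacts co-occur within a short time window; the signals, window, and threshold are illustrative assumptions, not the model in the paper:

def temporally_correlated(touch_times, visual_contact_times, window=0.1, threshold=0.8):
    # Count tactile events that have a visual contact event within +/- window seconds.
    hits = sum(any(abs(t - v) < window for v in visual_contact_times)
               for t in touch_times)
    return hits / max(len(touch_times), 1) > threshold

touch  = [1.02, 2.51, 4.00]   # times of tactile events while probing with the stick (assumed)
vision = [1.00, 2.50, 3.98]   # times of observed tool-tip contacts (assumed)
print("extend body schema to tool:", temporally_correlated(touch, vision))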
C1 Univ Tokyo, Grad Sch Informat Sci & Technol, Dept Mechanoinformat, Lab
Intelligent Syst & Informat,Bunkyo Ku, Tokyo, Japan.
RP Nabeshima, C (reprint author), Univ Tokyo, Grad Sch Informat Sci & Technol, Dept
Mechanoinformat, Lab Intelligent Syst & Informat,Bunkyo Ku, Hongo 7-3-1, Tokyo,
Japan.
EM nabesima@isi.imi.i.u-tokyo.ac.jp
CR Beck B, 1980, ANIMAL TOOL BEHAV
Berlucchi G, 1997, TRENDS NEUROSCI, V20, P560, DOI 10.1016/S0166-2236(97)01136-3
Craig J. J., 1988, ADAPTIVE CONTROL MEC
GALLAGHER S, 1986, J MIND BEHAV, V7, P541
Gallagher S, 1996, PHILOS PSYCHOL, V9, P211, DOI 10.1080/09515089608573181
Gallagher S., 2005, BODY SHAPES MIND
Ganapathy S., 1984, P INT C ROB AUT, V1, P130
Haggard P., 2005, HIGHER ORDER MOTOR D, P261
Head H, 1911, BRAIN, V34, P102, DOI 10.1093/brain/34.2-3.102
Holmes Nicholas P, 2004, Cogn Process, V5, P94, DOI 10.1007/s10339-004-0013-3
Iriki A, 1996, NEUROREPORT, V7, P2325
Itti L, 2001, NAT REV NEUROSCI, V2, P194, DOI 10.1038/35058500
Johnson-Frey SH, 2004, TRENDS COGN SCI, V8, P71, DOI 10.1016/j.tics.2003.12.002
Kawato M, 1999, CURR OPIN NEUROBIOL, V9, P718, DOI 10.1016/S0959-4388(99)00028-8
Lungarella M, 2003, CONNECT SCI, V15, P151, DOI 10.1080/09540090310001655110
Maravita A, 2004, TRENDS COGN SCI, V8, P79, DOI 10.1016/j.tics.2003.12.008
Maravita A, 2003, CURR BIOL, V13, pR531, DOI 10.1016/S0960-9822(03)00449-4
Merleau-Ponty M., 1945, PHENOMENOLOGIE PERCE
Metta G, 2003, ADAPT BEHAV, V11, P109, DOI 10.1177/10597123030112004
Morita M, 1996, NEURAL NETWORKS, V9, P1477, DOI 10.1016/S0893-6080(96)00021-4
Nakagawa Y, 1999, SIXTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE
(AAAI-99)/ELEVENTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE (IAAI-99),
P768
OJA E, 1982, J MATH BIOL, V15, P267, DOI 10.1007/BF00275687
Ramachandran V., 1998, PHANTOMS BRAIN PROBI
Ramachandran VS, 1996, P ROY SOC B-BIOL SCI, V263, P377, DOI
10.1098/rspb.1996.0058
RITTER H, 1992, NEUR COMP SELF MAPS
Rizzolatti G, 1997, SCIENCE, V277, P190, DOI 10.1126/science.277.5323.190
Schilder P, 1935, IMAGE APPEARANCE HUM
Shimojo S, 2001, CURR OPIN NEUROBIOL, V11, P505, DOI 10.1016/S0959-
4388(00)00241-5
SIMMEL MARIANNE L., 1958, PROC AMER PHIL SOC, V102, P492
SIMMEL ML, 1961, AM J PSYCHOL, V74, P467, DOI 10.2307/1419756
Spong MW, 1989, ROBOT DYNAMICS CONTR
STOYTCHEV A, 2003, GITCC0344
Stoytchev A, 2005, P IEEE INT C ROB AUT, P3071
Sutton RS, 1998, REINFORCEMENT LEARNI
Wolpert DM, 2001, TRENDS COGN SCI, V5, P487, DOI 10.1016/S1364-6613(00)01773-3
Yoshikawa Y., 2003, P 2 INT S AD MOT AN
NR 36
TC 28
Z9 28
U1 0
U2 6
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2006
VL 20
IS 10
BP 1105
EP 1126
DI 10.1163/156855306778522550
PG 22
WC Robotics
SC Robotics
GA 103ZF
UT WOS:000241925000003
DA 2018-01-22
ER

PT J
AU Minato, T
Shimada, M
Itakura, S
Lee, K
Ishiguro, H
AF Minato, Takashi
Shimada, Michihiro
Itakura, Shoji
Lee, Kang
Ishiguro, Hiroshi
TI Evaluating the human likeness of an android by comparing gaze behaviors
elicited by the android and a person
SO ADVANCED ROBOTICS
LA English
DT Article
DE android; human-robot communication; human likeness; appearance and
behavior; gaze behavior
ID EYE-CONTACT; TURING TEST; LOOKING
AB Our research goal is to discover the principles underlying natural communication
among individuals and to establish a methodology for the development of expressive
humanoid robots. For this purpose we have developed androids that closely resemble
human beings. The androids enable us to investigate a number of phenomena related
to human interaction that could not otherwise be investigated with mechanical-
looking robots. This is because more human-like devices are in a better position to
elicit the kinds of responses that people direct toward each other. Moreover, we
cannot ignore the role of appearance in giving us a subjective impression of human
presence or intelligence. However, this impression is influenced by behavior and
the complex relationship between appearance and behavior. This paper proposes a
hypothesis about how appearance and behavior are related, and maps out a plan for
android research to investigate this hypothesis. We then examine a study that
evaluates the human likeness of androids according to the gaze behavior they
elicit. Studies such as these, which integrate the development of androids with the
investigation of human behavior, constitute a new research area that fuses
engineering and science.
C1 Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst, Suita, Osaka 5650871, Japan.
Kyoto Univ, Grad Sch Letters, Dept Psychol, Sakyo Ku, Kyoto 6068501, Japan.
Univ Calif San Diego, Dept Psychol, La Jolla, CA 92093 USA.
Adv Telecommun Res Int Int, Intelligent Robot & Commun Labs, Keihanna Sci City,
Kyoto 6190288, Japan.
RP Minato, T (reprint author), Osaka Univ, Grad Sch Engn, Dept Adapt Machine Syst,
2-1 Yamadaoka, Suita, Osaka 5650871, Japan.
EM minato@ams.eng.osaka-u.ac.jp
FU NICHD NIH HHS [R01 HD046526-01]
CR ARGYLE M, 1965, SOCIOMETRY, V28, P289, DOI 10.2307/2786027
Argyle M., 1976, GAZE MUTUAL GAZE
Asada M, 2001, ROBOT AUTON SYST, V37, P185, DOI 10.1016/S0921-8890(01)00157-9
DAIBO I, 1992, J JAPANESE EXPT SOCI, V32, P1
Fong T, 2003, ROBOT AUTON SYST, V42, P143, DOI 10.1016/S0921-8890(02)00372-X
FRENCH RM, 1990, MIND, V99, P53
French RM, 2000, TRENDS COGN SCI, V4, P115, DOI 10.1016/S1364-6613(00)01453-4
GALE A, 1978, BEHAV PROCESS, V3, P271, DOI 10.1016/0376-6357(78)90019-0
Goetz J, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P55
HEMSLEY GD, 1978, J APPL SOC PSYCHOL, V8, P136, DOI 10.1111/j.1559-
1816.1978.tb00772.x
Ishiguro H, 2001, IND ROBOT, V28, P498, DOI 10.1108/01439910110410051
KAGAWA H, 1997, INSCRUTABLE JAPANESE
Kanda T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1848, DOI 10.1109/ROBOT.2002.1014810
KENDON A, 1967, ACTA PSYCHOL, V26, P22, DOI 10.1016/0001-6918(67)90005-4
KENDON A, 1978, BRIT J SOC CLIN PSYC, V17, P23, DOI 10.1111/j.2044-
8260.1978.tb00891.x
Leathers Dale G., 1997, SUCCESSFUL NONVERBAL
MATSUI D, 2005, P IEEE RSJ INT C INT, P1089, DOI DOI 10.1109/IR0S.2005.1545125
MCCARTHY A, 2003, BIENN M SOC RES CHIL
MCCARTHY A, 2001, ANN M AM PSYCH SOC T
Minato T, 2004, P 17 INT C IND ENG A, P424
Mori M., 1970, ENERGY, V7, P33, DOI DOI 10.1109/MRA.2012.2192811
NAKAOKA S, 2003, P IEEE INT C ROB AUT, P3905
Nishiyama K., 2000, DOING BUSINESS JAPAN
Oztop E., 2005, INT J HUM ROBOT, V2, P537, DOI DOI 10.1142/S0219843605000582
Perrett DI, 1998, COGNITION, V67, P111, DOI 10.1016/S0010-0277(98)00015-8
Previc FH, 1997, PERCEPT MOTOR SKILL, V84, P835, DOI 10.2466/pms.1997.84.3.835
NR 26
TC 22
Z9 22
U1 0
U2 6
PU TAYLOR & FRANCIS LTD
PI ABINGDON
PA 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
SN 0169-1864
EI 1568-5535
J9 ADV ROBOTICS
JI Adv. Robot.
PY 2006
VL 20
IS 10
BP 1147
EP 1163
DI 10.1163/156855306778522505
PG 17
WC Robotics
SC Robotics
GA 103ZF
UT WOS:000241925000005
PM 18985174
OA green_accepted
DA 2018-01-22
ER

PT S
AU Satake, S
Kawashima, H
Imai, M
Hirose, K
Anzai, Y
AF Satake, S
Kawashima, H
Imai, M
Hirose, K
Anzai, Y
BE Shen, HT
Li, JB
Li, ML
Ni, J
Wang, W
TI IRIOS: Interactive news announcer robot system
SO ADVANCED WEB AND NETWORK TECHNOLOGIES, AND APPLICATIONS, PROCEEDINGS
SE Lecture Notes in Computer Science
LA English
DT Article; Proceedings Paper
CT 8th Asia-Pacific Web Conference and Workshops (APWeb 2006)
CY JAN 16-18, 2006
CL Harbin, PEOPLES R CHINA
SP Natl Nat Sci Fdn China, Australian Res Council Res Network EII, Harbin Inst
Technol, Heilongjiang Univ, Hohai Univ, Yellow River Conservat Commiss
AB This paper presents an interactive news announcer robot system IRIOS that
performs on the humanoid robot, Robovie. IRIOS satisfies the interest trend
requirement and the topic change requirement for a capricious user by means of a
Time-Conscious TfIdf (TC-TfIdf) vector. The results of experiments showed that the
TC-TfIdf strategy satisfied both requirements while other strategies did not.
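The record does not define the TC-TfIdf weighting itself, so the following is only a generic sketch of combining TF-IDF with an assumed exponential recency decay:

import math
from collections import Counter

def tfidf(doc_tokens, all_docs):
    # Standard term frequency times smoothed inverse document frequency.
    tf = Counter(doc_tokens)
    n = len(all_docs)
    return {t: (c / len(doc_tokens)) * math.log(n / (1 + sum(t in d for d in all_docs)))
            for t, c in tf.items()}

def tc_tfidf(doc_tokens, all_docs, age_hours, half_life_hours=12.0):
    # Newer articles weigh more; the half-life value is an arbitrary assumption.
    decay = 0.5 ** (age_hours / half_life_hours)
    return {t: w * decay for t, w in tfidf(doc_tokens, all_docs).items()}

docs = [["robot", "news"], ["soccer", "news"], ["robot", "soccer", "final"]]
print(tc_tfidf(docs[2], docs, age_hours=3.0))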
C1 Keio Univ, Grad Sch, Yokohama, Kanagawa, Japan.
RP Satake, S (reprint author), Keio Univ, Grad Sch, 3-14-1 Hiyoshi, Yokohama,
Kanagawa, Japan.
CR ANZAI Y, 1994, ADV ROBOTICS, V8, P357
BAUER T, 2001, 10 INT C INF KNOWL M, P568
BOTTRAUD JC, 2003, P WORKSH ART INT INF
Chen L., 1998, Proceedings of the Second International Conference on Autonomous
Agents, P132, DOI 10.1145/280765.280789
CONNEL M, 2004, TDT 2004 EV
Imai M, 2003, RO-MAN 2003: 12TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN
INTERACTIVE COMMUNICATION, PROCEEDINGS, P199
Imai M, 2000, IEEE RO-MAN 2000: 9TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND
HUMAN INTERACTIVE COMMUNICATION, PROCEEDINGS, P1, DOI 10.1109/ROMAN.2000.892460
Kanda T, 2002, 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION,
VOLS I-IV, PROCEEDINGS, P1848, DOI 10.1109/ROBOT.2002.1014810
LO YY, 2001, P 2001 TOP DET TRACK
Nallapati R., 2004, P 13 ACM INT C INF K, P446, DOI DOI 10.1145/1031171.1031258
ZABALA S, 2001, GI JAHRESTAUNG, P353
NR 11
TC 0
Z9 0
U1 0
U2 0
PU SPRINGER-VERLAG BERLIN
PI BERLIN
PA HEIDELBERGER PLATZ 3, D-14197 BERLIN, GERMANY
SN 0302-9743
BN 3-540-31158-0
J9 LECT NOTES COMPUT SC
PY 2006
VL 3842
BP 733
EP 740
PG 8
WC Computer Science, Artificial Intelligence; Computer Science, Theory &
Methods; Remote Sensing; Telecommunications
SC Computer Science; Remote Sensing; Telecommunications
GA BDV63
UT WOS:000235659100099
DA 2018-01-22
ER

PT S
AU Noda, K
Ito, M
Hoshino, Y
Tani, J
AF Noda, Kuniaki
Ito, Masato
Hoshino, Yukiko
Tani, Jun
BE Nolfi, S
Baldassarre, G
Calabretta, R
Hallam, JCT
Marocco, D
Meyer, JA
Miglino, O
Parisi, D
TI Dynamic generation and switching of object handling behaviors by a
humanoid robot using a recurrent neural network model
SO FROM ANIMALS TO ANIMATS 9, PROCEEDINGS
SE LECTURE NOTES IN COMPUTER SCIENCE
LA English
DT Article; Proceedings Paper
CT 9th International Conference on Simulation of Adaptive Behavior
CY SEP 25-29, 2006
CL Italian Natl Res Council, Rome, ITALY
SP Appl AI Syst Inc, European Network Adv Artificial Cognit Syst Cognit, Inst
Cognit Sci & Technol, Italian Soc Cognit Sci, Int Soc Adapt Behav, Kteam, Lab
Informat Paris 6, Univ Edinburgh, Univ Pierre Marie Curie, Univ SE Denmark
HO Italian Natl Res Council
ID SYSTEMS
AB The present study describes experiments on ball handling behavior learning
that is realized by a small humanoid robot with a dynamic neural network model, the
recurrent neural network with parametric bias (RNNPB). The present experiments show
that after the robot learned different types of behaviors through direct human
teaching, the robot was able to switch between two types of behaviors based on the
ball motion dynamics. We analyzed the parametric bias (PB) space to show that each
of the multiple dynamic structures acquired in the RNNPB corresponds with taught
multiple behavior patterns and that the behaviors can be switched by adjusting the
PB values.
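A minimal sketch of the switching idea, assuming the PB value is recognized by searching for the value whose one-step predictions best match the observed ball motion; the predictor below is a toy placeholder, not the trained RNNPB:

import numpy as np

def prediction_error(pb, observed):
    # Placeholder predictor: a PB-dependent scaling of the current observation
    # stands in for the network's one-step prediction.
    pred = observed[:-1] * (1.0 + 0.1 * pb)
    return float(np.mean((pred - observed[1:]) ** 2))

observed = np.linspace(0.0, 1.0, 20)      # stand-in for an observed ball trajectory
candidates = np.linspace(-1.0, 1.0, 21)   # coarse search over the PB space
best_pb = min(candidates, key=lambda pb: prediction_error(pb, observed))
print("recognized PB value:", best_pb)    # this value would select the generated behavior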
C1 Sony Intelligence Dynam Labs Inc, Shinagawa Ku, Tokyo 1410022, Japan.
RIKEN, Brain Sci Inst, Wako, Saitama 3510198, Japan.
RP Noda, K (reprint author), Sony Intelligence Dynam Labs Inc, Shinagawa Ku,
Takanawa Muse Bldg 4F,3-14-13 Higashigotanda, Tokyo 1410022, Japan.
EM noda@idl.sony.co.jp; masato@idl.sony.co.jp; yukiko@idl.sony.co.jp;
tani@brain.riken.go.jp
CR BEER RD, 1995, ARTIF INTELL, V72, P173, DOI 10.1016/0004-3702(94)00005-L
Bianco R, 2004, ADAPT BEHAV, V12, P37, DOI 10.1177/105971230401200102
Ito M, 2004, ADAPT BEHAV, V12, P93, DOI 10.1177/105971230401200202
ITO M, 2004, P 11 INT C NEUR INF, P592
Jordan M. I., 1986, P 8 ANN C COGN SCI S, P531
RUMELHART DE, 1986, NATURE, V323, P533, DOI 10.1038/323533a0
Schaal S, 1996, J MOTOR BEHAV, V28, P165, DOI 10.1080/00222895.1996.9941743
Sugita Y, 2005, ADAPT BEHAV, V13, P33, DOI 10.1177/105971230501300102
Tani J, 1999, NEURAL NETWORKS, V12, P1131, DOI 10.1016/S0893-6080(99)00060-X
Tani J, 2003, IEEE T SYST MAN CY A, V33, P481, DOI 10.1109/TSMCA.2003.809171
Tani J, 2003, NEURAL NETWORKS, V16, P11, DOI 10.1016/S0893-6080(02)00214-9
VANGELDER T, 1998, BEHAV BRAIN SCI
Wolpert DM, 1998, NEURAL NETWORKS, V11, P1317, DOI 10.1016/S0893-6080(98)00066-5
NR 13
TC 1
Z9 1
U1 0
U2 2
PU SPRINGER-VERLAG BERLIN
PI BERLIN
PA HEIDELBERGER PLATZ 3, D-14197 BERLIN, GERMANY
SN 0302-9743
BN 3-540-38608-4
J9 LECT NOTES COMPUT SC
PY 2006
VL 4095
BP 185
EP 196
PG 12
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA BFI94
UT WOS:000242125300016
DA 2018-01-22
ER

PT S
AU Simo, A
Nishida, Y
Nagashima, K
AF Simo, Altion
Nishida, Yoshifumi
Nagashima, Koichi
BE Zha, H
Pan, ZG
Thwaites, H
Addison, AC
Forte, M
TI A humanoid robot to prevent children accidents
SO INTERACTIVE TECHNOLOGIES AND SOCIOTECHNICAL SYSTEMS
SE Lecture Notes in Computer Science
LA English
DT Article; Proceedings Paper
CT 12th International Conference on Virtual Systems and Multimedia (VSMM)
CY OCT 18-20, 2006
CL Xian, PEOPLES R CHINA
SP Virtual Syst & Multimedia Soc, Xian Jiaotong Univ, China Soc Image & Graph, VR
Comm, Nat Soc Fdn China, Int Journal Virtual Real, Int Journal Automat & Comp,
Zhejiang Univ, Peking Univ, Tsinghua Univ
DE children accidents; robotics; ubiquitous living environments
AB We describe in this paper the implementation and experimentation of a humanoid
robot system aimed at preventing child accidents in everyday indoor activities. The
main focus of this research is the "on demand" interaction between the robot and the
child in the context in which the robot is used (preventing child accidents). Different
control strategies and attractive interfaces were considered in designing this system
so that the robot can be used effectively when attempting to prevent child accidents.
This is achieved both by actively attracting the child's attention and through passive
interaction. Experimental results are given, conclusions are drawn, and future
implications are considered for refining not only the independent robot control but
also the effectiveness of the system as a whole.
C1 AIST Adv Ind Sci & Technol, DHRC, Koto Ku, Tokyo 1350064, Japan.
R Lab, Suginami Ward, Tokyo 1660004, Japan.
RP Simo, A (reprint author), AIST Adv Ind Sci & Technol, DHRC, Koto Ku, 2-41-6
Aomi, Tokyo 1350064, Japan.
EM altion.simo@aist.go.jp
CR CHARLES F, 2005, P 1 INT C TECHN INT, P254
DAUTENHAHN K, 2005, P IEEE INT C INT ROB, P788
FUJITA Y, 2005, P ROBOCASA WORKSH 20, P36
Kahn PH., 2004, P C HUM FACT COMP SY, P1449
Kanda T, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND
SYSTEMS, VOLS 1-3, PROCEEDINGS, P1265, DOI 10.1109/IRDS.2002.1043918
Kozima H., 2005, P AAAI SPRING S DEV, P111
NAKAUCHI Y, 2005, P IEEE RSJ INT C INT, P76
Nishida Y, 2003, IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL
CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, P785
SIMO A, 2005, P 4 INT C COMP INT C, P61
NR 9
TC 0
Z9 0
U1 0
U2 0
PU SPRINGER-VERLAG BERLIN
PI BERLIN
PA HEIDELBERGER PLATZ 3, D-14197 BERLIN, GERMANY
SN 0302-9743
BN 3-540-46304-6
J9 LECT NOTES COMPUT SC
PY 2006
VL 4270
BP 476
EP 485
PG 10
WC Computer Science, Artificial Intelligence; Computer Science, Information
Systems; Computer Science, Software Engineering; Computer Science,
Theory & Methods
SC Computer Science
GA BFG65
UT WOS:000241762900052
DA 2018-01-22
ER

PT J
AU Ohta, H
AF Ohta, Hiroyuki
TI Prologue to the special issue on "Tribology in advanced science"
SO JOURNAL OF JAPANESE SOCIETY OF TRIBOLOGISTS
LA Japanese
DT Editorial Material
DE tribology; advanced science; H-IIA rocket; satellite; SUBARU; structural
control technology; Shinkansen; humanoid robot
C1 Nagaoka Univ Technol, Dept Mech Engn, Niigata 9402188, Japan.
RP Ohta, H (reprint author), Nagaoka Univ Technol, Dept Mech Engn, 1603-1
Kamitoimioka, Niigata 9402188, Japan.
EM ohta@mech.nagaokaut.ac.jp
NR 0
TC 0
Z9 0
U1 0
U2 0
PU JAPAN SOC TRIBOLOGISTS
PI TOKYO
PA KIKAI SHINKO KAIKAN NO 407-2 5-8 SHIBA-KOEN 3-CHOME MINATO-KU, TOKYO,
105, JAPAN
SN 0915-1168
J9 J JPN SOC TRIBOLOGIS
JI J. Jpn. Soc. Tribol.
PY 2006
VL 51
IS 6
BP 418
EP 418
PG 1
WC Engineering, Mechanical
SC Engineering
GA 068IX
UT WOS:000239368200002
DA 2018-01-22
ER

PT J
AU Kajita, S
AF Kajita, Shuuji
TI Tribology in the walking humanoid robot
SO JOURNAL OF JAPANESE SOCIETY OF TRIBOLOGISTS
LA Japanese
DT Article
DE biped robot; humanoid robot; biped locomotion; ZMP; low friction
ID SLIP-RESISTANCE
C1 Natl Inst Adv Ind Sci & Technol, AIST Tsukuba Cent 2, Tsukuba, Ibaraki, Japan.
RP Kajita, S (reprint author), Natl Inst Adv Ind Sci & Technol, AIST Tsukuba Cent
2, 1-1 Umezono 1 Chome, Tsukuba, Ibaraki, Japan.
EM s.kajita@aist.go.jp
RI Kajita, Shuuji/M-5010-2016
OI Kajita, Shuuji/0000-0001-8188-2209
CR Brady RA, 2000, J BIOMECH, V33, P803, DOI 10.1016/S0021-9290(00)00037-3
Hanson JP, 1999, ERGONOMICS, V42, P1619, DOI 10.1080/001401399184712
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
REDFERN MS, 1994, ERGONOMICS, V37, P511, DOI 10.1080/00140139408963667
STRANDBERG L, 1983, ERGONOMICS, V26, P11, DOI 10.1080/00140138308963309
VUKOBRATOVIC M, 1972, Mathematical Biosciences, V15, P1, DOI 10.1016/0025-
5564(72)90061-2
NR 6
TC 0
Z9 0
U1 0
U2 2
PU JAPAN SOC TRIBOLOGISTS
PI TOKYO
PA KIKAI SHINKO KAIKAN NO 407-2 5-8 SHIBA-KOEN 3-CHOME MINATO-KU, TOKYO,
105, JAPAN
SN 0915-1168
J9 J JPN SOC TRIBOLOGIS
JI J. Jpn. Soc. Tribol.
PY 2006
VL 51
IS 6
BP 444
EP 449
PG 6
WC Engineering, Mechanical
SC Engineering
GA 068IX
UT WOS:000239368200007
DA 2018-01-22
ER

PT S
AU Watabe, H
Kawaoka, T
AF Watabe, Hirokazu
Kawaoka, Tsukasa
BE Gabrys, B
Howlett, RJ
Jain, LC
TI Autonomous action generation of humanoid robot from natural language
SO KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 1,
PROCEEDINGS
SE LECTURE NOTES IN ARTIFICIAL INTELLIGENCE
LA English
DT Article; Proceedings Paper
CT 10th International Conference on Knowledge-Based and Intelligent
Information and Engineering Systems
CY OCT 09-11, 2006
CL Bournemouth, ENGLAND
AB Communication between humans and robots is necessary for an intelligent robot to
be active in daily life, and conversation is the basis of such communication. Most
instructions given to a robot are related to actions. This paper reports a method for
generating humanoid robot actions through conversation. For this purpose, comprehension
of action instructions written in natural language is necessary. A system that
understands natural language in order to generate actions is proposed. The system
consists of a semantic understanding method that arranges the input information, a
knowledge base of action-related vocabulary, a knowledge base of the parameters the
robot needs in order to act, and an association mechanism that handles unknown words.
The system is capable of understanding many input sentences through this association
mechanism, which uses a Concept-Base and a degree of association.
C1 Doshisha Univ, Dept Knowledge Engn & Comp Sci, Kyoto 6100394, Japan.
RP Watabe, H (reprint author), Doshisha Univ, Dept Knowledge Engn & Comp Sci, Kyoto
6100394, Japan.
CR Goldberg D. E, 1989, GENETIC ALGORITHMS S
Ishiguro H, 2003, SPRINGER TRAC ADV RO, V6, P179
Kojima K, 2001, P KES2001 2, P1590
SHINOHARA Y, 2003, NL153 IPSJ SIG, P89
Watabe H, 2001, P SMC2001, P877
NR 5
TC 2
Z9 2
U1 0
U2 0
PU SPRINGER-VERLAG BERLIN
PI BERLIN
PA HEIDELBERGER PLATZ 3, D-14197 BERLIN, GERMANY
SN 0302-9743
BN 3-540-46535-9
J9 LECT NOTES ARTIF INT
PY 2006
VL 4251
BP 882
EP 889
PG 8
WC Computer Science, Artificial Intelligence; Computer Science,
Interdisciplinary Applications
SC Computer Science
GA BFI87
UT WOS:000242122000106
DA 2018-01-22
ER

PT J
AU Carbone, G
Lim, HO
Takanishi, A
Ceccarelli, M
AF Carbone, G
Lim, HO
Takanishi, A
Ceccarelli, M
TI Stiffness analysis of biped humanoid robot WABIAN-RIV
SO MECHANISM AND MACHINE THEORY
LA English
DT Article
ID PARALLEL MANIPULATORS
AB In this paper, a humanoid robot named WABIAN-RIV (WAseda BIpedal humANoid -
Refined IV) is analyzed in terms of its stiffness characteristics. The paper proposes
basic models and a formulation for deducing the stiffness matrix as a function of the
most important stiffness parameters of the WABIAN architecture. The proposed
formulation is useful for numerical estimation of stiffness performance, and an
evaluation of stiffness performance is carried out by numerically implementing it. An
experimental validation of the numerical results is also carried out on the WABIAN-RIV
humanoid robot. (c) 2005 Elsevier Ltd. All rights reserved.
C1 Univ Cassino, DiMSAT, Lab Robot & Mechatron, I-03043 Cassino, Fr, Italy.
Kanagawa Univ, Dept Mech Engn, Kanagawa Ku, Yokohama, Kanagawa 2218686, Japan.
Waseda Univ, Humanoid Robot Inst, Shinjuku Ku, Tokyo 1698555, Japan.
RP Ceccarelli, M (reprint author), Univ Cassino, DiMSAT, Lab Robot & Mechatron, Via
Biasio 43, I-03043 Cassino, Fr, Italy.
EM carbone@unicas.it; holim@ieee.org; takanisi@waseda.jp;
ceccarelli@unicas.it
RI Carbone, Giuseppe/J-5846-2012; ceccarelli, marco/J-5090-2017
OI Carbone, Giuseppe/0000-0003-0831-8358; ceccarelli,
marco/0000-0001-9388-4391
CR Artrit P, 2000, PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON CLIMBING
AND WALKING ROBOTS, P221
Carbone G, 2003, IEEE INT CONF ROBOT, P3654
Carbone G., 2003, P 12 INT WORKSH ROB
CARBONE G, 2003, THESIS U CASSINO CAS
CARBONE G, 2003, IEEE ASME INT C ADV, P962
CARBONE G, ELECT J COMPUTATIONA, V1
Ceccarelli M, 2002, MECH MACH THEORY, V37, P427, DOI 10.1016/S0094-
114X(02)00006-X
Chakarov D., 1998, J THEORETICAL APPL M, V28
DUFFY J, 1996, STATICS KINEMATICS A
FUKUDA T, 2001, IEEE ROBOTICS AUTOMA, V7, P66
GOSSELIN C, 1990, IEEE T ROBOTIC AUTOM, V6, P377, DOI 10.1109/70.56657
Gosselin C. M., 2002, International Journal of Robotics & Automation, V17, P17
HUANG T, 2001, P IEEE INT C ROB AUT, V4, P3280
KIM HY, 1995, MECH MACH THEORY, V30, P1269, DOI 10.1016/0094-114X(95)00017-S
Lim H., 2000, P 13 CISM IFTOMM S T, P295
LIM HO, 2001, P IEEE RSJ INT C INT, P494
Liu XJ, 2000, MECH MACH THEORY, V35, P1257, DOI 10.1016/S0094-114X(99)00072-5
Popov EP, 1968, INTRO MECH SOLIDS
Rivin E. I., 1988, MECH DESIGN ROBOTS
Rosheim M. E., 1994, ROBOT EVOLUTION DEV
SVININ M, 2001, P 2 WORKSH COMP KIN, P155
SVININ M, 2002, ELECT J COMPUTATIONA, V1
Tsai L-W., 1999, ROBOT ANAL MECH SERI
NR 23
TC 18
Z9 19
U1 0
U2 6
PU PERGAMON-ELSEVIER SCIENCE LTD
PI OXFORD
PA THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND
SN 0094-114X
J9 MECH MACH THEORY
JI Mech. Mach. Theory
PD JAN
PY 2006
VL 41
IS 1
BP 17
EP 40
DI 10.1016/j.mechmachtheory.2005.05.001
PG 24
WC Engineering, Mechanical
SC Engineering
GA 996AN
UT WOS:000234146900003
DA 2018-01-22
ER
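For orientation alongside the record above, the Python sketch below shows the standard
textbook route from joint-space to Cartesian stiffness on a planar two-link chain
(compliance C_x = J K_q^{-1} J^T, stiffness K_x = C_x^{-1}). It is a generic stand-in
with invented link lengths and joint stiffness values, not the WABIAN-RIV-specific
formulation developed in the paper.

# Minimal sketch of a lumped-parameter Cartesian stiffness estimate for a
# serial chain with diagonal joint stiffness K_q (illustrative values only).
import numpy as np

def planar_2link_jacobian(q1, q2, l1=0.3, l2=0.3):
    """Position Jacobian of a planar two-link chain (link lengths in metres)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [l1 * c1 + l2 * c12, l2 * c12]])

K_q = np.diag([400.0, 250.0])              # joint stiffnesses in N*m/rad (assumed)
J = planar_2link_jacobian(np.deg2rad(30), np.deg2rad(60))

C_x = J @ np.linalg.inv(K_q) @ J.T         # Cartesian compliance [m/N]
K_x = np.linalg.inv(C_x)                   # Cartesian stiffness  [N/m]
print(np.round(K_x, 1))

The same composition extends to a full limb by using the 6xN spatial Jacobian and a
larger K_q; per the abstract, the paper's formulation instead targets the stiffness
parameters that matter most for the WABIAN architecture.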

PT J
AU Ishizuka, M
Prendinger, H
AF Ishizuka, M
Prendinger, H
TI Describing and generating multimodal contents featuring affective
lifelike agents with MPML
SO NEW GENERATION COMPUTING
LA English
DT Article
DE lifelike agent; multimodal contents; content description language;
emotion; affective computing
ID AFFECTIVE COMMUNICATION; HUMAN PHYSIOLOGY; CHARACTERS; LANGUAGE
AB In this paper, we provide an overview of our research on multimodal media and
contents using embodied lifelike agents. In particular we describe our research
centered on MPML (Multimodal Presentation Markup Language). MPML allows people to
write and produce multimodal contents easily, and serves as a core for integrating
various components and functionalities important for multimodal media. To
demonstrate the benefits and usability of MPML in a variety of environments
including animated Web, 3D VRML space, mobile phones, and the physical world with a
humanoid robot, several versions of MPML have been developed while keeping its
basic format. Since the emotional behavior of an agent is an important factor in
making agents lifelike and in having them accepted by people as an attractive and
friendly style of human-computer interaction, emotion-related functions have been
emphasized in MPML. To alleviate the workload of authoring the contents, it is also
necessary to endow the agents with a certain level of autonomy. We show some of our
approaches towards this end.
C1 Univ Tokyo, Sch Informat Sci & Technol, Tokyo, Japan.
RP Ishizuka, M (reprint author), Univ Tokyo, Sch Informat Sci & Technol, Tokyo,
Japan.
CR Allen JF, 2001, AI MAG, V22, P27
ARAFA Y, P AAMAS 02 WORKSH EC
Badler NI, 2000, EMBODIED CONVERSATIONAL AGENTS, P256
Ball G, 2000, EMBODIED CONVERSATIONAL AGENTS, P189
BARAKONYI I, 2001, P MULT TECHN APPL C, P21
BECKER C, 2005, P 1 INT C AFF COMP I, P466
BOLLEGALA D, 2005, P 2 INT JOINT C NAT, P624
Cassell J., 2000, EMBODIED CONVERSATIO
CASSELL J, 2001, P SIGGRAPH 2001, P477
DECAROLIS B, 2002, P AAMAS 02 WORKSH EC
Descamps S, 2001, 21ST INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS
WORKSHOPS, PROCEEDINGS, P332, DOI 10.1109/CDCS.2001.918726
DESCAMPS S, 2001, P OZCHI 01 COMPUTER, P25
DESCAMPS S, 2001, P INT C INT MULT DIS, P9
Ekman P, 2002, FACIAL ACTION CODING
FELLBAUM C, 1982, WORDNET ELECT LEXICA
HAYESROTH B, 2004, LIFE LIKE CHARACTERS, P447
HUANG Z, 2002, P PRICAI 02 WORKSH L
Jatowt A, 2004, LECT NOTES ARTIF INT, V3202, P245
Kipp M, 2001, P WORKSH MULT COMM C, P9
Krenn B., 2004, P AISB 2004 S LANG S, P107
KUSHIDA K, 2005, P AAMAS 05 WORKSH 13, V13, P23
LANG PJ, 1995, AM PSYCHOL, V50, P372, DOI 10.1037//0003-066X.50.5.372
Lester J. C., 1997, P SIGCHI C HUM FACT, P359, DOI DOI 10.1145/258549.258797
Liu Hugo, 2003, P 8 INT C INT US INT, P125
MA C, 2005, P S CONV INF SUPP SO, P136
Ma CL, 2005, LECT NOTES COMPUT SC, V3784, P622
MARRIOTT A, 2002, P AAMAS 02 WORKSH EC
Masum S. M. A., 2005, P IEEE WIC ACM INT C, P246
MCCRAE RR, 1992, J PERS, V60, P175, DOI 10.1111/j.1467-6494.1992.tb00970.x
MEHRABIAN A, 1971, NONVERBAL COMMUNICAT
*MICR, 1998, DEV MICR AG
*MIT MED LAB, 2005, OP MIND COMM SENS
Mori J., 2003, P AAMAS 03 WORKSH W1, P58
MORI K, 2003, P INT C INT US INT I, P270
MURRAY IR, 1995, SPEECH COMMUN, V16, P369, DOI 10.1016/0167-6393(95)00005-9
NAKANO YI, 2004, P HUM LANG TECH C N, P153
NOZAWA Y, 2004, P 13 IEEE INT WORKSH
OKAZAKI N, 2002, WORKSH P CDROM VIRTU, P4
OKAZAKI N, 2004, WORKING NOTES 4 NTCI, V4, P436
OKAZAKI N, 2004, SYSTEMS COMPUTERS JA, P750
Ortony A., 1988, COGNITIVE STRUCTURE
Picard R. W., 2000, AFFECTIVE COMPUTING
PIWEK P, 2002, P AAMAS 02 WORKSH EC
Prendinger H, 2005, IEICE T INF SYST, VE88D, P2453, DOI 10.1093/ietisy/e88-
d.11.2453
Prendinger H, 2005, INT J HUM-COMPUT ST, V62, P231, DOI
10.1016/j.ijhcs.2004.11.009
Prendinger H, 2004, J VISUAL LANG COMPUT, V15, P183, DOI
10.1016/j.jvlc.2004.01.001
Prendinger H, 2003, LECT NOTES ARTIF INT, V2792, P283
Prendinger H., 2002, Proceedings of the First International Joint Conference on
Autonomous Agents and Multiagent Systems, P350
Prendinger H, 2002, APPL ARTIF INTELL, V16, P519, DOI 10.1080/08839510290030381
Prendinger H., 2001, Proceedings of the Fifth International Conference on
Autonomous Agents, P270, DOI 10.1145/375735.376307
Prendinger H, 2001, IEEE T SYST MAN CY A, V31, P465, DOI 10.1109/3468.952722
PRENDINGER H, 2004, LIFE LIKE CHARACTERS
Prendinger H., 2004, KI ZEITSCHRIFT GERMA, V1, P4
Prendinger H., 2004, P TUT RES WORKSH AFF, P53
PRENDINGER H, 2002, PRICAI 2002 TRENDS A, V2417, P571
PRENDINGER H, 2001, P 2 WORKSH ATT PERS, P6
PRENDINGER H, 2005, INT J APPL ARTIFICIA, V19, P267, DOI DOI
10.1080/08839510590910174
Nass C., 1996, MEDIA EQUATION PEOPL
Ruttkay Z., 2004, BROWS TRUST EVALUATI
Saeyor S, 2001, FIFTH INTERNATIONAL CONFERENCE ON INFORMATION VISUALISATION,
PROCEEDINGS, P563, DOI 10.1109/IV.2001.942111
SAEYOR S, 2003, P AAMAS 03 WORKSH W1, P68
SAEYOR S, 2003, INTELLIGENT VIRTUAL
SHAIKH M, 2005, P 2005 IEEE WIC ACM, P246
SMID K, 2005, P AISB 05 S CONV INF, P103
Stock O, 2005, TEXT SPEECH LANG TEC, V27, P1, DOI 10.1007/1-4020-3051-7
STONE M, 2004, ACM T GRAPHICS SIGGR, V23
TSUTSUI T, 2000, P CDROM WEB NET 2000
Valitutti A., 2004, PSYCHNOLOGY J, V2, P61
ZONG Y, 2000, P CDROM WORKSH MULT
NR 69
TC 9
Z9 10
U1 0
U2 0
PU SPRINGER
PI NEW YORK
PA 233 SPRING ST, NEW YORK, NY 10013 USA
SN 0288-3635
EI 1882-7055
J9 NEW GENERAT COMPUT
JI New Gener. Comput.
PY 2006
VL 24
IS 2
BP 97
EP 128
DI 10.1007/BF03037295
PG 32
WC Computer Science, Hardware & Architecture; Computer Science, Theory &
Methods
SC Computer Science
GA 025TV
UT WOS:000236290700001
DA 2018-01-22
ER

PT S
AU Aydemir, D
Iba, H
AF Aydemir, Deniz
Iba, Hitoshi
BE Runarsson, TP
Beyer, HG
Burke, E
MereloGuervos, JJ
Whitley, LD
Yao, X
TI Evolutionary behavior acquisition for humanoid robots
SO PARALLEL PROBLEM SOLVING FROM NATURE - PPSN IX, PROCEEDINGS
SE Lecture Notes in Computer Science
LA English
DT Article; Proceedings Paper
CT 9th International Conference on Parallel Problem Solving from Nature
(PPSN IX)
CY SEP 09-13, 2006
CL Univ Iceland, Reykjavik, ICELAND
SP Univ Iceland, Fac Engn, Univ Iceland, Sci Inst
HO Univ Iceland
AB This paper describes and analyzes a series of experiments to develop a general
evolutionary behavior acquisition technique for humanoid robots. The robot's
behavior is defined by joint controllers evolved concurrently. Each joint
controller consists of a series of primitive actions defined by a chromosome. By
using genetic algorithms with specifically designed genetic operators and novel
representations, complex behaviors are evolved from the primitive actions defined.
Representations are specifically tailored to be useful in trajectory generation for
humanoid robots. The effectiveness of the method is demonstrated in two experiments:
a handstand task and a limbo dance task (leaning the body backwards so as to pass
under a fixed-height bar).
C1 Univ Tokyo, Dept Frontier Informat, Grad Sch Frontier Sci, Tokyo, Japan.
RP Aydemir, D (reprint author), Univ Tokyo, Dept Frontier Informat, Grad Sch
Frontier Sci, Tokyo, Japan.
EM deniz@iba.k.u-tokyo.ac.jp; iba@iba.k.u-tokyo.ac.jp
CR CHERNOVA S, 2004, P IROS
GOLDBERG DE, 1993, PROCEEDINGS OF THE FIFTH INTERNATIONAL CONFERENCE ON GENETIC
ALGORITHMS, P56
HARVEY I, 1997, ROBOTICS AUTONOMOUS, P205
HARVEY I, 1993, P 2 INT C SIM AD BEH, P364
KAJITA S, 2003, P ICR, V4, P1620
KAMIO S, 2005, P 2005 IEEE RSJ INT, P1676
Kanehiro F, 2004, INT J ROBOT RES, V23, P155, DOI 10.1177/0278364904041324
Nishiwaki K, 2002, 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS
AND SYSTEMS, VOLS 1-3, PROCEEDINGS, P2684, DOI 10.1109/IRDS.2002.1041675
NOLFI S, 1998, CONNECT SCI, P10
NOLFI S, 1997, ROBOTICS AUTONOMOUS, P187
ROFER T, 2005, LECT NOTES ARTIF INT, P310
TAKAGI H, 1998, IEEE INT C INT ENG S, P1
Wagner GP, 1996, EVOLUTION, V50, P967, DOI 10.1111/j.1558-5646.1996.tb02339.x
YANASE T, 2006, IN PRESS P GEN EV CO
NR 14
TC 1
Z9 1
U1 0
U2 0
PU SPRINGER-VERLAG BERLIN
PI BERLIN
PA HEIDELBERGER PLATZ 3, D-14197 BERLIN, GERMANY
SN 0302-9743
BN 3-540-38990-3
J9 LECT NOTES COMPUT SC
PY 2006
VL 4193
BP 651
EP 660
PG 10
WC Computer Science, Theory & Methods
SC Computer Science
GA BFE29
UT WOS:000241446400066
DA 2018-01-22
ER
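As an illustrative aside to the record above, the sketch below runs a plain generational
genetic algorithm over chromosomes that encode fixed-length sequences of primitive joint
targets, with one-point crossover and Gaussian mutation. The fitness function is a
hand-written toy objective standing in for the paper's humanoid simulator, and all sizes
and rates are assumptions.

# Toy GA over sequences of primitive joint targets (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
N_PRIMITIVES, N_JOINTS, POP, GENS = 8, 3, 60, 40

def fitness(chrom):
    # Stand-in objective: reward smooth sequences that end near a target pose.
    smoothness = -np.sum(np.diff(chrom, axis=0) ** 2)
    target = np.full(N_JOINTS, 0.5)
    return smoothness - 5.0 * np.sum((chrom[-1] - target) ** 2)

pop = rng.uniform(-1, 1, size=(POP, N_PRIMITIVES, N_JOINTS))
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][:POP // 2]]        # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_PRIMITIVES)                    # one-point crossover
        child = np.vstack([a[:cut], b[cut:]])
        child += rng.normal(scale=0.05, size=child.shape)      # Gaussian mutation
        children.append(np.clip(child, -1, 1))
    pop = np.concatenate([parents, np.stack(children)], axis=0)

print("best toy fitness:", round(max(fitness(c) for c in pop), 3))

The paper's contribution lies in the humanoid-specific representations and genetic
operators; this sketch only illustrates the surrounding evolutionary loop.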

PT S
AU Mayer, NM
Asada, M
Guerra, RD
AF Mayer, N. Michael
Asada, Minoru
Guerra, Rodrigo da Silva
BE Bredenfeld, A
Jacoff, A
Noda, I
Takahashi, Y
TI Using a symmetric rotor as a tool for balancing
SO ROBOCUP 2005: ROBOT SOCCER WORLD CUP IX
SE LECTURE NOTES IN ARTIFICIAL INTELLIGENCE
LA English
DT Article; Proceedings Paper
CT 9th International Robocup Symposium
CY JUL 18-19, 2005
CL Osaka, JAPAN
ID PASSIVE-DYNAMIC WALKING; ROBOT
AB In the Humanoid Leagues, balancing during walking and running is still the biggest
challenge for most teams. We present work in which a dynamic walker is stabilised by a
fast, heavy rotor, i.e. a gyro. The dynamics of a symmetric, fast-rotating gyro differ
from those of a non-rotating solid body; for example, under small disturbances it tends
to keep its axis of rotation unchanged. Results show that the rotor enhances the
stability of walking in the simulations. In a model of an actuated robot, the rotor is
used as a reaction wheel, i.e. the pitch of the robot is stabilised by accelerating and
decelerating it. We see this method, though it is not biologically inspired, as an
intermediate step towards learning balancing in biped robots. The control algorithm
responsible for balancing the pitch is discussed in detail. The simulations show that,
with this kind of stabilisation, movements such as standing up, walking, and jumping are
easily possible using open-loop control for the legs; however, high torques for the
rotor are necessary. Finally, a robot design that consists of just a trunk is presented.
C1 Osaka Univ, Dept Adapt Machine Syst, Osaka, Japan.
Osaka Univ, Handai Frontier Res Ctr, FRC, Osaka, Japan.
RP Mayer, NM (reprint author), Osaka Univ, Dept Adapt Machine Syst, Osaka, Japan.
EM norber@er.ams.eng.osaka-u.ac.jp; asada@er.ams.eng.osaka-u.ac.jp;
guerra@er.ams.eng.osaka-u.ac.jp
RI Mayer, Norbert Michael/F-7958-2010
OI Guerra, Rodrigo/0000-0003-4011-0901
CR CHRISTIAN JA, 2004, AIAA GUID NAV CONTR
Collins SH, 2001, INT J ROBOT RES, V20, P607, DOI 10.1177/02783640122067561
Ferreira ED, 2000, ADV ROBOTICS, V14, P459, DOI 10.1163/156855300741951
Garcia M.S., 1999, THESIS CORNELL U
HIRAKOSO H, 1996, JAPANESE TITLE, VA, P552
Kuo AD, 1999, INT J ROBOT RES, V18, P917, DOI 10.1177/02783649922066655
MAYER N, 2004, STABILIZING DYNAMIC
MAYER NM, 2004, P AROB OIT
MCGEER T, 1990, INT J ROBOT RES, V9, P62, DOI 10.1177/027836499000900206
Smith R., OPEN DYNAMICS ENGINE
SUGIHARA T, IEEE INT C ROB AUT, P1404
NR 11
TC 0
Z9 0
U1 1
U2 3
PU SPRINGER-VERLAG BERLIN
PI BERLIN
PA HEIDELBERGER PLATZ 3, D-14197 BERLIN, GERMANY
SN 0302-9743
BN 3-540-35437-9
J9 LECT NOTES ARTIF INT
PY 2006
VL 4020
BP 71
EP 80
PG 10
WC Computer Science, Artificial Intelligence; Robotics
SC Computer Science; Robotics
GA BEV59
UT WOS:000239623100007
DA 2018-01-22
ER
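As an illustrative aside to the record above, the sketch below simulates the
reaction-wheel mode mentioned in the abstract: a PD law on trunk pitch accelerates and
decelerates a rotor, and the reaction torque holds an inverted-pendulum-like trunk
upright while angular momentum accumulates in the wheel. The single-body pendulum
model, inertias, and gains are assumptions, not the paper's controller or robot
parameters.

# Reaction-wheel pitch stabilisation with a PD law (illustrative values only).
import numpy as np

I_body, I_wheel = 0.8, 0.02          # trunk and rotor inertias, kg*m^2 (assumed)
m, g, l = 10.0, 9.81, 0.4            # trunk mass, gravity, CoM height (assumed)
Kp, Kd = 120.0, 25.0                 # PD gains on pitch (hand-tuned, hypothetical)
dt, T = 0.001, 3.0

theta, theta_dot, omega_wheel = 0.1, 0.0, 0.0   # start 0.1 rad off vertical
for _ in range(int(T / dt)):
    tau = Kp * theta + Kd * theta_dot           # torque applied to the wheel
    # The reaction torque -tau acts on the trunk; gravity destabilises it.
    theta_ddot = (m * g * l * np.sin(theta) - tau) / I_body
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt
    omega_wheel += (tau / I_wheel) * dt         # the rotor absorbs the momentum

print(f"final pitch: {theta:.4f} rad, wheel speed: {omega_wheel:.1f} rad/s")

The rotor ends up spinning fast to absorb the transferred momentum, echoing the
abstract's remark that high rotor torques are necessary; a real controller would also
have to dump that momentum elsewhere.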

EF
