
2015 IEEE International Conference on Robotics and Automation (ICRA)
Washington State Convention Center
Seattle, Washington, May 26-30, 2015

An Emotion Recognition Comparative Study of Autistic and Typically-Developing Children using the Zeno Robot

Michelle J. Salvador1, Sophia Silver2, and Mohammad H. Mahoor1

1 Michelle Salvador (Michelle.Salvador@du.edu) and Mohammad H. Mahoor (Mohammad.Mahoor@du.edu) are with the Department of Electrical and Computer Engineering, University of Denver, 2390 S. York St., Denver, CO 80210, USA.
2 Sophia Silver (sophsilver94@gmail.com) is with the Department of Psychology, University of Denver, 2155 S. Race St., Denver, CO 80208, USA.

Abstract— In this paper we present the results of our recent study comparing the emotion expression recognition abilities of children diagnosed with high-functioning Autism (ASD) with those of typically developing (TD) children through the use of a humanoid robot, Zeno. In our study we investigated the effect of incorporating gestures on the emotion expression prediction accuracy of both child groups. Although the idea that ASD individuals suffer from general emotion recognition deficits is widely assumed [1], we found no significant impairment in general emotion prediction. However, a specific deficit in correctly identifying Fear was found for children with Autism when compared to the TD children. Furthermore, we found that gestures can significantly impact the prediction accuracy of both ASD and TD children in a negative or positive manner depending on the specific expression. Thus, the use of gestures for conveying emotional expressions by a humanoid robot in a social skill therapy setting is relevant. The methodology and experimental protocol are presented, and additional discussion of the Zeno R-50 robot used is given.

I. INTRODUCTION

Children with Autism spectrum disorder (ASD) experience deficits in verbal and non-verbal communication skills. This social deficit disorder has lifetime prevalence and can be extremely damaging to the child if not treated. Although there is no known cause or cure for Autism, children who have not received treatment can show signs of regression to the extent of becoming non-verbal [2]. Therefore, it is important to treat Autism early on, so that social awareness and communication skills can be learned to help the children function normally in society later on.

According to the DSM-V, ASD symptoms can include any of the following: deficits in social communication and social interactions, such as in social-emotional reciprocity or nonverbal communicative behaviors, among other symptoms. Although impairment in recognizing emotion cues, such as facial expressions, is not a diagnostic criterion for Autism, it is assumed to be a common deficit in individuals with ASD [1] [3]. Nonetheless, emotion reading has been suggested as a primary difficulty in Autism, since a deficit in emotional expression recognition would prevent the child from sensing others' emotional states [4] [5] and hinder the child's social development.

Since it is recognized that many children with ASD show a collective interest toward technology and robotics [6] [7], various Socially Assistive Robots (SAR) have been specifically developed to target facial expression recognition improvement in children with ASD. Since there is no general consensus on what a robot dedicated to Autism therapy should look like, researchers have developed robots with a wide range of anthropomorphic realism and facial complexity, with reported success in engaging children with Autism [8]. Focusing particularly on robots with emotionally expressive faces: robots with low facial complexity, such as KASPAR [9] and Tito [10], are designed so that children are more comfortable and engaged with the robot, as "anxiety and sensory overload" are reduced in the first interactions through a simplified face [6]. In contrast, FACE, developed in collaboration between the University of Pisa and Hanson Robotics (HR) [11], is a highly facially complex robot capable of displaying "nearly-realistic" human facial expressions [12].

As [1] notes, the idea that ASD individuals suffer from general emotion recognition deficits is widely assumed and is further encouraged by publication bias. Therefore, studies such as [12] and [13], which seek to use SAR as a way of teaching social skills to autistic children, may aim to develop systems that improve overall emotion recognition through both facial expressions and body language (gestures) because of this presumed deficit. However, detailed studies [14] have found that there is little evidence for a general emotion recognition deficit; instead, ASD individuals may have trouble recognizing only a few particular emotions out of the basic six outlined by Paul Ekman [15]. Hence, we recognize that in order to seek a treatment for improving emotion recognition in children with Autism, the deficiency in comparison to neuro-typically developing (TD) children must first be identified, if it exists. Furthermore, if this need for improvement exists, then a qualitative measure of the deficiency should be found for each particular emotion. By recognizing a measurable difference between the TD and ASD groups' emotion recognition abilities, a measure of improvement can be set as the desired outcome for a child undergoing SAR therapy treatment.

This paper presents a study testing how children with Autism recognize facial expressions illustrated by an expressive robotic platform, compared to TD children. As suggested in [1], we hypothesized that a recognition deficit would be found for children with ASD only for the negative emotions among the six basic ones [15], such as anger, fear, and disgust, when compared to the TD control group. Neutral is also evaluated. We hypothesized that no deficit would be found for Happy. Furthermore, we seek to test whether supplementing facial expressions with relevant body gestures can increase emotion recognition in TD children and children with ASD. Additionally, this paper serves as an evaluation of the Zeno R-50's capability to portray facial expressions, since the robot's novelty is primarily its expressive face. Though [13] presented a protocol description for a child-Zeno interaction, implementation and results were not published. Therefore this pilot experiment also serves as a first evaluation of the R-50 in an interactive, semi-autonomous ASD child-robot therapy protocol.

This paper is divided into six sections. Section II describes the Zeno R-50's hardware and software capabilities; a discussion of previous work done with the robot is also found there. Section III defines the methodology used in creating the experiment protocol. The results and discussion of the experiment are presented in Sections IV and V, respectively. Finally, Section VI concludes this paper; future work is also examined in that section.
II. EXPRESSIVE ROBOTIC PLATFORM: ZENO R-50

A. Use in Previous Work

In 2012, HR commercially released the Zeno R-50 to provide a more realistic robot than other facially expressive robots such as KASPAR, but one less realistic than FACE [11]. To do this, HR created Zeno with a simplified 11-DOF face and neck system, but used the same porous silicone material (Frubber). By using the Frubber material, the robot's facial skin closely resembles human tissue in its physical properties and can achieve a more realistic mechanical actuation, parallel to human muscle movement [16].

Initial proof-of-concept studies with Zeno conducted by HR showed that 9 out of 10 mid-functioning children with ASD (ages nine and ten) responded positively to Zeno in an applied behavior analysis (ABA) labeling task. These results suggest that Zeno can be a promising platform for ASD research, as the robot was positively accepted by, and engaging for, the participants [11]. However, these experiments conducted by HR mainly tested the acceptance levels of the children and did not use the robot's facial expressive capacity to engage the children in emotion recognition related tasks.

Few studies have so far exploited Zeno's facial expressivity capabilities, since the Zeno R-50 is a newly commercially available platform. However, one such study making use of the robot's facial capacities is the ZECA project [13]. In searching for a robot platform on which to base their proposed emotion recognition protocol, they selected the Zeno R-50 model for its cartoonish yet expressive face. However, the designed protocol only plans to include children with ASD, comparing those who will versus those who will not use the robot in the experiments. The protocol does not seek a way of measuring the assumed facial emotion deficit as compared to typically developing children [13].

B. Zeno R-50 Hardware

Developed by Hanson Robotics, Zeno is a child-sized robot (0.635 m max height, 6.5 kg weight) with a simplified expressive face [17]. The robot's face contains a total of eight DOF, with three DOF for the neck and 25 DOF for the body.

The body uses a combination of Dynamixel RX-64 and RX-28 servos: RX-64s are used on the legs, hips, and shoulders, and RX-28 servos for the waist. Using a combination of 3 Cirrus CS-101 STD 4g micro servos and 5 Hitec HS-65MG servos [18], the motors simulate facial expressions by actuating the Frubber facial skin. Additionally, the R-50 robot is equipped with an HD camera in each motorized eye and two microphones. A 9-axis inertial measurement unit is also available, along with bump sensors in each foot, among other sensors. Provided in the R-50 model is an onboard Linux computer with a 1 GHz Vortex 86DX x86 CPU and Wi-Fi and Ethernet connections [17]. Figure 1 shows the Zeno R-50 robot used in this experiment, seated in the study area as a participant would interact with it.

Fig. 1. Left) Zeno R-50 robot in the study area. Top Right) Example images from the Tilburg University emotion image set. Bottom Right) Example face gestures used in the study (from left to right): HappyT1F, SadT1F, and AngryT1F.

C. Motion Controlling and Developed Software

Provided with the Zeno R-50, the RoboWorkshop software allows editing and creation of Zeno .xml animation files containing servomotor position information. Automatic lip-syncing text-to-speech software is also provided. However, no software similar to Aldebaran's Choregraphe [19] is available to program Zeno to perform complex behaviors. Nonetheless, HR has made the RoboKind API available, with extensive Java libraries for communicating with Zeno. Using the available RoboKind API and the NetBeans IDE, we developed a GUI controller, ZenoBrain, for real-time control of Zeno's behavior sequences. ZenoBrain was thus developed to allow a mixture of autonomy in Zeno's behavior and Wizard-of-Oz interaction for manual operation, as needed in the protocol.
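To make this mixed-initiative operation concrete, the following is a minimal sketch, in the spirit of ZenoBrain, of a Wizard-of-Oz console loop: known commands dispatch pre-created animations, while free text is spoken through the robot's lip-synced text-to-speech. The RobotLink interface and all identifiers here are hypothetical stand-ins; the paper does not document the actual RoboKind API signatures.

import java.util.Map;
import java.util.Scanner;
import java.util.concurrent.ConcurrentHashMap;

public class WizardConsole {
    /** Hypothetical adapter around the robot's animation/TTS services. */
    interface RobotLink {
        void playAnimation(String animationFile); // e.g. "HappyT1F.xml"
        void say(String utterance);               // lip-synced TTS
    }

    private final RobotLink robot;
    private final Map<String, String> library = new ConcurrentHashMap<>();

    WizardConsole(RobotLink robot) {
        this.robot = robot;
        // Pre-created animations keyed by a short operator command.
        library.put("happy1", "HappyT1F.xml");
        library.put("sad1", "SadT1F.xml");
        library.put("angry1", "AngryT1F.xml");
    }

    /** Blocking operator loop: an animation key plays a canned behavior,
     *  any other text is spoken as a manual Wizard-of-Oz response. */
    void run() {
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine()) {
            String cmd = in.nextLine().trim();
            if (cmd.equals("quit")) break;
            String file = library.get(cmd);
            if (file != null) robot.playAnimation(file);
            else robot.say(cmd);
        }
    }

    public static void main(String[] args) {
        // Console stub standing in for the real robot connection.
        RobotLink stub = new RobotLink() {
            public void playAnimation(String f) { System.out.println("[anim] " + f); }
            public void say(String s)           { System.out.println("[tts ] " + s); }
        };
        new WizardConsole(stub).run();
    }
}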

III. METHODS

A. Development of Robot Emotion Animations

We modeled and implemented the six basic + neutral emotions on Zeno's face and saved them as animations for playback during the experimental sessions. Although Zeno lacks many of the facial action units (AUs) found on the human face, each emotion was modeled using the available AUs as described by Paul Ekman [15]. Table I shows the AUs used for each emotion animation and the motor name each corresponds to in the RoboWorkshop animation editor. Particularly for Disgust, since AU 9 (Nose Wrinkler) is listed by the EMFACS user manual as necessary, auxiliary AU 25 (lips part) and AU 7 (lids tight) were used. To attempt to simulate AU 9, the eyebrows on the robot were slightly lowered to simulate wrinkling of the upper skin over the nose.

TABLE I
AVAILABLE AUS IN THE ROBOT FACE USED FOR EMOTION ANIMATIONS.

Emotion Animation | AUs Used        | Motor Name
Happy             | AU12            | Smile Left/Right
Surprise          | AU1, AU5, AU26  | Brow Pitch, Eyelids, Jaw
Sadness           | AU1, AU15       | Brow Pitch, Smile Left/Right
Anger             | AU4, AU5, AU25  | Brow Pitch, Eyelids, Jaw
Disgust           | AU7, AU9, AU25  | Eyelids, Brow Pitch, Jaw
Fear              | AU1, AU5, AU26  | Brow Pitch, Eyelids, Jaw

Furthermore, three separate animations were created for each emotion, with AU intensity as the variant. The animations were labeled accordingly to distinguish the intensities, for example HappyT1F, HappyT2F, and HappyT3F. Correspondingly, the intensity for each animation was approximately 30%, 60%, and 100% of the maximum motor range allowed by Zeno's internal software in all the motors used to animate that expression.

In addition to the facial activation animations, another sequence of animations was designed to include body gesture posing. As noted by [20], it is generally unknown which specific gestures are needed to express the emotions, since humans can use many different body poses to express the same emotion. Therefore, we chose to model the robot's body gesture poses after images found in the Tilburg University emotion image set [22]. The set was produced from 50 actors asked to display each of the six basic emotions. From the image set, 18 images were chosen at random, three for each emotion. The matching between facial intensities and gestures was combined at random, as in the sketch below.
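As an illustration of this stimulus construction, here is a small sketch that draws three actor images per emotion and pairs each with a randomly chosen facial-intensity variant. The identifier scheme (actor IDs appended to the T1F/T2F/T3F names) is our own assumption; the paper does not publish its selection code.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class GestureSelection {
    static final String[] EMOTIONS =
        {"Happy", "Surprise", "Sadness", "Anger", "Disgust", "Fear"};

    public static void main(String[] args) {
        Random rng = new Random();
        List<String> gestureAnimations = new ArrayList<>();
        for (String emotion : EMOTIONS) {
            // Assume 50 actor images per emotion, as in the BEAST set [22].
            List<Integer> actorIds = new ArrayList<>();
            for (int id = 1; id <= 50; id++) actorIds.add(id);
            Collections.shuffle(actorIds, rng);
            for (int k = 0; k < 3; k++) {            // three images per emotion
                int intensity = 1 + rng.nextInt(3);  // random T1F..T3F pairing
                gestureAnimations.add(String.format(
                    "%sT%dF_actor%02d", emotion, intensity, actorIds.get(k)));
            }
        }
        gestureAnimations.forEach(System.out::println); // 18 animations total
    }
}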
The lower body was not incorporated into Zeno's body gesture animations, since we had not developed software for Zeno's lower-body kinematics at the time. Nonetheless, the animations based on these images were developed to mimic the actors' facial and upper-body expressions as closely as the available degrees of freedom allow.

In total, 37 animations were produced. Nineteen of these consisted of only AU-based facial expressions, including a neutral expression. The other 18 animations were each based on the gesture mimicry of the 18 human actors. The time dynamic for all the animations was designed with a 0.5 second linear ramp from neutral to the emotion's full motor activation, a one second hold of the pose, and a 0.5 second linear ramp back down to neutral. This profile was chosen for its simplicity and to give the motors sufficient time to complete their full trajectories.
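The trapezoidal profile just described is easy to state in code. The following minimal sketch uses the ramp and hold constants from the text; the sampling loop and method names are ours.

public class TrapezoidProfile {
    static final double RAMP_S = 0.5, HOLD_S = 1.0;

    /** Activation in [0,1] at time t (seconds) for a given peak intensity. */
    static double activation(double t, double peak) {
        if (t < 0) return 0.0;
        if (t < RAMP_S) return peak * (t / RAMP_S);               // ramp up
        if (t < RAMP_S + HOLD_S) return peak;                     // hold pose
        if (t < 2 * RAMP_S + HOLD_S)
            return peak * (1.0 - (t - RAMP_S - HOLD_S) / RAMP_S); // ramp down
        return 0.0;                                               // back to neutral
    }

    public static void main(String[] args) {
        double[] peaks = {0.3, 0.6, 1.0};          // T1F, T2F, T3F variants
        for (double t = 0.0; t <= 2.0; t += 0.25)  // sample the 2 s animation
            System.out.printf("t=%.2fs  T1F=%.2f  T2F=%.2f  T3F=%.2f%n",
                t, activation(t, peaks[0]), activation(t, peaks[1]),
                activation(t, peaks[2]));
    }
}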
B. Experiment Protocol Stages

Our study was conducted at the University of Denver, where IRB approval was obtained and all the children's parents signed a consent form. The protocol consisted of eight stages, each lasting a maximum of three minutes to maintain the interest level of the child. The experiment is presented to the child as a series of games in which the child should predict/guess what emotion the robot is trying to show. Figure 2 shows the room setup used in the experiment.

• Stage 1: Introduction of Zeno to Child
The first stage of the protocol involved a visual introduction of Zeno to the child and an explanation of the rules of the game. To aid in playing the game, the children were given a set of seven plain laminated cards, each containing one of the emotion labels ("happy", "sad", "neutral", etc.). If the child felt comfortable, the child was then left in the room alone with the robot.

• Stage 2: Child-Zeno Verbal Interaction
The purpose of this stage is to further allow the child to become comfortable with the robot by prompting a simple conversation. While the child-Zeno conversation occurred, the microphone located inside the room allowed the manual operator, assistant, and parent to listen. Depending on the responses given by the child, the manual operator is able to type in a response for Zeno to speak, or to perform one from a library of pre-created animations.

• Stages 3, 5, and 7: Emotion Recognition Game
In these stages, the main experiment game was carried out. For each game stage, a sequence of 13 emotions was randomly shown by Zeno. After showing each emotion, Zeno would resume a neutral pose and wait for the child's response. When the child was ready to make a guess, they verbally told Zeno what emotion they thought the robot was displaying. Listening through the audio speakers, the manual operator recorded the response as true or false; if the child's guess differed from Zeno's intended emotion, the operator also took note of which emotion the child thought Zeno was showing. At the child's request, the animation was repeated by Zeno if necessary. At times when the child was indecisive about their guess, the initial guess was taken as the official guess. (A sketch of one such game stage is given after this list.)

• Stages 4 and 6: Child-Zeno Interaction and Break
These stages provided needed breaks from the emotion recognition game to preserve the interest of the child in later game stages. Also, the child could choose to dance with Zeno as an entertaining joint activity.

• Stage 8: Zeno End Conversation
After the final emotion recognition game is played, Zeno provides a simple closing farewell and thanks the child for playing with "him".
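As a sketch of the bookkeeping implied by Stages 3, 5, and 7, the following assumes a 13-trial deck (its exact composition is not stated in the paper) and tallies intended versus guessed emotions in the same form as the confusion matrices of Figs. 5 and 6. The askChild() method is a placeholder for the live child response recorded by the operator.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class GameStage {
    static final String[] LABELS =
        {"happy", "surprise", "sad", "angry", "disgust", "fear", "neutral"};

    public static void main(String[] args) {
        // Build a 13-trial deck (here: each of 7 labels once, plus 6 repeats).
        List<String> trials = new ArrayList<>(List.of(LABELS));
        trials.addAll(List.of("happy", "surprise", "sad",
                              "angry", "disgust", "fear"));
        Collections.shuffle(trials); // random presentation order per stage

        int[][] confusion = new int[LABELS.length][LABELS.length];
        for (String shown : trials) {
            String guessed = askChild(shown);          // operator logs the guess
            confusion[index(shown)][index(guessed)]++; // intended x selected
        }
        for (int[] row : confusion)
            System.out.println(java.util.Arrays.toString(row));
    }

    static int index(String label) { return List.of(LABELS).indexOf(label); }

    /** Placeholder for the live response; here every guess is correct. */
    static String askChild(String shown) { return shown; }
}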

Fig. 2. Room setup of the experiment protocol.

C. Description of Children Participants

Twenty-three children were recruited for the study. Of these, 22 children between the ages of 7 and 13 (age M=9.0) completed the experiment. Eleven were classified as high-functioning autistic by medical diagnosis (age M=9.1, SD=1.29; Female=2, Male=9) and 11 as neuro-typical children (age M=8.8, SD=1.89; Female=5, Male=6). Although the gender distribution of these two groups differed, we found this distribution to be reflective of the US population statistics for each group [21] [23].

In accepting participants, we ensured that the children in our ASD group had a formal diagnosis from a doctor or psychologist. The parents or guardians of the children were also asked whether the child was at least about 80% verbal and had the ability to read simple words. Individuals with low-functioning Autism were excluded from this study, as it required participants to sit in the room with the robot for three minutes to complete each task.

As a control group, researchers recruited neuro-typical functioning children who had never been diagnosed with any kind of developmental or social deficit disorder. Additionally, children who have siblings with ASD or have otherwise been exposed to ASD in their homes were excluded from the study, to ensure a separation between the TD control group and ASD individuals.

IV. RESULTS

Overall, the ASD group of children in our study achieved a slightly higher average accuracy than the TD group for recognizing the emotions the robot was meant to convey (ASD average prediction accuracy percentage M=55.7, SD=12.2) (TD average prediction accuracy percentage M=54.5, SD=6.2). (For the remainder of this paper, the average prediction accuracy percentage of correct predictions for each condition will be reported in parentheses.) See Figure 3. However, when the general results are broken down into the predictions where the robot provided only the facial expression (ASD M=56.8%, SD=15.2%) (TD M=54.9%, SD=10.2%) and the predictions where the robot used gestures in the expression (ASD M=49.0%, SD=11.9%) (TD M=54.0%, SD=8.6%), the TD group outperformed the ASD group when gestures were added, Figure 4. In observing the ability of TD versus ASD children to correctly predict the robot's emotional expression as a general combination of all 7 emotions, there is no statistical difference between the groups, t(21)=0.27, p>.05.

Fig. 3. The average group accuracy percentage is shown for both the ASD and TD groups. Each bar represents the group average for that specific emotion. Error bars are +/- 1 SE.

Fig. 4. The group accuracy when only the face was used and when gestures were incorporated. For example, the "ASD face" bars are the average ASD group percentage accuracy when only facial expressions were used by the robot to show an emotion. Error bars are +/- 1 SE.

Comparing the ASD group's predictions when the robot showed only a facial expression versus a gesture expression, the overall average accuracy dropped with the addition of gestures, as also shown in Figure 3. In particular, for Angry the addition of gestures significantly lowered the ASD group's ability to recognize the emotion (Face ASD M=57.6, SD=28.8) (Gest ASD M=33.3%, SD=28.4%), t(10)=1.72, p<0.05. When gestures were added, the ASD group chose Disgust more frequently when Angry was shown by the robot.

In contrast, Disgust (Face ASD M=24.2, SD=20.5) (Gest ASD M=60.6, SD=31.2) and Fear (Face ASD M=24.2) (Gest ASD M=36.3, SD=36.1) both showed a prediction improvement for children with ASD when gestures were added. However, only Disgust showed a significant difference with the addition of gestures, t(10)=1.74, p<0.01. Performing the same comparison for each emotion in the TD group, most emotion predictions also dropped in accuracy with the addition of gestures. The exceptions to this drop were once again Disgust (Face TD M=18.2, SD=21.9) (Gest TD M=36.4, SD=33.2) and Fear (Face TD M=12.1, SD=16.03) (Gest TD M=66.7, SD=24.6), with Fear showing a strong improvement, t(10)=1.74, p<0.001. Also, Surprise did not show any change with the addition of gestures for the TD individuals (Face TD M=75.8, SD=32.1) (Gest TD M=75.8, SD=24.6).
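For reference, the group comparisons above use standard t statistics; the paper does not state the exact variants, so the following is a sketch of the usual forms, together with the standard error plotted in Figs. 3 and 4. The between-group comparison is an independent-samples test:

\[
t = \frac{\bar{x}_{\mathrm{ASD}} - \bar{x}_{\mathrm{TD}}}{s_p\sqrt{\frac{1}{n_{\mathrm{ASD}}}+\frac{1}{n_{\mathrm{TD}}}}},
\qquad
s_p^2 = \frac{(n_{\mathrm{ASD}}-1)\,s_{\mathrm{ASD}}^2+(n_{\mathrm{TD}}-1)\,s_{\mathrm{TD}}^2}{n_{\mathrm{ASD}}+n_{\mathrm{TD}}-2},
\]

while the within-group face-versus-gesture comparisons over the n = 11 children of a group (df = n - 1 = 10) are paired:

\[
t = \frac{\bar{d}}{s_d/\sqrt{n}}, \qquad d_i = \mathrm{acc}^{\mathrm{gest}}_i - \mathrm{acc}^{\mathrm{face}}_i,
\qquad
\mathrm{SE} = \frac{s}{\sqrt{n}}.
\]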

Fig. 5. Confusion matrix for the recognition of the six basic + neutral emotions by the children diagnosed with ASD. Columns are the emotion meant to be shown, while rows are the emotion selected by the ASD participants.

Fig. 6. Confusion matrix for the TD children's recognition of the basic emotions. Columns are the emotion meant to be shown, while rows are the emotion selected by the TD participants.

For both ASD and TD groups, Happy showed a substantial decrease in accuracy with the incorporation of gestures [(Face ASD M=90.9) (Gest ASD M=63.6), t(10)=2.58, p<0.01] [(Face TD M=84.9) (Gest TD M=45.4), t(10)=3.17, p<0.01]. For both groups, confusion of Surprise with the shown Happy expression increased. Although no other emotions showed a substantial difference between the TD and ASD groups' abilities to recognize emotions, Fear showed a strong significant difference between the TD and ASD groups when gestures were added (Gest ASD M=36.4, SD=36.1) (Gest TD M=66.7, SD=24.6), t(20)=-2.19, p<0.01. Neutral is not taken into account in these cases, as there was no added gesture for that facial expression.

V. DISCUSSION

In our study we did not find a general significant impairment in the ASD group for recognizing emotions. However, the addition or absence of gestures did produce a substantial difference between the two groups' interpretations of particular emotions. Although incorporating gestures slightly reduced the accuracy of both study groups' guesses overall, the addition of gestures for Happy greatly lowered the guess accuracy in both groups. In the [22] image data set used to model Zeno's gestures, the human actors recorded for the set were asked to display their expression by associating a scenario with the emotion they were interpreting: "encountering an old friend not seen in years and being very pleased to see" [22]. Thus the actors may have mixed Happy and Surprise in their gestures by pretending to meet their unexpected friend.

In contrast, the addition of gestures for Disgust significantly improved the ability of the children with ASD to recognize the emotion. Since the robot lacks the ability to show AU 9 (Nose Wrinkler), it is understandable why Disgust had a low recognition rate for the face-only expression. Nonetheless, with gestures the accuracy increased from 24% to 61%. By comparison, a study conducted by [25] using the same image set as ours reported a 76% accuracy for Disgust by typical adults. Considering that the emotion was displayed by a simplified humanoid robot, the increase in accuracy for Disgust when supplementing the emotion expression with gestures is of great significance for the ASD group. We note that the expression of Disgust would likely have improved further had the robot been able to open its hands as if pushing something away, mimicking the human actors' hand gestures. Similarly, the TD group benefited greatly from the incorporation of gestures for Fear, improving from 12% to 66% accuracy. Although the ASD group slightly improved with the addition of gestures for Fear, the increment is not comparable to that found for the TD group. Hence, the only significant impairment that we found in children with ASD when compared to TD children was the ability to recognize Fear when gesture expressions are used. These findings are in concordance with [1], where data from over 980 Autistic participants across 48 papers were compiled and Fear was the only emotion for which ASD individuals showed marginally significant deficiencies.

Concerning the implementation of the experimental protocol, Zeno elicited varied initial reactions from the children in our study. All TD children showed acceptance toward the robot when first introduced to Zeno. In contrast, 3 out of 12 children with ASD showed a strong aversion. Of these, one refused to enter the room with Zeno, stating a dislike for its human-like face. The other two participants showed fear toward the robot; however, when told that their parent could join them in the room, they agreed to participate in the study. In such cases the parents were present in the study room but were instructed not to give any verbal or nonverbal cues to their child about the emotion being asked by the robot. Nonetheless, the other children with ASD showed a strong curiosity about the robot, particularly its face. Sixteen of the 22 participants who completed the study agreed to dance with the robot.

VI. CONCLUSION AND FUTURE WORKS

This paper presented a comparative study testing how children diagnosed with ASD versus TD children recognize emotion expressions shown by a humanoid robot. We also studied the effect of the robot's use of body gestures on the children's expression recognition predictions. In a group of 22 participants, we found that there was no significant impairment in the ASD group when compared to TD children for recognizing the basic emotion expressions on average. However, a strong impairment for the ASD group in recognizing Fear was found, relative to the TD control group, when gestures were added. Additional analysis comparing the use of only facial expressions versus the incorporation of gestures showed a significant reduction of the average prediction accuracy for Happy in both groups.
Conversely, a significant improvement was shown for Disgust in the ASD group, but not the TD group, when gestures were added. Also, a significant improvement was shown in Fear recognition for the TD group but not the ASD group. The improvement difference was so dramatic that this proved to be the only deficit significantly differentiating the ASD and TD groups. Through these findings we demonstrated that a general impairment in expression recognition for children diagnosed with Autism should not be assumed when designing SAR therapies for them. Instead, each emotion should be evaluated individually. These results using a humanoid robot therefore support the findings of several psychology papers, such as those compiled by [1] and [26], which have shown that individuals with ASD are overall successful in matching emotion expressions in still images. Furthermore, we showed that the use of gestures can significantly impact the prediction accuracy of both ASD and TD children in a negative or positive manner depending on the emotion. We also demonstrated the successful capability of the Zeno R-50 to convey all six basic emotion expressions when using a combination of facial and body gestures. Although the children diagnosed with Autism did not show any significant impairment in correctly labeling most expressions, future work should investigate whether the children can truly identify the emotional meaning connected to the label and visual cue. To test this, for example, the child can be asked to make up a short story explaining why the robot may be showing such an emotion expression. As this may prove difficult for some children, the use of electrodermal activity to identify whether the child associates the expression with an emotion can also be explored. Through this work, we seek to contribute to the SAR and Autism research community toward the development of improved and more targeted therapies for social emotion skill development.

VII. ACKNOWLEDGMENT

This research is partially supported by grant IIS-1450933 from the National Science Foundation.

REFERENCES

[1] M. Uljarevic and A. Hamilton, "Recognition of Emotions in Autism: A Formal Meta-analysis," Journal of Autism and Developmental Disorders, 2012. [Online]. Available: http://link.springer.com/article/10.1007%2Fs10803-012-1695-5?LI=true#page-1
[2] R. Goin-Kochel, A. Esler, S. Kanne, and V. Huss, "Developmental Regression Among Children With Autism Spectrum Disorder: Onset, Duration, and Effects on Functional Outcomes," Research in Autism Spectrum Disorders, vol. 8, no. 7, pp. 890-898, 2014.
[3] K. D. Atwood, "Recognition of facial expressions of six emotions by children with specific language impairment," Master's thesis, BYU Department of Communications Disorders, August 2006.
[4] R. P. Hobson, "The autistic child's appraisal of expressions of emotion: A further study," Journal of Child Psychology and Psychiatry, vol. 27, pp. 671-680, 1986.
[5] P. Ekman, "Facial expressions of emotion: An old controversy and new findings," Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, vol. 335, pp. 63-69, 1992.
[6] D. J. Ricks and M. B. Colton, "Trends and considerations in robot-assisted Autism therapy," in Robotics and Automation (ICRA), 2010 IEEE International Conference on, pp. 4354-4359, 3-7 May 2010.
[7] S. M. Mavadati, H. Feng, A. Gutierrez, and M. H. Mahoor, "Comparing the Gaze Responses of Children with Autism and Typically Developed Individuals in Human-Robot Interaction," in 2014 IEEE-RAS International Conference on Humanoid Robots, 2014.
[8] B. Scassellati, H. Admoni, and M. Mataric, "Robot Use in Autism Research," Annual Review of Biomedical Engineering, pp. 275-294, 2012.
[9] J. Wainer, B. Robins, F. Amirabdollahian, and K. Dautenhahn, "Using the Humanoid Robot KASPAR to Autonomously Play Triadic Games and Facilitate Collaborative Play Among Children With Autism," IEEE Transactions on Autonomous Mental Development, vol. 6, no. 3, pp. 183-199, Sept. 2014.
[10] A. Duquette, F. Michaud, and H. Mercier, "Exploring the use of a mobile robot as an imitation agent with children with low-functioning Autism," Autonomous Robots, vol. 24, pp. 147-157, 2007.
[11] D. Hanson, D. Mazzei, C. Garver, A. Ahluwalia, D. De Rossi, M. Stevenson, and K. Reynolds, "Realistic Humanlike Robots for Treatment of ASD, Social Training, and Research; Shown to Appeal to Youths with ASD, Cause Physiological Arousal, and Increase Human-to-Human Social Engagement," in PETRA, pp. 1-7, 2012.
[12] G. Pioggia, R. Igliozzi, M. Ferro, A. Ahluwalia, F. Muratori, and D. De Rossi, "An android for enhancing social skills and emotion recognition in people with Autism," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 13, no. 4, p. 507, Dec. 2005.
[13] S. C. Costa, F. O. Soares, A. P. Pereira, and F. Moreira, "Constraints in the design of activities focusing on emotion recognition for children with ASD using robotic tools," in Biomedical Robotics and Biomechatronics (BioRob), 2012 4th IEEE RAS & EMBS International Conference on, pp. 1884-1889, 24-27 June 2012.
[14] S. Ozonoff, B. Pennington, and S. Rogers, "Are there emotion perception deficits in young autistic children?" Journal of Child Psychology and Psychiatry and Allied Disciplines, vol. 31, pp. 343-361, 1990.
[15] P. Ekman, "Are there basic emotions?," Psychological Review, vol. 99, no. 3, pp. 550-553, 1992.
[16] D. Hanson and S. Priya, "An Actuated Skin for Robotic Facial Expressions, NSF Phase 1 Final Report," National Science Foundation STTR award, NSF 05-557, 2006-2007.
[17] Robot Shop, "Robo Kind Specifications." [Online]. Available: http://www.robotshop.com/media/files/PDF/hanson-robokind-specifications.pdf [Accessed: September 2014]
[18] I. Ranatunga, J. Rajruangrabin, and D. Popa, "Enhanced Therapeutic Interactivity using Social Robot Zeno," in PETRA 2011, May 25-27, 2011.
[19] Aldebaran, "SDK, Simple software for developing your robot." [Online]. Available: http://www.aldebaran.com/en/robotics-solutions/robot-software/development [Accessed: September 2014]
[20] S. Costa, F. Soares, and C. Santos, "Facial Expressions and Gestures to Convey Emotions with a Humanoid Robot," in Social Robotics, vol. 8239, G. Herrmann, M. Pearson, A. Lenz, P. Bremner, A. Spiers, and U. Leonards, Eds. Bristol, UK: Springer International Publishing, pp. 542-551, 2013.
[21] Centers for Disease Control and Prevention, "Autism Spectrum Disorder (ASD) Data and Statistics." [Online]. Available: http://www.cdc.gov/ncbddd/Autism/data.html [Accessed: September 2014]
[22] B. de Gelder and J. Van den Stock, "The Bodily Expressive Action Stimulus Test (BEAST). Construction and Validation of a Stimulus Basis for Measuring Perception of Whole Body Expression of Emotions," Frontiers in Psychology, vol. 2, p. 181, Aug 9, 2011.
[23] L. Howden and J. Meyer, "Age and Sex Composition: 2010," 2010 Census Briefs, http://www.census.gov/prod/cen2010/briefs/c2010br-03.pdf, May 2011.
[24] P. Ekman, W. Friesen, and J. Hager, "Facial Action Coding System: The Manual," 2002.
[25] K. Schindler, L. Van Gool, and B. de Gelder, "Recognizing Emotions Expressed by Body Pose: a Biologically Inspired Neural Model," Neural Networks, 2008.
[26] D. N. McIntosh, "Facial Feedback Hypotheses: Evidence, Implications, and Directions," Motivation and Emotion, vol. 20, no. 2, pp. 121-147, 1996.
