Research Overview
Individuals with Autism Spectrum Disorder (ASD) often show deficits in verbal and non-verbal communication, including but not limited to motor control, emotional facial expressions, eye-gaze attention, and joint attention. A growing body of evidence suggests that robots can trigger social responses in individuals with ASD more effectively than humans can. Therefore, at the University of Denver, we aim to employ humanoid robots, such as NAO and ZENO, to investigate the efficacy of customized robot-based games for improving targeted social skills, and to compare the responses of ASD groups with those of a Typically Developing (TD) control group. Below we introduce several studies based on these robots:
NAO: Human-Robot Interaction (Social Interaction and Intervention Sessions):
In this study we utilized a humanoid robot, called NAO, to measure gaze responses, facial expression recognition and imitation, and several other capabilities of individuals (both TD and ASD) while they interact with the robot. NAO is a multifunctional humanoid robot capable of making different gestures and arm/leg postures, and it has several embedded functionalities (e.g. text-to-speech, tactile sensors, face recognition, and voice recognition). These features helped us design and build a socially communicative game setting for human-robot dyadic interaction. Given NAO's small size and friendly appearance, we designed and conducted two sets of games (Protocol A and Protocol B) to analyze the responses of ASD individuals and compare their interaction patterns with those of a TD control group. This platform has enabled us to provide feedback and guidance to children with ASD in order to improve their behavioral and social responses.
Protocol A: NAO-Child Interaction (Gaze Analysis and Modeling)
In this study we measure the gaze fixation and gaze-shifting frequency of both TD and ASD groups as they interact with NAO. We designed two social games (i.e. NAO Spy and Find the Suspect) and recruited 21 subjects (14 ASD and 7 TD children, aged 7-17 years) to interact with NAO. All sessions were video recorded, and the recordings were used for analysis. In particular, we manually annotated the gaze direction of the children (i.e. gaze averted ‘0’ or gaze at robot ‘1’) in every frame of the videos within two social contexts (i.e. child speaking and child listening). More details about this study can be found in [PDF].
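Both measures can be computed directly from the per-frame annotation sequence described above. The sketch below is illustrative only; the function name and the frame rate are our assumptions, not details from the published analysis:

```python
def gaze_statistics(annotations, fps=30):
    """Summarize a per-frame gaze annotation sequence.

    annotations: list of 0/1 labels (0 = gaze averted, 1 = gaze at robot),
                 one label per video frame, as in the manual annotation step.
    fps: frame rate of the recording (assumed here to be 30).
    Returns (fixation_ratio, shifts_per_minute).
    """
    if not annotations:
        raise ValueError("empty annotation sequence")
    # Fixation: fraction of frames in which the child looks at the robot.
    fixation_ratio = sum(annotations) / len(annotations)
    # A gaze shift is any label change between consecutive frames.
    shifts = sum(1 for a, b in zip(annotations, annotations[1:]) if a != b)
    minutes = len(annotations) / fps / 60
    shifts_per_minute = shifts / minutes if minutes > 0 else 0.0
    return fixation_ratio, shifts_per_minute
```

For example, the six-frame sequence [1, 1, 0, 0, 1, 1] yields a fixation ratio of 4/6 and two gaze shifts.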
In addition to these statistical measures (i.e. gaze fixation and shifting), we statistically modeled the gaze responses of both groups (TD and ASD) using Markov models (e.g. the Hidden Markov Model (HMM) and the Variable-order Markov Model (VMM)). Markov-based modeling allows us to analyze the sequence of gaze directions of the ASD and TD groups within the two aforementioned social contexts.
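To give a flavor of this kind of sequence modeling, the sketch below estimates a simple first-order Markov transition matrix from binary gaze sequences. This is only a minimal illustration of the idea, not the HMM or VMM machinery actually used in the study, and the function name is our own:

```python
from collections import Counter

def transition_matrix(sequences):
    """Estimate first-order Markov transition probabilities P(next | current)
    from binary gaze sequences (0 = gaze averted, 1 = gaze at robot).

    sequences: iterable of per-frame label lists, one list per session.
    Returns a nested dict: matrix[current][next] -> probability.
    """
    counts = Counter()
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[(cur, nxt)] += 1
    matrix = {}
    for cur in (0, 1):
        total = counts[(cur, 0)] + counts[(cur, 1)]
        if total:  # skip states never observed as a "current" state
            matrix[cur] = {nxt: counts[(cur, nxt)] / total for nxt in (0, 1)}
    return matrix
```

Comparing such transition probabilities (e.g. how likely a child is to keep looking at the robot versus avert gaze) between the ASD and TD groups is the intuition behind the Markov-based analysis.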
Protocol B: Therapeutic Games (Baseline and Intervention Sessions)
This protocol contains five robot-based games targeted at improving the social skills of children with high-functioning ASD. The games practice social skills such as verbal communication, joint attention, eye-gaze attention, and facial expression recognition/imitation. The objective of this protocol is to provide intervention sessions tailored to the needs of children diagnosed with autism. Therefore, at the beginning of our experiment, each participant attends three “Baseline Sessions” to evaluate his/her social skill levels and behavioral responses. For each subject with ASD, if the baseline result for any of the five social games is lower than 80%, we run “Intervention Sessions”. Each intervention session focuses on improving the skills the child struggled with. For example, if a child cannot recognize facial expressions, the robot provides feedback on how each basic facial expression (e.g. Neutral, Happy, Sad, Angry) is formed and asks the child to practice recognizing them. While the child interacts with the robot, the parents and caregivers can observe and listen to the entire session. This is an ongoing research study, and more details will be released soon.
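The 80% baseline criterion above amounts to a simple per-game decision rule, which could be sketched as follows (the game names, score representation, and function name are hypothetical; only the 80% threshold comes from the protocol):

```python
BASELINE_THRESHOLD = 0.80  # the 80% criterion from Protocol B

def games_needing_intervention(baseline_scores):
    """Select the games that trigger "Intervention Sessions".

    baseline_scores: dict mapping a game name to the child's mean score
    (0.0-1.0) over the three "Baseline Sessions".
    Returns the list of games scoring below the 80% threshold.
    """
    return [game for game, score in baseline_scores.items()
            if score < BASELINE_THRESHOLD]
```

A child scoring 60% on facial expression recognition but 90% on joint attention would thus receive intervention sessions only for the former.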