Date: August 1, 2019

NRI: Interactive Robotic Orchestration: Music-based Emotion and Social Interaction Therapy for Children with ASD

Researchers: Myounghoon “Philart” Jeon, PI

Sponsor: National Institutes of Health through the National Robotics Initiative

Amount of Support: $258,362

Abstract: The purpose of this research is to design novel forms of musical interaction, combined with physical activities, to improve the social interactions and emotional responses of children with autism spectrum disorder (ASD). Children with ASD commonly show deficits in emotional and social interaction. We propose to address two aspects of this issue: physio-musical stimuli for initiating engagement, and empathizing for deepening interaction, thereby enhancing a child's emotional and social interactions. Children with or without ASD between the ages of 5 and 10 may join this study.

Summary: In the United States, the rapid increase in the number of children with autism spectrum disorder (ASD) has exposed a shortage of accessible therapies addressing emotion and social interaction for these children. A number of approaches have been developed, including several robotic therapeutic systems, but most of these efforts have centered on speech interaction and task-based turn-taking scenarios. Unfortunately, the spectrum of ASD is so diverse that current approaches, however novel and intriguing, remain insufficient to provide parameterized therapeutic tools and methods.

To overcome this challenge, new techniques must be developed that allow robots to interact autonomously with children and effectively stimulate their emotional and social interactivity. We build on recent studies showing that the neural domains for music, emotion, and motor behavior overlap strongly in the premotor cortex. We propose that musical interaction and activities can provide a new therapeutic domain for effective development of children's emotion and social interaction. Of key importance within this proposed work is providing capabilities for the robotic system to monitor the emotional and interactive states of children and to provide effective musical and behavioral stimuli in response. Major research questions include: (1) What kinds of music-based signals and music-coordinated activities can play effective roles in encouraging emotional interaction and social engagement? (2) How can robotic learning of human behaviors during musical activities increase human-robot interaction and reinforce emotional and social engagement? (3) What metrics can be designed to effectively measure and evaluate changes in emotional and social engagement through musical interaction activities?

Intellectual Merits: Designing and evaluating core techniques to fuse music, emotion, and socio-physical interaction should be invaluable for advancing theory in affective computing, robotics, and engineering psychology, as well as for providing guidelines for developing an effective therapeutic robot companion. Through this research, we will identify the musical components most crucial for stimulating children's emotional and interactive intentions, and the most effective ways to couple those components with motion behaviors to maximize children's social engagement and development. The findings of the proposed work will also contribute to the design of interactive scenarios for natural and creative therapy with an individualized and systematic approach.

Broader Impacts: Successful development of our music-based framework could create a new domain of pediatric therapy and greatly increase the ability of robots to interact with children in a safe and natural manner. While the novelty and significance of this approach lie in therapeutic design for children with ASD, the foundation of our interactive and adaptive reinforcement scheme can be extended to other pediatric populations and developmental studies. We plan to incorporate this knowledge and these approaches into courses on robotics and affective computing. Furthermore, we plan to encapsulate many of these ideas in an outreach workshop for underrepresented students. Undergraduate research projects and demonstrations are expected to inspire the next generation of engineers and scientists to envision a robot-coexistent world.

Main Results: We have developed an emotion-based robotic motion platform that encapsulates spatial components as well as emotional dimensions in robotic movement behaviors. The Romo robot and DARwIn-OP are the first robots used in our project (Figure 1). The robotic characters and interfaces have also been newly designed to accommodate the characteristics of children with ASD while satisfying software design specifications. We have also developed a multi-modal analysis system to monitor and quantify children's physical engagement and emotional reactions through a facial-expression recognition app (Figure 2), a Kinect-based movement analysis system, a voice analysis app, and music analysis software. All systems are designed for mobile computing environments, so our therapy sessions can be installed anywhere with adequate space and connectivity. We have implemented a sonification server (Figure 3) with 600 emotional sound cues for 30 emotional keywords and a real-time sonification generation platform. We have also carried out an extensive user study with American and Korean participants to validate our sounds and compare cultural differences; the results show commonalities in emotional sound preferences between the two countries, allowing us to narrow down our emotional sound cues for further research. Finally, we have created robotic scenarios for a pilot study ("Typical Day of Senses" and "Four Seasons") and will expand the scenarios with diverse genres of music and motion library sets.
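As a rough illustration of the keyword-to-cue design described above, the minimal Python sketch below shows how a sonification server might select one of its sound cues for a requested emotion keyword. The dictionary contents, file paths, and selection logic are hypothetical illustrations under the stated assumptions, not the project's actual code.

```python
import random

# Hypothetical catalog: each of 30 emotion keywords would map to ~20 cue
# files (600 cues total); only two keywords are shown here.
EMOTION_CUES = {
    "joy":  ["cues/joy_01.wav", "cues/joy_02.wav"],    # ...more variants
    "calm": ["cues/calm_01.wav", "cues/calm_02.wav"],  # ...more variants
}

def pick_cue(emotion: str) -> str:
    """Return one sound-cue file path for the requested emotion keyword."""
    if emotion not in EMOTION_CUES:
        raise KeyError(f"unknown emotion keyword: {emotion!r}")
    return random.choice(EMOTION_CUES[emotion])

# In a client-server deployment, the robot client would send the keyword
# over the network and play back the returned cue.
print(pick_cue("joy"))  # e.g., cues/joy_02.wav
```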

Publications

Robotic Sonification for Promoting Emotional and Social Interactions of Children with ASD

Musical Robots for Children with ASD Using a Client-Server Architecture

Novel In-vehicle Interaction Design and Evaluation

Researchers: Philart Jeon, PI

Sponsor: Hyundai Motor Company

Amount of Support: $130,236

Duration of Support: 1 year

Purpose and Target: To investigate the effectiveness of an in-vehicle gesture control system and culture-specific sound preferences, Michigan Tech will design a prototype in-air gesture system with auditory displays and conduct successive experiments using a medium-fidelity driving simulator.

Technical Background: Touchscreens in vehicles have increased in popularity in recent years. Touchscreens provide many benefits over traditional analog controls such as buttons and knobs, but they also introduce new problems. Because they are visual displays, touchscreens demand relatively high amounts of visual-attentional resources, and driving is itself a visually demanding task. Competition between driving and touchscreen use for visual-attentional resources has been shown to increase unsafe driving behaviors and crash risk [1]. Driving researchers have been calling for new infotainment system designs that reduce visual demands on drivers [2]. Recent technological advances have made it possible to develop in-air gesture controls. In-air gesture controls, if supported with appropriate auditory feedback, may limit visual demands and allow drivers to navigate menus and controls without looking away from the road. Research has shown that the accuracy of surface gesture movements can be increased with the addition of auditory feedback [3]. However, there are many unanswered questions surrounding the development of an auditory-supported, in-air gesture-controlled infotainment system: What type of auditory feedback do users prefer? How can auditory feedback be displayed to limit cognitive load? What type of menu offers an easily navigable interface for both beginners and experienced users? More importantly, do these displays reduce eyes-off-road time and the frequency of long off-road glances? Does the system improve driving safety overall when compared to touchscreens or analog interfaces? These are among the many questions we attempt to address in this research project. Moreover, we want to explore whether there are cultural differences in auditory perception. As a starting point, HMC and MTU will design in-vehicle sounds and conduct an experiment with American populations.
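To make the concept concrete, here is a minimal Python sketch of how in-air gesture events might drive menu navigation with auditory (earcon) feedback, so a driver could track menu position by pitch alone. The gesture names, menu items, and pitch mapping are illustrative assumptions, not the prototype under study.

```python
MENU = ["Radio", "Media", "Phone", "Navigation", "Climate"]
BASE_FREQ_HZ = 220.0  # earcon pitch for the first menu item

def earcon_frequency(index: int) -> float:
    # Map menu position to pitch (one semitone per step) so position
    # is audible without glancing at the screen.
    return BASE_FREQ_HZ * (2 ** (index / 12))

class GestureMenu:
    def __init__(self) -> None:
        self.index = 0

    def handle(self, gesture: str) -> str:
        if gesture == "next":
            self.index = (self.index + 1) % len(MENU)
        elif gesture == "previous":
            self.index = (self.index - 1) % len(MENU)
        elif gesture == "select":
            return f"selected {MENU[self.index]}"
        # A real prototype would synthesize the earcon (and possibly speak
        # the item name); here we just report what would be played.
        return f"{MENU[self.index]} (earcon at {earcon_frequency(self.index):.0f} Hz)"

menu = GestureMenu()
print(menu.handle("next"))    # Media (earcon at 233 Hz)
print(menu.handle("select"))  # selected Media
```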

References

[1] W. J. Horrey and C. D. Wickens, "In-vehicle glance duration: Distributions, tails, and model of crash risk," Transportation Research Record: Journal of the Transportation Research Board, vol. 2018, pp. 22-28, 2007.

[2] P. Green, "Crashes induced by driver information systems and what can be done to reduce them," in SAE Conference Proceedings, 2000.

[3] Hatfield, W. Wyatt, and J. Shea, "Effects of auditory feedback on movement time in a Fitts task," Journal of Motor Behavior, vol. 42, no. 5, pp. 289-293, 2010.

Less is More: Investigating Abbreviated Text Input via a Game

Researchers: Keith Vertanen, PI, Assistant Professor, Computer Science

Sponsor: Google Faculty Research Award

Amount of Support: $47,219

Abstract: While there have been significant improvements to text input on touchscreen mobile devices, entry rates still fall far short of allowing the free-flowing conversion of thought into text. Such free-flowing writing has been estimated to require input at around 67 words per minute (wpm) [1]. This is far faster than current mobile text entry methods, which have entry rates of 20–40 wpm. The approach we investigate in this project is to accelerate entry by allowing users to enter an abbreviated version of their desired text. Users are allowed to drop letters or entire words, relying on a sentence-based decoder to infer the unabbreviated sentence. This project aims to answer four questions: 1) What sorts of abbreviations, a priori, do users think they should use? 2) How do users change the degree and nature of their abbreviations in response to recognition accuracy? 3) Can we train users to drop parts of their text intelligently in order to aid the decoder? 4) Can we leverage the observed abbreviation behaviors to improve decoder accuracy?

To answer these questions, we adopt a data-driven approach: collecting large amounts of data from many users over long-term use. To this end, we will extend our existing multi-player game Text Blaster [2], deploying the game on the Android app store. Text Blaster's gameplay encourages players to type sentences both quickly and accurately, so players adopting successful abbreviation strategies will gain a competitive advantage. Text Blaster provides a platform to investigate not only abbreviated input but also a host of other open research questions in text entry. Data collected via Text Blaster will be released as a public research resource.
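As a toy illustration of the abbreviated-input idea, the Python sketch below expands letter-dropped tokens by ranking candidate words that contain the typed letters in order. The tiny word-frequency table is a hypothetical stand-in for the sentence-based decoder's language model, not the project's actual decoder.

```python
# Hypothetical unigram frequencies standing in for a real language model.
FREQ = {"the": 0.9, "is": 0.8, "in": 0.7, "this": 0.5,
        "meeting": 0.4, "morning": 0.3, "tomorrow": 0.2}

def is_subsequence(abbrev: str, word: str) -> bool:
    # True if the abbreviation's letters appear in the word, in order.
    it = iter(word)
    return all(ch in it for ch in abbrev)

def expand_word(abbrev: str) -> str:
    # Pick the most frequent word consistent with the abbreviation.
    candidates = [w for w in FREQ if is_subsequence(abbrev, w)]
    return max(candidates, key=FREQ.get) if candidates else abbrev

def expand_sentence(abbreviated: str) -> str:
    return " ".join(expand_word(tok) for tok in abbreviated.split())

print(expand_sentence("th mtg is tmrw"))  # the meeting is tomorrow
```

A real sentence-based decoder would score whole sentences jointly rather than word by word, which is what lets it recover dropped words as well as dropped letters.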

References

[1] Kristensson, P.O. and Vertanen, K. (2014). The Inviscid Text Entry Rate and its Application as a Grand Goal for Mobile Text Entry. In MobileHCI 2014, 335-338.

Development of the Safety Assessment Technique for Take-Over in Automated Vehicles


Researcher: Myounghoon “Philart” Jeon, PI

Sponsor: KATRI (Korea Automotive Testing & Research Institute)

Amount of Support: $450,703

Duration of Support: 4 years

Abstract: The goal of the project is to design and evaluate intelligent auditory interactions for improving safety and user experience in automated vehicles. To this end, we have a four-year phased plan. In year 1, we prepare the driving simulator and the automated driving model. In year 2, we estimate and model driver states in automated vehicles using multiple sensing approaches. In year 3, we design and evaluate discrete auditory alerts for safety purposes, focusing specifically on take-over scenarios. In year 4, we develop real-time sonification systems for overall user experience and produce design guidelines.
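As a simple illustration of the discrete take-over alerts planned for year 3, the sketch below maps the remaining time budget before manual control is required to an escalating auditory alert level. The thresholds and cue names are assumptions for illustration, not the project's validated design.

```python
def takeover_alert(time_budget_s: float) -> dict:
    """Choose an auditory alert level for an automated-to-manual hand-over.

    time_budget_s: estimated seconds until the driver must retake control.
    """
    if time_budget_s > 10.0:
        return {"level": "advisory", "cue": "single_chime", "repeat_s": None}
    if time_budget_s > 5.0:
        return {"level": "caution", "cue": "double_beep", "repeat_s": 2.0}
    # Short budgets get a fast-pulsing cue to convey urgency.
    return {"level": "urgent", "cue": "fast_pulse", "repeat_s": 0.5}

print(takeover_alert(7.0))  # {'level': 'caution', 'cue': 'double_beep', 'repeat_s': 2.0}
```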

Intellectual Merits: The proposed work will significantly advance theory and practice in the design of natural and intuitive communication between a driver (or occupant) and an automated/autonomous vehicle. The results of the proposed research will not only contribute to the design of current vehicles but also guide directions for social interaction in automated vehicles in the near future. We will additionally obtain a theoretical framework to estimate and predict driver state and driving behavior. A more comprehensive understanding of the relationship between multiple sensing (e.g., neurophysiological and behavioral) data and driver states will ultimately be used to construct a more generic driving behavior model capable of combining affective and cognitive elements to positively influence safer driving. The proposed work will specifically contribute to the body of current literature on auditory user interfaces in automated vehicle contexts. Subsequently, this work will significantly advance theory and practice in interactive sonification design, affective computing, and driving psychology.

Broader Impacts: Applying novel in-vehicle auditory interactions to facilitate the take-over process and eco-driving, and to mitigate distraction, has high potential to significantly decrease driving accidents and carbon footprints. Moreover, the entire program of the proposed research can be further developed for other vehicle situations. The proposed work offers an exciting simulated driving platform to integrate research with multidisciplinary STEM (Science, Technology, Engineering & Math) education. The principal investigator of the project will train graduate students, who will mentor undergraduates in the Mind Music Machine Lab at Michigan Tech. Driving simulators will be used for diverse hands-on curricula and courses. The PI will design driving simulation activities to teach courses as part of Michigan Tech's Summer Youth Programs (SYP). Michigan Tech's SYP has a strong longitudinal history of recruiting women, rural students from the Upper Peninsula of Michigan, and inner-city minority students from the Detroit and Grand Rapids areas. The program also has a decade of assessment data demonstrating that SYP alumni are more likely to pursue STEM college degrees. The PI will also work to develop close partnerships and collaborations with the other universities in the consortium and with KATRI in Korea, so that students and researchers can come to Michigan Tech and collaborate on research projects using cutting-edge technologies. Research and education outcomes will be disseminated by the team through a planned workshop, "New opportunities for in-vehicle auditory interactions in highly automated vehicles," at the International Conference on Auditory Display and the AutomotiveUI Conference.

Vertanen Teaches Workshop in Mumbai, India

Keith Vertanen

Keith Vertanen (CS/HCC), associate professor of computer science, traveled to Mumbai, India, in July to co-facilitate a three-day workshop on best practices for writing conference papers. The workshop was presented by ACM SIGCHI and its Asian Development Committee, which works to increase SIGCHI's engagement with researchers and practitioners from Asia. The aim of the workshop was to encourage researchers from Asia to submit papers for the ACM CHI 2021 Conference on Human Factors in Computing Systems.

Workshop Students and Instructors

Vertanen, who is co-chair of the Usability Subcommittee for CHI 2020, presented lectures on paper writing and experimental design to 20 PhD candidates from various universities in India, Sri Lanka, and South Korea. Vertanen also presented a talk on his text entry research and served on an advisory panel that offered feedback to the PhD students on their research in a forum similar to a doctoral consortium. Also co-facilitating the workshop were faculty members from the University of Central Lancashire, UK; KAIST, South Korea; and the Georgia Institute of Technology, Atlanta. Visit https://www.indiahci.org/sigchischool/paperCHI2021/ to learn more about the workshop.

Hembroff Attends KEEN Workshop

Guy Hembroff, associate professor and director of the Medical Informatics graduate program (CC/CyberS), attended the three-day workshop, “Teaching With Impact – Innovating Curriculum With Entrepreneurial Mindset,” in Milwaukee, Wisc., this July.

The workshop, presented by KEEN, a network of engineering faculty working to instill an entrepreneurial mindset in engineering students, introduced faculty participants to the framework of entrepreneurially minded learning (EML), which is centered on curiosity, connections, and creating value. Hembroff and other participants identified opportunities for integrating EML into existing coursework, developed a personal approach to integrating EML within the course design process, and learned how to continually improve their own EML practice.

Visit https://engineeringunleashed.com for more information about KEEN.