NRI: Interactive Robotic Orchestration: Music-based Emotion and Social Interaction Therapy for Children with ASD

Researchers: Myounghoon “Philart” Jeon, PI

Sponsor: National Institutes of Health through the National Robotics Initiative

Amount of Support: $258,362

Abstract: The purpose of this research is to design novel forms of musical interaction, combined with physical activities, for improving the social interactions and emotional responses of children with autism spectrum disorder (ASD). Individuals with ASD commonly show deficits in emotional and social interaction. We propose to address two aspects of this issue: physio-musical stimuli for initiating engagement, and empathizing for deepening interaction, thereby enhancing a child's emotional and social interactions. Children with or without ASD between the ages of 5 and 10 may join this study.

Summary: In the United States, the rapid growth of the population of children with autism spectrum disorder (ASD) has exposed a shortage of accessible therapies for emotion and social interaction. There have been a number of approaches, including several robotic therapeutic systems, but most of these efforts have centered on speech interaction and task-based turn-taking scenarios. Unfortunately, the spectrum of ASD is so diverse that many current approaches, as novel and intriguing as they are, remain insufficient to provide parameterized therapeutic tools and methods.

To overcome this challenge, new techniques must be developed that enable robots to interact autonomously and to effectively stimulate children's emotional and social interactivity. We build on recent studies showing that the neural domains for music, emotion, and motor behavior overlap strongly in the premotor cortex. We propose that musical interaction and activities can provide a new therapeutic domain for effective development of children's emotion and social interaction. Of key importance within the proposed work is providing the robotic system with the capability to monitor the emotional and interactive states of children and to deliver effective musical and behavioral stimuli in response. Major research questions include: (1) What kinds of music-based signals and music-coordinated activities can play effective roles in encouraging emotional interaction and social engagement? (2) How can robotic learning of human behaviors during musical activities increase human-robot interaction and reinforce emotional and social engagement? (3) What metrics can be designed to effectively measure and evaluate changes in emotional and social engagement through musical interaction activities?

Intellectual Merits: Designing and evaluating core techniques to fuse music, emotion, and socio-physical interaction will be invaluable for advancing affective computing, robotics, and engineering psychology theory, as well as for providing guidelines for developing an effective therapeutic robot companion. Through this research, we will identify the musical components most crucial for stimulating children's emotional and interactive intentions, and the most effective ways to correlate those components with motion behaviors to maximize children's social engagement and development. The findings of the proposed work will also contribute to the design of interactive scenarios for natural and creative therapy with an individualized and systematic approach.

Broader Impacts: The successful development of our music-based framework can create a new domain of pediatric therapy and greatly increase the ability of robots to interact with children in a safe and natural manner. The novelty and significance of this approach lie in therapeutic design for children with ASD, but the foundation of our interactive and adaptive reinforcement scheme can be extended to other pediatric populations and developmental studies. We plan to incorporate this knowledge and these approaches into courses on robotics and affective computing. Furthermore, we plan to encapsulate many of these ideas in an "outreach" workshop for underrepresented students. Undergraduate research projects and demonstrations are expected to inspire the next generation of engineers and scientists with a new robot-coexistent world.

Main Results: We have developed an emotion-based robotic motion platform that encapsulates spatial components as well as emotional dimensions in robotic movement behaviors. The Romo robot and DARwIn-OP were used first in our project (Figure 1). The robotic characters and interfaces were also newly designed to accommodate the characteristics of children with ASD while satisfying software design specifications. We have also developed a multi-modal analysis system to monitor and quantify children's physical engagement and emotional reactions through a facial expression recognition app (Figure 2), a Kinect-based movement analysis system, a voice analysis app, and music analysis software. All systems are designed for mobile computing environments, so our therapy sessions can be installed in any place with adequate space and connectivity. We have implemented a sonification server (Figure 3) with 600 emotional sound cues for 30 emotional keywords and a real-time sonification generation platform. We have also carried out an extensive user study with American and Korean participants to validate our sounds and compare cultural differences. The results show commonalities in emotional sound preferences between the two countries, allowing us to narrow down our emotional sound cues for further research. We have created robotic scenarios for a pilot study ("Typical Day of Senses" and "Four Seasons") and will expand the scenarios with diverse genres of music and motion library sets.
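As a rough illustration of how such a sonification server might serve emotion-keyed cues to a robot client over a network, consider the minimal Python sketch below; the keyword-to-cue mapping, file paths, and port are hypothetical placeholders, not the project's actual server.

```python
import socket

# Hypothetical mapping from emotional keywords to pre-rendered sound cue
# files; the real server holds 600 cues covering 30 emotional keywords.
SOUND_CUES = {
    "happy":   ["cues/happy_01.wav", "cues/happy_02.wav"],
    "sad":     ["cues/sad_01.wav"],
    "excited": ["cues/excited_01.wav"],
}

def serve(host="0.0.0.0", port=9000):
    """Answer each UDP request (an emotion keyword) with a cue file path."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        while True:
            data, addr = sock.recvfrom(1024)
            keyword = data.decode("utf-8").strip().lower()
            cues = SOUND_CUES.get(keyword)
            reply = cues[0] if cues else "unknown-keyword"
            sock.sendto(reply.encode("utf-8"), addr)

if __name__ == "__main__":
    serve()
```

A robot client would send the keyword detected by the emotion-recognition pipeline and play back whichever cue file the server returns.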

Publications

Robotic Sonification for Promoting Emotional and Social Interactions of Children with ASD

Musical Robots for Children with ASD Using a Client-Server Architecture

Novel In-vehicle Interaction Design and Evaluation

Researchers: Philart Jeon, PI

Sponsor: Hyundai Motor Company

Amount of Support: $130,236

Duration of Support: 1 year

Purpose and Target: To investigate the effectiveness of an in-vehicle gesture control system and culture-specific sound preferences, Michigan Tech will design a prototype in-air gesture system and auditory displays and conduct successive experiments using a medium-fidelity driving simulator.

Technical Background: Touchscreens in vehicles have increased in popularity in recent years. Touchscreens provide many benefits over traditional analog controls such as buttons and knobs, but they also introduce new problems. Because touchscreens are visual displays, using them demands a relatively large share of visual-attentional resources, and driving is itself a visually demanding task. Competition between driving and touchscreen use for these resources has been shown to increase unsafe driving behaviors and crash risk [1]. Driving researchers have been calling for new infotainment system designs that reduce visual demands on drivers [2]. Recent technological advances have made it possible to develop in-air gesture controls. In-air gesture controls, if supported with appropriate auditory feedback, may limit visual demands and allow drivers to navigate menus and controls without looking away from the road. Research has shown that the accuracy of surface gesture movements can be increased with the addition of auditory feedback [3]. However, many questions surrounding the development of an auditory-supported, in-air gesture-controlled infotainment system remain unanswered: What type of auditory feedback do users prefer? How can auditory feedback be displayed to limit cognitive load? What type of menu offers an easily navigable interface for both beginners and experienced users? More importantly, do these displays reduce eyes-off-road time and the frequency of long off-road glances? Does the system improve driving safety overall when compared to touchscreens or analog interfaces? These are among the many questions that we attempt to address in this research project. Moreover, we want to explore whether there are cultural differences in auditory perception. As a starting point, HMC and MTU will design in-vehicle sounds and conduct an experiment with American populations.
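To make the concept concrete, here is a minimal, stdlib-only Python sketch (an illustration under assumed parameters, not the prototype itself) that maps a normalized in-air hand position to a menu item and renders a short earcon whose pitch rises one semitone per item, so a driver could hear the menu position without glancing at a screen.

```python
import math
import struct
import wave

SAMPLE_RATE = 22050
MENU = ["radio", "media", "phone", "navigation", "climate"]

def write_earcon(path, freq_hz, dur_s=0.12, volume=0.4):
    """Render a short sine-wave earcon to a WAV file (stdlib only)."""
    n_frames = int(SAMPLE_RATE * dur_s)
    frames = b"".join(
        struct.pack("<h", int(volume * 32767 *
                              math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)))
        for i in range(n_frames))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(frames)

def item_for_position(hand_x):
    """Map a normalized in-air hand position (0.0-1.0) to a menu index."""
    return min(int(hand_x * len(MENU)), len(MENU) - 1)

if __name__ == "__main__":
    last_item = None
    # Simulated hand-tracker readings; a real system would stream these
    # from an in-air gesture sensor.
    for hand_x in (0.05, 0.30, 0.55, 0.80, 0.95):
        item = item_for_position(hand_x)
        if item != last_item:
            # One semitone per item: rising pitch tells the driver where
            # they are in the menu without any off-road glance.
            freq = 440.0 * (2 ** (item / 12.0))
            write_earcon(f"earcon_{MENU[item]}.wav", freq)
            print(f"hand_x={hand_x:.2f} -> {MENU[item]} ({freq:.0f} Hz)")
            last_item = item
```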

References

[1] W. Horrey and C. Wickens, "In-vehicle glance duration: distributions, tails, and model of crash risk," Transportation Research Record: Journal of the Transportation Research Board, vol. 2018, pp. 22-28, 2007.

[2] P. Green, "Crashes induced by driver information systems and what can be done to reduce them," in SAE Conference Proceedings, SAE, 2000.

[3] Hatfield, W. Wyatt, and J. Shea, "Effects of auditory feedback on movement time in a Fitts task," Journal of Motor Behavior, vol. 42, no. 5, pp. 289-293, 2010.

Less is More: Investigating Abbreviated Text Input via a Game

Researchers: Keith Vertanen, PI, Assistant Professor, Computer Science

Sponsor: Google Faculty Research Award

Amount of Support: $47,219

Abstract: While there have been significant improvements in text input on touchscreen mobile devices, entry rates still fall far short of allowing the free-flowing conversion of thought into text. Such free-flowing writing has been estimated to require input at around 67 words per minute (wpm) [1]. This is far faster than current mobile text entry methods, which have entry rates of 20–40 wpm. The approach we investigate in this project is to accelerate entry by allowing users to enter an abbreviated version of their desired text. Users may drop letters or entire words, relying on a sentence-based decoder to infer the unabbreviated sentence. This project aims to answer four questions: 1) What sorts of abbreviations, a priori, do users think they should use? 2) How do users change the degree and nature of their abbreviations in response to recognition accuracy? 3) Can we train users to drop parts of their text intelligently in order to aid the decoder? 4) Can we leverage the abbreviation behaviors observed to improve decoder accuracy?

To answer these questions, we adopt a data-driven approach: collecting a large amount of data, from many users, over long-term use. To this end, we will extend our existing multi-player game Text Blaster [2], deploying the game on the Android app store. Text Blaster's gameplay encourages players to type sentences both quickly and accurately, and players adopting successful abbreviation strategies gain a competitive advantage. Text Blaster provides a platform to investigate not only abbreviated input but also a host of other open research questions in text entry. Data collected via Text Blaster will be released as a public research resource.
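As a toy illustration of the decoding idea (with an invented vocabulary and counts; the actual sentence-based decoder is far more sophisticated), the sketch below treats each abbreviated token as a subsequence of its intended word and ranks candidate sentences with a unigram language model and a small beam.

```python
# Toy abbreviation expander: each abbreviated token must be a subsequence
# of the word it stands for; candidate expansions are ranked by a tiny
# unigram language model. Vocabulary and counts are illustrative only.
UNIGRAMS = {
    "can": 900, "cat": 120, "you": 950, "your": 400,
    "come": 300, "came": 150, "home": 350, "house": 200,
}

def is_subsequence(abbrev, word):
    """True if abbrev can be obtained from word by deleting letters."""
    it = iter(word)
    return all(ch in it for ch in abbrev)

def expand(sentence, beam_width=5):
    """Beam search over per-token candidates, scored by unigram product."""
    beams = [([], 1.0)]
    for token in sentence.lower().split():
        candidates = [(w, n) for w, n in UNIGRAMS.items()
                      if is_subsequence(token, w)] or [(token, 1)]
        # Keep only the most probable partial sentences.
        beams = sorted(
            ((words + [w], score * n)
             for words, score in beams for w, n in candidates),
            key=lambda b: -b[1])[:beam_width]
    return " ".join(beams[0][0])

if __name__ == "__main__":
    print(expand("cn u cm hm"))  # -> "can you come home"
```

A real decoder would additionally model which letters users tend to drop and score candidates with a full sentence-level language model rather than unigram counts.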

References

[1] Kristensson, P.O. and Vertanen, K. (2014). The Inviscid Text Entry Rate and its Application as a Grand Goal for Mobile Text Entry. In MobileHCI 2014, 335-338.

Development of the Safety Assessment Technique for Take‐Over in Automated Vehicles


Researcher: Myounghoon “Philart” Jeon, PI

Sponsor: KATRI (Korea Automotive Testing & Research Institute)

Amount of Support: $450,703

Duration of Support: 4 years

Abstract: The goal of the project is to design and evaluate intelligent auditory interactions for improving safety and user experience in automated vehicles. To this end, we have a four-year phased plan. In year 1, we prepare the driving simulator for the automated driving model. In year 2, we estimate and model driver states in automated vehicles using multiple sensing approaches. In year 3, we design and evaluate discrete auditory alerts for safety purposes, focusing specifically on take-over scenarios. In year 4, we develop real-time sonification systems for overall user experience and produce design guidelines.
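To make the year-3 deliverable concrete, the sketch below shows one plausible shape for a discrete take-over alert policy; the time-budget thresholds, cue names, and driver-state flag are hypothetical placeholders for illustration, not the project's validated design.

```python
# Hypothetical escalation policy for an auditory take-over request (TOR).
# Thresholds and cue names are illustrative placeholders only.
def select_tor_cue(time_budget_s, driver_distracted):
    """Pick an auditory cue level from the remaining time budget.

    time_budget_s: predicted seconds until manual control is required.
    driver_distracted: flag from a (hypothetical) driver-state monitor.
    """
    if time_budget_s > 10:
        level = "gentle_earcon"   # early, low-urgency notification
    elif time_budget_s > 5:
        level = "speech_prompt"   # explicit "please take over" message
    else:
        level = "urgent_alarm"    # high-urgency, repeating alarm
    if driver_distracted and level == "gentle_earcon":
        level = "speech_prompt"   # escalate one step for a distracted driver
    return level

if __name__ == "__main__":
    for t, d in [(12, False), (12, True), (8, False), (3, False)]:
        print(f"budget={t:>2}s distracted={d}: {select_tor_cue(t, d)}")
```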

Intellectual Merits: The proposed work will significantly advance theory and practice in the design of natural and intuitive communication between a driver (or occupant) and an automated/autonomous vehicle. The results of the proposed research will not only contribute to the design of current vehicles, but also guide directions for social interaction in automated vehicles in the near future. We will additionally obtain a theoretical framework to estimate and predict driver state and driving behavior. A more comprehensive understanding of the relationship between multiple sensing (e.g., neurophysiological and behavioral) data and driver states will ultimately be used to construct a more generic driving behavior model capable of combining affective and cognitive elements to positively influence safer driving. The proposed work will specifically contribute to the current literature on auditory user interfaces in automated vehicle contexts. Consequently, this work will significantly advance theory and practice in interactive sonification design, affective computing, and driving psychology.

Broader Impacts: Applying novel in-vehicle auditory interactions to facilitate the take-over process and eco-driving, and to mitigate distraction, has high potential to significantly decrease driving accidents and carbon footprints. Moreover, the entire program of the proposed research can be further developed for other vehicle situations. The proposed work offers an exciting simulated driving platform to integrate research with multidisciplinary STEM (Science, Technology, Engineering & Math) education. The principal investigator will train graduate students, who will mentor undergraduates in the Mind Music Machine Lab at Michigan Tech. Driving simulators will be used for diverse hands-on curricula and courses. The PI will design driving simulation activities to teach courses as part of Michigan Tech's Summer Youth Programs (SYP). Michigan Tech's SYP has a strong longitudinal history of recruiting women, rural students from the Upper Peninsula of Michigan, and inner-city minority students from the Detroit and Grand Rapids areas. It also has a decade of assessment data demonstrating that SYP alumni are more likely to pursue STEM college degrees. The PI will also develop close partnerships and collaborations with other universities in the consortium and with KATRI in Korea. Students and researchers can come to Michigan Tech and conduct research projects together using cutting-edge technologies. Research and education outcomes will be disseminated by the team through a planned workshop on "New Opportunities for In-vehicle Auditory Interactions in Highly Automated Vehicles" at the International Conference on Auditory Display and the AutomotiveUI conference.

Vertanen Teaches Workshop in Mumbai, India


Keith Vertanen (CS/HCC), associate professor of computer science, traveled to Mumbai, India, in July to co-facilitate a three-day workshop on best practices for writing conference papers. The workshop was presented by ACM SIGCHI and its Asian Development Committee, which works to increase SIGCHI's engagement with researchers and practitioners from Asia. The aim of the workshop was to encourage researchers from Asia to submit papers for the ACM CHI 2021 Conference on Human Factors in Computing Systems.


Vertanen, who is co-chair of the Usability Subcommittee for CHI 2020, presented lectures on paper writing and experimental design to 20 PhD candidates from universities in India, Sri Lanka, and South Korea. Vertanen also gave a talk on his text entry research and served on an advisory panel that offered feedback to the PhD students on their research in a forum similar to a doctoral consortium. Also co-facilitating the workshop were faculty members from the University of Central Lancashire, UK; KAIST, South Korea; and the Georgia Institute of Technology, Atlanta. Visit https://www.indiahci.org/sigchischool/paperCHI2021/ to learn more about the workshop.

Hembroff Attends KEEN Workshop

Guy Hembroff, associate professor and director of the Medical Informatics graduate program (CC/CyberS), attended the three-day workshop, “Teaching With Impact – Innovating Curriculum With Entrepreneurial Mindset,” in Milwaukee, Wisc., this July.

The workshop, presented by KEEN, a network of engineering faculty working to instill an entrepreneurial mindset in student engineers, introduced faculty participants to the framework of entrepreneurially minded learning (EML), which is centered on curiosity, connections, and creating value. Hembroff and other participants identified opportunities for integrating EML into existing coursework, developed a personal approach to integrating EML within the course design process, and learned how to continually improve their own EML practice.

Visit https://engineeringunleashed.com for more information about KEEN.

Susanta Ghosh is PI on $170K NSF Grant


Susanta Ghosh (ICC-DataS/MEEM/MuSTI) is Principal Investigator on a project that has received a $170,604 research and development grant from the National Science Foundation. The project is titled “EAGER: An Atomistic-Continuum Formulation for the Mechanics of Monolayer Transition Metal Dichalcogenides.” This is a potential 19-month project.

Dr. Ghosh is an assistant professor of Mechanical Engineering-Engineering Mechanics at Michigan Tech. Before joining the Michigan Tech College of Engineering, Dr. Ghosh was an associate in research in the Pratt School of Engineering at Duke University; a postdoctoral scholar in the departments of Aerospace Engineering and Materials Science & Engineering at the University of Michigan, Ann Arbor; and a research fellow at the Technical University of Catalonia, Barcelona, Spain. His M.S. and Ph.D. degrees are from the Indian Institute of Science (IISc), Bangalore. His research interests include multi-scale solid mechanics, atomistic modeling, ultrasound elastography, inverse problems, and computational science.

Abstract: Two-dimensional materials are made of chemical elements or compounds of elements that maintain a single-atomic-layer crystalline structure. Two-dimensional materials, especially transition metal dichalcogenides (TMDs), have shown tremendous promise for being transformed into advanced material systems and devices, e.g., field-effect transistors, solar cells, photodetectors, fuel cells, sensors, and transparent flexible displays. To achieve broader use of TMDs across cutting-edge applications, complex deformations of large-area TMDs must be better understood. Large-area TMDs can be simulated and analyzed through predictive modeling, a capability that is currently lacking. This EArly-concept Grant for Exploratory Research (EAGER) award supports fundamental research that overcomes current challenges in large-scale atomistic modeling to obtain an efficient yet reliable continuum model for single-layer TMDs containing billions of atoms. The model will be translational and will contribute to the development of a wide range of applications in the nanotechnology, electronics, and alternative energy industries. The award will further support the development of an advanced graduate-level course on multiscale modeling and the organization of symposia on the mechanics of two-dimensional materials at two international conferences.

Experimental samples of TMDs contain billions of atoms and hence are inaccessible to state-of-the-art molecular dynamics simulations. Moreover, existing crystal elastic models for surfaces cannot be applied to multi-atom-thick 2D TMDs due to the presence of interatomic bonds across the atomic surfaces. The crystal elastic model aims to solve this problem by projecting all interatomic bonds onto the mid-surface to track their deformations; the actual deformed bonds will therefore be computed from the deformations of the mid-surface. Additionally, a technique will be derived to incorporate the effects of curvature and stretching of TMDs on their interactions with substrates. The model will be exercised to generate insights into mechanical instabilities and the role of substrate interactions in them. The coarse-grained model will overcome the computational bottleneck of molecular dynamics models to simulate TMD samples comprising billions of atoms. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
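As a hedged reconstruction of the kinematic idea (our illustration, not necessarily the project's exact formulation), projecting bonds onto the mid-surface lets each deformed bond be approximated from the mid-surface deformation map $\varphi$ alone. Writing $\mathbf{F} = \partial\varphi/\partial\mathbf{X}$ for the mid-surface deformation gradient, a second-order Taylor expansion gives, for a projected reference bond vector $\mathbf{A}$,

\[
\mathbf{a} \;=\; \varphi(\mathbf{X}+\mathbf{A}) - \varphi(\mathbf{X})
\;\approx\; \mathbf{F}\,\mathbf{A} \;+\; \tfrac{1}{2}\,\nabla\mathbf{F} : (\mathbf{A}\otimes\mathbf{A}),
\]

so a coarse-grained energy density of the generic bond-summed form $W = \frac{1}{S_0}\sum_i V(|\mathbf{a}_i|)$ (with $S_0$ a reference area and $V$ an interatomic bond potential) can be evaluated from mid-surface quantities alone; the second-order $\nabla\mathbf{F}$ term carries the curvature effects that a purely first-order Cauchy-Born rule would miss.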

Bo Chen is PI of $200K NSF Research and Development Grant

Bo Chen (CS/CyberS) is Principal Investigator on a project that has received a $199,975 research and development grant from the National Science Foundation. The project is titled “EAGER: Enabling Secure Data Recovery for Mobile Devices Against Malicious Attacks.” This is a potential two-year project.

Abstract: Mainstream mobile computing devices such as smartphones and tablets currently rely on remote backups for data recovery upon failure. For example, an iPhone periodically stores a recent snapshot to iCloud and can be restored from it when needed. Such a commonly used "off-device" backup mechanism, however, suffers from a fundamental limitation: the backup on the remote server is not always synchronized with the data stored on the local device. Therefore, when a mobile device suffers a malware attack, it can only be restored to a historical state using the remote backup, rather than to the exact state right before the attack occurred. Data are extremely valuable for both organizations and individuals, so after a malware attack it is of paramount importance to restore the data to the exact point (i.e., the corruption point) right before they were corrupted. This, however, is a challenging problem. The project addresses this problem for mobile devices, and its outcome could benefit billions of mobile users.

A primary goal of the project is to enable recovery of mobile devices to the corruption point after malware attacks. The malware under consideration is OS-level malware, which can compromise the OS and obtain OS-level privileges. To achieve this goal, the project combines traditional off-device data recovery with a novel in-device data recovery. Specifically, the following research activities are undertaken: 1) designing a novel malware detector that runs in the flash translation layer (FTL), a firmware layer sitting between the OS and the flash memory hardware; the FTL-based malware detector ensures that data being committed to the remote server cannot be tampered with by the OS-level malware; 2) developing a novel approach that ensures the OS-level malware cannot corrupt data changes (i.e., the delta) that have not yet been committed to the remote server, achieved by hiding the delta in the flash memory using the flash storage's special hardware features, namely out-of-place updates and strong physical isolation; and 3) developing a user-friendly approach that allows users to conveniently and efficiently retrieve the delta hidden in the flash memory for data recovery after malware attacks.
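The out-of-place-update idea behind activity 2 can be illustrated with a toy model: the conceptual Python sketch below never overwrites a physical flash page, so superseded versions survive beneath the OS's view and can be replayed for recovery. All structures are invented for illustration; this is not the project's firmware design.

```python
# Toy flash translation layer (FTL) model illustrating out-of-place
# updates: physical pages are never overwritten, so superseded versions
# remain available for recovery. Purely conceptual; no garbage collection.
class ToyFTL:
    def __init__(self, num_pages=16):
        self.flash = [None] * num_pages   # physical pages
        self.l2p = {}                     # logical -> physical mapping
        self.history = {}                 # logical -> [old physical pages]
        self.next_free = 0

    def write(self, logical, data):
        """A write goes to a fresh physical page; the old page is retained."""
        phys = self.next_free
        self.next_free += 1
        self.flash[phys] = data
        if logical in self.l2p:
            self.history.setdefault(logical, []).append(self.l2p[logical])
        self.l2p[logical] = phys

    def read(self, logical):
        return self.flash[self.l2p[logical]]

    def recover(self, logical, versions_back=1):
        """Roll a logical page back to a retained pre-attack version."""
        self.l2p[logical] = self.history[logical][-versions_back]

if __name__ == "__main__":
    ftl = ToyFTL()
    ftl.write(0, b"good data")
    ftl.write(0, b"corrupted by malware")   # OS-level malware overwrite
    ftl.recover(0)                          # FTL still holds the old page
    print(ftl.read(0))                      # b'good data'
```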

An Unscripted article about related research is available at https://www.mtu.edu/unscripted/stories/2018/march/how-to-speed-up-bare-metal-malware-analysis-and-better-protect-mobile-devices.html.

Ali Ebnenasir is Co-Author of Publication in ACM Transactions on Computational Logic


An article co-authored by Ali Ebnenasir (SAS/CS) and Alex Klinkhamer, “Verification of Livelock-Freedom and Self-Stabilization on Parameterized Rings,” was recently published in ACM Transactions on Computational Logic.

Abstract: This article investigates the verification of livelock-freedom and self-stabilization on parameterized rings consisting of symmetric, constant-space, deterministic, and self-disabling processes. The results of this article have a significant impact on several fields, including scalable distributed systems, resilient and self-* systems, and the verification of parameterized systems. First, we identify necessary and sufficient local conditions for the existence of global livelocks in parameterized unidirectional rings with an unbounded (but finite) number of processes under interleaving semantics. Using a reduction from the periodic domino problem, we show that, in general, verifying livelock-freedom of parameterized unidirectional rings is undecidable (specifically, Π₁⁰-complete) even for constant-space, deterministic, and self-disabling processes. This result implies that verifying self-stabilization for parameterized rings of self-disabling processes is also undecidable. We also show that verifying livelock-freedom and self-stabilization remains undecidable under (1) synchronous execution semantics, (2) the FIFO consistency model, and (3) any scheduling policy. We then present a new scope-based method for detecting and constructing livelocks in parameterized rings. The semi-algorithm behind our scope-based verification is based on a novel paradigm for the detection of livelocks that totally circumvents state-space exploration. Our experimental results with an implementation of the proposed semi-algorithm are very promising, as we have found livelocks in parameterized rings within a few microseconds on a regular laptop. The results of this article have significant implications for scalable distributed systems with cyclic topologies.
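For readers new to the setting, a classic concrete instance of a self-stabilizing unidirectional ring is Dijkstra's K-state token protocol. The short simulation below (our illustrative sketch, not code from the article) shows an arbitrarily initialized ring converging to exactly one privileged process when K exceeds the ring size.

```python
import random

# Dijkstra's K-state self-stabilizing token ring (illustrative sketch).
# Process 0 is privileged when x[0] == x[n-1]; process i > 0 is
# privileged when x[i] != x[i-1]. With K > n, any initial state
# converges to exactly one privilege circulating around the ring.
def privileged(x, i):
    return x[i] == x[i - 1] if i == 0 else x[i] != x[i - 1]

def step(x, k):
    """Fire one arbitrarily chosen privileged process (interleaving)."""
    i = random.choice([j for j in range(len(x)) if privileged(x, j)])
    x[i] = (x[i] + 1) % k if i == 0 else x[i - 1]

if __name__ == "__main__":
    n, k = 5, 6                                   # K > n guarantees stabilization
    x = [random.randrange(k) for _ in range(n)]   # arbitrary corrupted state
    for _ in range(500):                          # well beyond the O(n^2) bound
        step(x, k)
    print(sum(privileged(x, i) for i in range(n)))  # -> 1
```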

https://dl.acm.org/citation.cfm?id=3326456&dl=ACM&coll=DL

doi: 10.1145/3326456