


Weihua Zhou to Present Invited Talk at 2019 American Society of Nuclear Cardiology Conference


Weihua Zhou (DataS), assistant professor of health informatics, will present an invited talk and give a poster presentation at the 2019 American Society of Nuclear Cardiology conference (ASNC), September 12-15, in Chicago, IL.

His talk, “Machine Learning for Automatic LV Segmentation and Volume Quantification,” will discuss the results of his recent research for the American Heart Association, “A new image-guided approach for cardiac resynchronization therapy.” (Project Number: 17AIREA33700016, PI: Weihua Zhou).


Benjamin Ong Awarded $25K for Parallel-in-Time Integration Workshop


Benjamin Ong (Math/ICC-DataS) is Principal Investigator on a one-year project that has received a $25,185 other sponsored activities grant from the National Science Foundation. The project is titled “Ninth Workshop on Parallel-In-Time Integration.”

The Ninth Workshop on Parallel-in-time Integration will take place June 8–12, 2020, at Michigan Tech. Ong (chair) and Jacob Schroder, assistant professor in the Department of Mathematics and Statistics at the University of New Mexico, are heading the organizing committee for the workshop. Travel funding for early career researchers will be available. Application details and deadlines will be posted shortly on the event’s website at conferences.math.mtu.

Contact information:
ongbw@mtu.edu
906-487-3367

Invited speakers:

  • Professor Matthias Bolten, Bergische Universität Wuppertal
  • Professor Laurence Halpern, Université Paris 13
  • Professor George Karniadakis, Brown University
  • Professor Ulrich Langer, Johannes Kepler University Linz
  • Dr. Carol Woodward, Lawrence Livermore National Laboratory

The workshop is supported by:

  • Michigan Technological University, Department of Mathematical Sciences
  • Michigan Technological University, College of Science and Arts
  • Lawrence Livermore National Laboratory
  • Jülich Supercomputing Centre
  • FoMICS: The Swiss Graduate School in Computational Science

About the Workshop on Parallel-in-time Integration (from https://parallel-in-time.org/ and https://parallel-in-time.org/events/9th-pint-workshop/)

Computer models and simulations play a central role in the study of complex systems in engineering, life sciences, medicine, chemistry, and physics. Utilizing modern supercomputers to run models and simulations allows for experimentation in virtual laboratories, thus saving both time and resources. Although the next generation of supercomputers will contain an unprecedented number of processors, this will not automatically increase the speed of running simulations. New mathematical algorithms are needed that can fully harness the processing potential of these new systems. Parallel-in-time methods, the subject of this workshop, are timely and necessary, as they extend existing computer models to these next generation machines by adding a new dimension of scalability. Thus, the use of parallel-in-time methods will provide dramatically faster simulations in many important areas, such as biomedical applications (e.g., heart modeling), computational fluid dynamics (e.g., aerodynamics and weather prediction), and machine learning. Computational and applied mathematics plays a foundational role in this projected advancement.

The primary focus of the proposed parallel-in-time workshop is to disseminate cutting-edge research and facilitate scientific discussions on the field of parallel time integration methods. This workshop aligns with the National Strategic Computing Initiative (NSCI) objective: “increase coherence between technology for modeling/simulation and data analytics”. The need for parallel time integration is being driven by microprocessor trends, where future speedups for computational simulations will come through using increasing numbers of cores and not through faster clock speeds. Thus, as spatial parallelism techniques saturate, parallelization in the time direction offers the best avenue for leveraging next-generation supercomputers with billions of processors. Regarding the mathematical treatment of parallel time integrators, one must use advanced methodologies from the theory of partial differential equations in a functional analytic setting, numerical discretization and integration, convergence analyses of iterative methods, and the development and implementation of new parallel algorithms. Thus, the workshop will bring together an interdisciplinary group of experts spanning these areas.
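The workshop program itself is described above, but as a hedged illustration of what “parallel in time” means, here is a minimal sketch of the classic Parareal iteration on a toy problem. The propagators, step counts, and the test equation dy/dt = -y are illustrative assumptions, not part of the workshop program:

```python
import numpy as np

def coarse(y, t0, t1):
    # Cheap propagator: a single explicit Euler step for dy/dt = -y (toy problem).
    return y + (t1 - t0) * (-y)

def fine(y, t0, t1, substeps=100):
    # Accurate propagator: many small Euler steps over the same time slice.
    h = (t1 - t0) / substeps
    for _ in range(substeps):
        y = y + h * (-y)
    return y

def parareal(y0, t_grid, iterations=5):
    n = len(t_grid) - 1
    # Initial serial sweep with the cheap coarse propagator.
    y = [y0]
    for j in range(n):
        y.append(coarse(y[j], t_grid[j], t_grid[j + 1]))
    for _ in range(iterations):
        # The expensive fine solves on each slice are independent of one
        # another, so in a real code they run concurrently -- this is the
        # "parallel in time" dimension of scalability.
        f = [fine(y[j], t_grid[j], t_grid[j + 1]) for j in range(n)]
        g_old = [coarse(y[j], t_grid[j], t_grid[j + 1]) for j in range(n)]
        y_new = [y0]
        for j in range(n):
            # Parareal correction: new coarse prediction plus the
            # (fine - old coarse) defect from the previous iterate.
            y_new.append(coarse(y_new[j], t_grid[j], t_grid[j + 1])
                         + f[j] - g_old[j])
        y = y_new
    return y

t = np.linspace(0.0, 1.0, 11)
approx = parareal(1.0, t)[-1]  # should approach exp(-1) as iterations grow
```

The serial coarse sweep is cheap; the fine solves, which dominate the cost, parallelize across time slices, which is exactly the extra axis of concurrency the workshop abstract describes.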


NRI: Interactive Robotic Orchestration: Music-based Emotion and Social Interaction Therapy for Children with ASD

Researcher: Myounghoon “Philart” Jeon, PI

Sponsor: National Institutes of Health through the National Robotics Initiative

Amount of Support: $258,362

Abstract: The purpose of the research is to design novel forms of musical interaction combined with physical activities for improving the social interactions and emotional responses of children with autism spectrum disorder (ASD). Individuals with ASD commonly show deficits in emotional and social interaction. We propose to address two aspects of this issue: physio-musical stimuli for initiating engagement, and empathizing for deepening interaction, thus enhancing a child’s emotional and social interactions. People with or without ASD between the ages of 5 and 10 may join this study.

Summary: In the United States, the rapid growth of the population of children with autism spectrum disorder (ASD) has exposed a shortage of therapeutic options addressing emotion and social interaction for these children. A number of approaches have been tried, including several robotic therapeutic systems, but most of these efforts have centered on speech interaction and task-based turn-taking scenarios. Unfortunately, the spectrum of ASD is so diverse that many current approaches, as novel and intriguing as they are, remain insufficient to provide parameterized therapeutic tools and approaches.

To overcome this challenge, new techniques must be developed that allow robots to interact autonomously with children and effectively stimulate their emotional and social interactivity. We build on recent studies revealing strong overlap, in the premotor cortex, among the neural domains for music, emotion, and motor behavior. We propose that musical interaction and activities can provide a new therapeutic domain for effectively developing children’s emotion and social interaction. Of key importance within the proposed work is providing capabilities for the robotic system to monitor the emotional and interactive states of children and to deliver effective musical and behavioral stimuli in response. Major research questions include: (1) What kinds of music-based signals and music-coordinated activities can play effective roles in encouraging emotional interaction and social engagement? (2) How can robotic learning of human behaviors during musical activities increase human-robot interaction and reinforce emotional and social engagement? (3) What metrics can effectively measure and evaluate changes in emotional and social engagement through musical interaction activities?

Intellectual Merits: Designing and evaluating core techniques to fuse music, emotion, and socio-physical interaction should be invaluable to advancing affective computing, robotics, and engineering psychology theory as well as providing guidelines in developing an effective therapeutic robot companion. With this research endeavor, we will identify the most crucial features of musical components in stimulating emotional and interactive intentions of children and the most effective way to correlate those musical components with motion behaviors to maximize the children’s social engagement and development. The findings of the proposed work will also contribute to the design of interactive scenarios for natural and creative therapy with an individualized and systematic approach.

Broader Impacts: The successful development of our framework with the music-based approach can create a new domain of pediatric therapy and greatly increase the ability of robots to interact with children in a safe and natural manner. The novelty and significance of this approach relate to therapeutic design for children with ASD, but the foundation of our interactive and adaptive reinforcement scheme can be extended to other pediatric populations and developmental studies. We plan to incorporate this knowledge and these approaches into courses designed for robotics and affective computing. Furthermore, we plan to encapsulate many of these ideas into an “outreach” workshop for underrepresented students. Undergraduate research projects and demonstrations are expected to inspire the next generation of engineers and scientists for a new robot-coexistent world.

Main Results: We have developed an emotion-based robotic motion platform that encapsulates spatial components as well as emotional dimensions in robotic movement behaviors. The Romo robot and DARwIn-OP were used first in our project (Figure 1). The robotic characters and interfaces were also newly designed to accommodate the characteristics of children with ASD while satisfying software design specifications. We have also developed a multi-modal analysis system to monitor and quantify children’s physical engagement and emotional reactions through a facial expression recognition app (Figure 2), a Kinect-based movement analysis system, a voice analysis app, and music analysis software. All systems are designed for mobile computing environments, so our therapy sessions can be installed anywhere with adequate space and connectivity. We have implemented a sonification server (Figure 3) with 600 emotional sound cues for 30 emotional keywords and a real-time sonification generation platform. We have also carried out an extensive user study with American and Korean participants to validate our sounds and compare cultural differences. The results show some commonalities in emotional sound preferences between the two countries, allowing us to narrow down our emotional sound cues for further research. We have created robotic scenarios for a pilot study (Typical Day of Senses & Four Seasons), and will expand the scenarios with diverse genres of music and motion library sets.

Publications

Robotic Sonification for Promoting Emotional and Social Interactions of Children with ASD

Musical Robots for Children with ASD Using a Client-Server Architecture


Novel In-vehicle Interaction Design and Evaluation

Researcher: Philart Jeon, PI

Sponsor: Hyundai Motor Company

Amount of Support: $130,236

Duration of Support: 1 year

Purpose and Target: To investigate the effectiveness of an in-vehicle gesture control system and culture-specific sound preferences, Michigan Tech will design a prototype in-air gesture system and auditory displays, and conduct successive experiments using a medium-fidelity driving simulator.

Technical Background: Touchscreens in vehicles have increased in popularity in recent years. Touchscreens provide many benefits over traditional analog controls like buttons and knobs, but they also introduce new problems. Because touchscreens are visual displays, using them requires a relatively large share of visual-attentional resources, and driving is itself a visually demanding task. Competition between driving and touchscreen use for visual-attentional resources has been shown to increase unsafe driving behaviors and crash risk [1]. Driving researchers have been calling for new infotainment system designs that reduce visual demands on drivers [2]. Recent technological advances have made it possible to develop in-air gesture controls. In-air gesture controls, if supported with appropriate auditory feedback, may limit visual demands and allow drivers to navigate menus and controls without looking away from the road. Research has shown that the accuracy of surface gesture movements can be increased with the addition of auditory feedback [3]. However, many unanswered questions surround the development of an auditory-supported, in-air gesture-controlled infotainment system: What type of auditory feedback do users prefer? How can auditory feedback be displayed to limit cognitive load? What type of menu offers an easily navigable interface for both beginners and experienced users? More importantly, do these displays reduce eyes-off-road time and the frequency of long off-road glances? Does the system improve driving safety overall when compared to touchscreens or analog interfaces? These are among the many questions that we attempt to address in this research project. Moreover, we want to explore whether there are cultural differences in auditory perception. As a starting point, HMC and MTU will design in-vehicle sounds and conduct an experiment with American populations.

References

[1] W. Horrey and C. Wickens, “In-vehicle glance duration: distributions, tails, and model of crash risk,” Transportation Research Record: Journal of the Transportation Research Board, vol. 2018, pp. 22-28, 2007.
[2] P. Green, “Crashes induced by driver information systems and what can be done to reduce them,” in SAE Conf. Proc., SAE, 2000.
[3] Hatfield, W. Wyatt, and J. Shea, “Effects of auditory feedback on movement time in a Fitts task,” Journal of Motor Behavior, vol. 42, no. 5, pp. 289-293, 2010.

Less is More: Investigating Abbreviated Text Input via a Game

Researchers: Keith Vertanen, PI, Assistant Professor, Computer Science

Sponsor: Google Faculty Research Award

Amount of Support: $47,219

Abstract: While there have been significant improvements to text input on touchscreen mobile devices, entry rates still fall far short of allowing the free-flowing conversion of thought into text. Such free-flowing writing has been estimated to require input at around 67 words-per-minute (wpm) [1]. This is far faster than current mobile text entry methods that have entry rates of 20–40 wpm. The approach we investigate in this project is to accelerate entry by allowing users to enter an abbreviated version of their desired text. Users are allowed to drop letters or entire words, relying on a sentence-based decoder to infer the unabbreviated sentence. This project aims to answer four questions: 1) What sorts of abbreviations, a priori, do users think they should use? 2) How do users change the degree and nature of their abbreviations in response to recognition accuracy? 3) Can we train users to drop parts of their text intelligently in order to aid the decoder? 4) Can we leverage the abbreviation behaviors observed to improve decoder accuracy?
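As a hedged sketch of the core idea (not the project’s actual decoder, which is sentence-based and trained on data), an abbreviation whose letters were dropped can be treated as a subsequence of the intended text, with a language model choosing among the candidate expansions that match. The candidate sentences and probabilities below are hypothetical stand-ins:

```python
def is_abbreviation(abbrev, sentence):
    """True if abbrev's characters appear in order in sentence
    (i.e., abbrev can be produced by dropping letters)."""
    it = iter(sentence.replace(" ", ""))
    return all(ch in it for ch in abbrev.replace(" ", ""))

# Hypothetical mini corpus with made-up probabilities, standing in
# for a real sentence-level language model.
CANDIDATES = {
    "can you come tomorrow": 0.6,
    "can you call tomorrow": 0.3,
    "cancel your car tomorrow": 0.1,
}

def decode(abbrev):
    # Keep only candidates the abbreviation could have come from,
    # then return the most probable one.
    viable = [(p, s) for s, p in CANDIDATES.items() if is_abbreviation(abbrev, s)]
    return max(viable)[1] if viable else None
```

For example, `decode("cn u cm tmrw")` selects "can you come tomorrow", since the other candidates cannot produce that letter sequence by dropping characters. A real decoder would additionally model *which* letters users tend to drop, which is exactly what research questions 2–4 above are probing.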

To answer these questions, we adopt a data-driven approach; collecting lots of data, from many users, over long-term use. To this end, we will extend our existing multi-player game Text Blaster [2], deploying the game on the Android app store. Text Blaster’s game play encourages players to type sentences both quickly and accurately. Players adopting successful abbreviation strategies will gain a competitive advantage. Text Blaster provides a platform to not only investigate abbreviated input but also a host of other open research questions in text entry. Data collected via Text Blaster will be released as a public research resource.

References

[1] Kristensson, P.O. and Vertanen, K. (2014). The Inviscid Text Entry Rate and its Application as a Grand Goal for Mobile Text Entry. In MobileHCI 2014, 335-338.


Development of the Safety Assessment Technique for Take‐Over in Automated Vehicles


Researcher: Myounghoon “Philart” Jeon, PI

Sponsor: KATRI (Korea Automotive Testing & Research Institute)

Amount of Support: $450,703

Duration of Support: 4 years

Abstract: The goal of the project is to design and evaluate intelligent auditory interactions for improving safety and user experience in automated vehicles. To this end, we have phased plans spanning four years. In year 1, we prepare the driving simulator for the automated driving model. In year 2, we estimate and model driver states in automated vehicles using multiple sensing approaches. In year 3, we design and evaluate discrete auditory alerts for safety purposes, focusing specifically on take-over scenarios. In year 4, we develop real-time sonification systems for overall user experience and produce guidelines.

Intellectual Merits: The proposed work will significantly advance theory and practice in the design of natural and intuitive communication between a driver (or occupant) and an automated/autonomous vehicle. The results of the proposed research will not only contribute to the design of current vehicles, but also guide directions for social interaction in automated vehicles in the near future. We will additionally obtain a theoretical framework to estimate and predict driver state and driving behavior. A more comprehensive understanding of the relationship between multiple sensing (e.g., neurophysiological and behavioral) data and driver states will ultimately be used to construct a more generic driving behavior model capable of combining affective and cognitive elements to positively influence safer driving. The proposed work will specifically contribute to the current literature on auditory user interfaces in automated vehicle contexts. Subsequently, this work will significantly advance theory and practice in interactive sonification design, affective computing, and driving psychology.

Broader Impacts: Applying novel in-vehicle auditory interactions to facilitate the take-over process and eco-driving, and to mitigate distractions, has high potential to significantly decrease driving accidents and carbon footprints. Moreover, the entire program of the proposed research can be further developed for other vehicle situations. The proposed work offers an exciting simulated driving platform to integrate research with multidisciplinary STEM (Science, Technology, Engineering & Math) education. The principal investigator of the project will train graduate students, who will mentor undergraduates in the Mind Music Machine Lab at Michigan Tech. Driving simulators will be used for diverse hands-on curricula and courses. The PI will design driving simulation activities to teach courses as part of Michigan Tech’s Summer Youth Programs (SYP). Michigan Tech’s SYP has a strong longitudinal history of recruiting women, rural students from the Upper Peninsula of Michigan, and inner city minority students from the Detroit and Grand Rapids areas. They also have a decade of assessment data demonstrating that SYP alumni are more likely to pursue STEM college degrees. The PI will also work to develop close partnerships and collaborations with other universities of the consortium and KATRI in Korea. Students and researchers can come to Michigan Tech to conduct and experience research projects together using cutting-edge technologies. Research and education outcomes will be disseminated by the team through the planned workshop on “New opportunities for in-vehicle auditory interactions in highly automated vehicles” at the International Conference on Auditory Display and the AutomotiveUI Conference.


Hembroff Attends KEEN Workshop

Guy Hembroff, associate professor and director of the Medical Informatics graduate program (CC/CyberS), attended the three-day workshop, “Teaching With Impact – Innovating Curriculum With Entrepreneurial Mindset,” in Milwaukee, Wisc., this July.

The workshop, presented by KEEN, a network of engineering faculty working to instill within student engineers an entrepreneurial mindset, introduced faculty participants to the framework of entrepreneurially minded learning (EML), which is centered on curiosity, connections, and creating value. Hembroff and other participants identified opportunities for EML integration into existing coursework, developed a personal approach to integrating EML within the course design process, and learned how to implement continual improvement of their own EML practice.

Visit https://engineeringunleashed.com for more information about KEEN.


DARPA Research Mentioned in AI Magazine Article


This month’s AI Magazine includes the article “DARPA’s Explainable Artificial Intelligence Program,” which mentions Michigan Tech’s DARPA research. ICC member Shane Mueller is principal investigator of a 4-year, $255K DARPA XAI project.

The section of the article “Naturalistic Decision-Making Foundations of XAI” reads: “The objective of the IHMC team (which includes researchers from MacroCognition and Michigan Technological University) is to develop and evaluate psychologically plausible models of explanation and develop actionable concepts, methods, measures, and metrics for explanatory reasoning. The IHMC team is investigating the nature of explanation itself.”

Abstract: Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychologic requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychologic theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first year of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.

https://www.aaai.org/ojs/index.php/aimagazine/article/view/2850

https://doi.org/10.1609/aimag.v40i2.2850