Day: March 30, 2022

Graduate Research Colloquium, 2022

Each spring, Michigan Tech’s Graduate Student Government sponsors the Graduate Research Colloquium (GRC) Poster & Presentation Competition. The GRC is a unique opportunity for current graduate students to share their research with the University community and to gain experience presenting that research to colleagues. This year’s GRC will be run as a virtual mock conference, with presenters grouped into technical sessions ranging from Advances in Modern Medicine and Health to Power and Energy, and everything in between.

Five Applied Cognitive Science and Human Factors (ACSHF) students will be competing in this year’s event on March 29-30.

Lamia Alam

Assessing Cognitive Empathy Elements within the Context of Diagnostic AI Chatbots

Empathy is an important element of any social relationship, and it is especially important in patient-physician communication for ensuring quality of care. Empathy in such communication has many aspects and dimensions. As Artificial Intelligence is increasingly deployed in healthcare, it is critical that patients who interact directly with AI systems share an understanding with those systems. Many of the emotional aspects of empathy may not be achievable by AI systems at present, but cognitive empathy can genuinely be implemented through artificial intelligence in healthcare. We need a better understanding of the elements of cognitive empathy and how these elements can be used effectively. The goal of this research was to investigate whether empathy elements actually improve user perception of AI empathy. We developed the AI Cognitive Empathy Scale (AICES) for that purpose and conducted a study in which the experimental condition combined emotional and cognitive empathy elements. The AICES scale demonstrated reasonable consistency, reliability, and validity, and overall, empathy elements improved perceived empathic concern in diagnostic AI chatbots.
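
For readers curious how a new scale’s internal consistency is typically checked, a common summary statistic is Cronbach’s alpha. The sketch below is purely illustrative: the Likert-style responses are hypothetical stand-ins, not the AICES items or the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 scale items.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```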

Betsy Lehman

Easy Does It: Ease of Generating Alternative Explanations as a Mediator of Counterfactual Reasoning in Ambiguous Social Judgments

According to sensemaking theory (Klein et al., 2007), people must first question their theory of a situation before they can shift their perspective. Questioning one’s perspective may be critical in many situations, such as taking action against climate change, improving diversity and equity at work, or promoting vaccine adoption. However, research on how people question their theories is limited. Using counterfactual theory (Roese & Olson, 1995), we examined several factors and strategies affecting this part of the sensemaking process. Eighty participants generated explanations and predicted outcomes in five ambiguous social situations. The judged likelihood of an alternative outcome served as the measure of questioning one’s frame. Using path analysis, we compared the fit of two models of the data: a base model (i.e., ease, malleable factors, and missing information) and a model based on counterfactual generation theory with ease as a mediator. The counterfactual theory model fit better, indicating that ease of generation may be a critical mediator in the sensemaking process. This work contributes to research on the mechanisms of perspective shifts, supporting applications in system design and training, such as programs to reduce implicit bias.
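
For readers unfamiliar with mediation models of the kind the abstract describes, the sketch below shows a basic single-mediator analysis with statsmodels. The data, variable names, and effect sizes are simulated for illustration; this is not the study’s actual model or results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80  # matches the study's sample size, but the data here are simulated

# Hypothetical variables: X = malleable factors, M = ease of generation,
# Y = judged likelihood of an alternative outcome.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.8, size=n)             # X -> M (path a)
Y = 0.5 * M + 0.1 * X + rng.normal(scale=0.8, size=n)   # M -> Y (path b)

# Path a: predictor -> mediator.
a_model = sm.OLS(M, sm.add_constant(X)).fit()
# Paths b and c': mediator and predictor -> outcome.
b_model = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit()

a = a_model.params[1]   # effect of X on M
b = b_model.params[2]   # effect of M on Y, controlling for X
print(f"indirect (mediated) effect a*b = {a * b:.2f}")
print(f"direct effect c' = {b_model.params[1]:.2f}")
```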

Anne Linja

Examining Explicit Rule Learning in Cognitive Tutorials: Training Learners to Predict Machine Classification

Artificial Intelligence (AI)/Machine Learning (ML) systems are becoming more commonplace and relied upon in our daily lives. Decisions made by AI/ML systems guide our lives: these systems might decide whether we get a loan, and the full self-driving car we share the road with makes decisions of its own. However, we may not be able to predict whether, or when, these systems will make a mistake. Many Explainable AI (XAI) approaches use algorithms that give users a glimpse of the logic a system follows to produce its output. However, increasing transparency alone may not help users predict the system’s decisions, even when users are aware of the underlying mechanisms. One alternative approach is Cognitive Tutorials for AI (CTAI; Mueller et al., 2021), an experiential method for teaching the conditions under which an AI/ML system will succeed or fail. One specific CTAI technique, referred to as Rule Learning, involves teaching simple rules that can be used to predict performance. This technique aims to identify rules that help the user learn when the AI/ML system succeeds, what its boundary conditions are, and what kinds of differences change its output. To evaluate this method, I will report on a series of experiments, using the MNIST data set, in which we compared different rule learning approaches to find the most effective way to train users on these systems, including showing positive and negative examples versus providing explicit descriptions of rules that can be used to predict the system’s output. Results suggest that although examples help people learn the rules, tutorials that combined explicit rule instruction with direct example-based practice and feedback best prepared people to predict correct and incorrect classifications of an AI/ML system.
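
To make the idea of an explicit, teachable rule concrete, here is a rough sketch using scikit-learn’s small digits data set as a stand-in for MNIST. The confidence-threshold rule below is invented for illustration and is not one of the rules taught in the experiments.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small stand-in for MNIST: 8x8 digit images bundled with scikit-learn.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# A hypothetical explicit rule a tutorial might teach: "expect a mistake
# whenever the model's top-class confidence falls below 0.9".
confidence = clf.predict_proba(X_test).max(axis=1)
predicted_error = confidence < 0.9
actual_error = clf.predict(X_test) != y_test

agreement = (predicted_error == actual_error).mean()
print(f"rule anticipates correct/incorrect classification {agreement:.0%} of the time")
```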

Tauseef Ibne Mamun

Connected Crossings: Examining Human Factors in a Field Study

Poor driver decision-making continues to be a challenge at Highway-Rail Grade Crossings (HRGCs). One way to improve safety has been to introduce a new in-vehicle warning system that communicates with the external HRGC warning systems. The system gives drivers different rail-crossing-related warnings (e.g., approaching crossing, train presence) depending on the vehicle’s location. In a rare field study, 15 experienced drivers drove a connected vehicle (Chevy Volt) and used the warning system on a 12-mile loop, then completed a semi-structured interview and a usability survey. Results from the post-drive survey and interview are reported and provide a template for future usability assessments in field studies of new technologies.

Lauren Monroe

Don’t Throw a Tempo Tantrum: The Effects of Varying Music Tempo on Vigilance Performance and Affective State

Vigilance tasks, or sustained attention tasks, require an operator to monitor an environment for an extended period for infrequent, random critical signals buried among more frequent neutral signals. In addition to an observable decline in task engagement, task performance, and arousal over time, these tasks are associated with increased subjective workload. Music has previously been shown to have a positive impact on operator engagement and reaction times during sustained attention; however, the differences between fast- and slow-tempo music in their effects on vigilance performance and subjective mood measures have not been studied. The present study (N = 50) examined the effects of music played at different tempos on a selection of performance metrics and subjective measures of mood, engagement, and workload. Varying the tempo of the music had no effect on the decline in correct detections of critical signals. There was also no significant impact on measures of arousal and stress, but the fast-tempo condition had a slightly positive impact on worry and engagement from pre- to post-task subjective measures.
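
For context, the vigilance decrement the abstract refers to is usually summarized as a drop in correct detections across successive time blocks. The sketch below uses simulated data; the block lengths, signal counts, and detection probabilities are invented and do not reflect the study’s results.

```python
import numpy as np

# Hypothetical vigilance data: for each of four time blocks, whether each
# of 12 critical signals was detected (1 = hit, 0 = miss).
rng = np.random.default_rng(1)
hit_prob_by_block = [0.90, 0.80, 0.72, 0.65]   # illustrative decrement
hits = np.array([rng.binomial(1, p, size=12) for p in hit_prob_by_block])

hit_rates = hits.mean(axis=1)                  # correct detections per block
for block, rate in enumerate(hit_rates, start=1):
    print(f"block {block}: correct detections = {rate:.0%}")
```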

For more information on our student and faculty research see: https://www.mtu.edu/cls/research/