Day: February 4, 2022

ACSHF Forum: Grad Student Presentations

The Applied Cognitive Science and Human Factors (ACSHF) Forum will be held from 2-3 p.m. Monday (Feb 7) virtually via Zoom. There will be two speakers: Anne Linja and Lauren Monroe, both ACSHF graduate students.

Linja will present “Examining Explicit Rule Learning in Cognitive Tutorials: Training learners to predict machine classification”.

Abstract:
Artificial Intelligence (AI)/Machine Learning (ML) systems are becoming more commonplace and relied upon in our daily lives, and the decisions they make guide our lives. For example, these systems might decide whether we get a loan, what our medical diagnosis is, and even how the full self-driving car we’re sharing the road with behaves. However, we may not be able to predict, or even know, whether or when these systems will make a mistake.

Many Explainable AI (XAI) approaches have developed algorithms that give users a glimpse of the logic a system uses to arrive at its output. However, increased transparency alone may not help users predict the system’s decisions, even when they are aware of the underlying mechanisms.

One possible approach is Cognitive Tutorials for AI (CTAI; Mueller, Tan, Linja et al., 2021), an experiential method for teaching the conditions under which an AI/ML system will succeed or fail. One specific CTAI technique, referred to as Rule Learning, involves teaching simple rules that can be used to predict performance. This technique aims to identify rules that help the user learn when the AI/ML system succeeds, when it fails, what its boundary conditions are, and what types of differences change its output. To evaluate this method, I will report on a series of experiments in which we compared different rule-learning approaches to find the most effective way to train users on these AI/ML systems. Using the MNIST data set, this includes showing positive and negative examples as compared with providing explicit descriptions of rules that can be used to predict the system’s output. Results suggest that although examples help people learn the rules (especially examples of errors), tutorials that combined explicit rule learning with direct example-based practice and feedback led people to best predict the correct and incorrect classifications of an AI/ML system. I will discuss approaches to developing these tutorials for image classifiers and autonomous driving systems.
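To make the example-based part of such an approach concrete, the minimal sketch below (purely illustrative, not the study’s actual materials, stimuli, or models) shows one way a classifier’s correct and incorrect classifications could be collected as candidate tutorial items. It assumes a simple scikit-learn logistic-regression model and uses scikit-learn’s small digits dataset as a stand-in for MNIST.

```python
# Purely illustrative sketch: train a simple classifier and partition its test
# predictions into correct and incorrect classifications, which could serve as
# positive/negative example-based tutorial items. The digits dataset and
# logistic-regression model are stand-in assumptions, not the study's materials.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

# Stand-in "AI/ML system" whose behavior learners would be trained to predict.
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
preds = clf.predict(X_test)

# Successes and failures of the system: candidate tutorial examples.
hits = [(x, true, pred) for x, true, pred in zip(X_test, y_test, preds) if pred == true]
misses = [(x, true, pred) for x, true, pred in zip(X_test, y_test, preds) if pred != true]

print(f"{len(hits)} correct and {len(misses)} incorrect classifications "
      f"available as example-based tutorial items.")
```

In a Rule Learning tutorial of the kind described above, items drawn from the misclassified set could illustrate the boundary conditions under which the system fails, while explicit rule descriptions would summarize the pattern behind those failures.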


Monroe will present “Don’t throw a tempo tantrum: the effects of varying music tempo on vigilance performance and affective state”.

Abstract:
Vigilance tasks, or sustained attention tasks, involve an operator monitoring an environment for an extended period of time for infrequent, randomly occurring critical signals buried among more frequent neutral signals. In addition to an observable decline in task engagement, task performance, and arousal over time, these tasks are also associated with increased subjective workload. Music has previously been shown to have a positive impact on operator engagement and reaction times during sustained attention. The present study (N=50) examined the effects of music played at different tempos on a selection of performance metrics and on subjective measures of mood, engagement, and workload. Results indicated that varying the tempo of the music did not have an effect on the decline in the correct detection of critical signals. There was also no observable impact on measures of engagement and stress, but the fast-tempo condition had a slightly significant positive impact on worry from the pre- to post-task subjective measures.