PhD Candidate Dylan Gaines, Computer Science, to Present Final Oral Examination


PhD candidate Dylan Gaines, Computer Science, will present his final oral examination (defense) on Thursday, November 30, 2023, at 3 pm via Zoom webinar. The title of the defense is “An Ambiguous Technique for Nonvisual Text Entry.”

Gaines is advised by Associate Professor Keith Vertanen, Computer Science.

Join the Zoom webinar.

Title

An Ambiguous Technique for Nonvisual Text Entry

Abstract

Text entry is a common daily task for many people, but it can be challenging for people with visual impairments, particularly on virtual keyboards that lack physical key boundaries. In this thesis, we investigate using a small number of gestures to select from groups of characters, removing most or all dependence on touch location. Once the user completes a word, a predictive language model selects the most likely characters from the groups they selected.
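To illustrate the general idea (not the thesis's exact implementation), the sketch below shows how a sequence of group selections can be disambiguated, with a small unigram frequency list standing in for the predictive language model. The six Qwerty-row-inspired groups and the tiny vocabulary are hypothetical.

```python
# Minimal sketch of group-based disambiguation. The groups and the toy
# unigram "language model" below are illustrative assumptions.

GROUPS = {
    0: "qwert", 1: "yuiop", 2: "asdfg",
    3: "hjkl", 4: "zxcvb", 5: "nm",
}
CHAR_TO_GROUP = {c: g for g, chars in GROUPS.items() for c in chars}

# Toy unigram frequencies standing in for a predictive language model.
UNIGRAMS = {"hello": 0.6, "jelly": 0.3, "yells": 0.1}

def code_sequence(word: str) -> tuple:
    """Map a word to the sequence of group IDs the user would select."""
    return tuple(CHAR_TO_GROUP[c] for c in word)

def disambiguate(codes: tuple) -> str:
    """Return the most probable word whose letters match the selected groups."""
    candidates = [w for w in UNIGRAMS if code_sequence(w) == codes]
    return max(candidates, key=UNIGRAMS.get) if candidates else ""

# "hello" and "jelly" map to the same group sequence (3, 0, 3, 3, 1);
# the frequency model breaks the tie in favor of "hello".
print(disambiguate(code_sequence("hello")))  # -> "hello"
```

A full system would use a much larger vocabulary and a stronger language model, but the core step of matching a group-code sequence against candidate words and ranking them by likelihood has the same shape.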

Using a preliminary interface with six groups of characters based on a Qwerty keyboard, we find that users are able to enter text with no visual feedback at 19.1 words per minute (WPM) with a 2.1% character error rate (CER) after five hours of practice. We explore ways to optimize the ambiguous groups to reduce the number of disambiguation errors. To remove all remaining location dependence and enable one-handed input, we develop the FlexType interface with four character groups instead of six. In another user study, we compare optimized groups with and without constraining the group assignments to alphabetical order, and find that users enter text with no visual feedback at 12.0 WPM with a 2.0% CER using the constrained groups after four hours of practice, with no significant difference from the unconstrained groups.
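As a rough illustration of what an alphabetically constrained search over group assignments might look like, the sketch below enumerates contiguous four-way splits of the alphabet and scores each by how many words in a toy vocabulary collide onto the same group sequence. The word list, the collision objective, and the resulting groups are illustrative assumptions, not the groups or optimization used in the thesis.

```python
from itertools import combinations
from collections import Counter
from string import ascii_lowercase

# Toy vocabulary standing in for a real lexicon (illustrative only).
WORDS = ["the", "and", "you", "that", "was", "for", "are", "with", "his", "they"]

def groups_from_cuts(cuts):
    """Split a-z into four contiguous, alphabetical groups at the given cut points."""
    bounds = [0, *cuts, 26]
    return [ascii_lowercase[bounds[i]:bounds[i + 1]] for i in range(4)]

def collisions(groups):
    """Count extra words that share a group-code sequence with another word."""
    char_to_group = {c: g for g, chars in enumerate(groups) for c in chars}
    codes = Counter(tuple(char_to_group[c] for c in w) for w in WORDS)
    return sum(n - 1 for n in codes.values() if n > 1)

# Exhaustively try every alphabetical partition of a-z into four non-empty
# groups and keep the one with the fewest collisions on the toy vocabulary.
best_cuts = min(combinations(range(1, 26), 3),
                key=lambda cuts: collisions(groups_from_cuts(cuts)))
print(groups_from_cuts(best_cuts))
```

An unconstrained optimization would instead search over arbitrary assignments of letters to the four groups, which greatly enlarges the search space.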

We improve FlexType based on user feedback and tune the recognition algorithm's parameters using the study data. We conduct an interview study with 12 blind users to assess the challenges they encounter while entering text and to solicit feedback on FlexType, which we further incorporate into the interface. Finally, we evaluate the improved interface in a longitudinal study with 16 blind users.