Advancements in Eyes-free Text Entry

For Keith Vertanen, the satisfaction of helping people with visual impairments is a byproduct of the challenge he seeks.

Vertanen’s research will offer more texting options not only to the blind community, but to the situationally impaired, too.

“My interest stemmed from sighted text entry research. The decoder (a touchscreen keyboard recognizer) is so accurate—we craved a bigger undertaking,” Vertanen explains. So he dug into literature and consulted with users who are blind to determine the need for better eyes-free text-entry options.

Existing accessibility solutions are slow. “There is a delay because users have to search for the target key or graphic and wait for audio feedback,” Vertanen says. As users slide a finger around the touchscreen, the system announces via text-to-speech what their finger is over. When they find the element they want (it could be a key on a touchscreen keyboard), they double tap with their searching finger or they “split tap” by tapping with a second finger. The interaction technique was developed out of research at the University of Washington and is now a standard accessibility feature on iPhone and Android phones.

With Vertanen’s prototype, users with visual impairments imagine the size, position, and orientation of the Qwerty keyboard. They are asked to tap out letters, and eventually sentences. So far, users accurately tap their intended text on the imaginary display about 50 percent of the time.


There’s more work to be done. From this noisy data, Vertanen asks two questions: Can we develop new and improved algorithms to more accurately recognize the user’s intended text? And can we find ways users can provide the recognizer with a better signal while still allowing fast entry?
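Recovering intended text from noisy taps is, at heart, a probabilistic decoding problem. The sketch below is illustrative, not Vertanen's actual decoder: it assumes Gaussian tap noise around hypothetical key centers and scores each candidate word by how well it explains the tap sequence. A real decoder would also weigh in a language model and handle insertions and deletions.

```python
# Hypothetical key centers on a unit-square keyboard layout; only a few keys shown.
KEYS = {
    "c": (0.30, 0.8), "a": (0.05, 0.5), "t": (0.45, 0.2),
    "r": (0.35, 0.2), "o": (0.85, 0.2),
}

def log_gauss(tap, center, sigma=0.08):
    """Log-likelihood (up to a constant) of a tap under 2D Gaussian
    noise centered on a key."""
    dx, dy = tap[0] - center[0], tap[1] - center[1]
    return -(dx * dx + dy * dy) / (2 * sigma * sigma)

def decode(taps, vocabulary):
    """Return the vocabulary word that best explains the tap sequence."""
    def score(word):
        if len(word) != len(taps):
            return float("-inf")   # only same-length words compete in this sketch
        return sum(log_gauss(t, KEYS[ch]) for t, ch in zip(taps, word))
    return max(vocabulary, key=score)

taps = [(0.28, 0.75), (0.09, 0.55), (0.43, 0.25)]  # noisy presses near c, a, t
print(decode(taps, ["cat", "car", "cot"]))  # prints "cat"
```

Even when individual taps land closer to a neighboring key, summing log-likelihoods over the whole sequence lets the most plausible word win.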

These new texting options will serve not only the blind community but also the situationally impaired: “Those times when you cannot attend to your phone, like when you’re walking. Or perhaps we can treat your airline tray table as a touch-typing surface—but without a visual display.”

His research will also shape the devices of the future, which may be designed without a text display at all.

“These are hard problems to solve. The other challenge is how to make error correction efficient and pleasant. This is especially true if people are entering difficult text such as proper names or acronyms. A complementary question is: how do you design text-entry interfaces that allow users to be more explicit (albeit slower) about the parts of their text they anticipate will be difficult to recognize?” Vertanen asks.

MIT Lincoln Laboratory contract for Tim Havens


Associate Professor Tim Havens received a $15,000 contract from MIT Lincoln Laboratory. Tim and his team will investigate signal processing for active phased array systems with simultaneous transmit and receive capability. While this capability offers increased performance in communications, radar, and electronic warfare applications, the challenge is that a high level of isolation must be achieved between the transmit and receive antennas in order to mitigate self-interference in the array. This is a half-year project. Timothy Schulz at ECE is the co-PI. Excellent work, Tim!

Women in Computing Day

Sixteen young women interested in computing careers will be on campus tomorrow for the fifth Women in Computing Day visit.

The day-long program is a joint recruitment initiative of undergraduate admissions, computer science, and electrical and computer engineering. It is designed to increase awareness of the breadth and depth of computing careers while increasing diversity on campus.

Students will work in teams and independently to program a 3D virtual reality scene, build a working heart rate monitor, create a hologram and learn about embedded systems and programming by using computer code to control a robot. They will also hear about computing majors and minors and have the opportunity to talk with current students and faculty to learn more about Michigan Tech. Programming for parents and family members is also scheduled throughout the day.

Women in Computing Day is held biannually and attracts prospective students from across the Midwest. The fall program is tentatively scheduled for Oct. 27.

Making Data Retrieval More Efficient

When a user performs a search in social media, the request doesn’t stay within that platform. It calls upon the resources of a data center. “When someone sends a request to a data center, they want an immediate answer—they don’t want to wait,” Zhenlin Wang explains.


Together with colleagues from Peking University, the University of Rochester, Wayne State University, and Michigan Tech, Wang looked to improve the internal structure, theory, and algorithms of the memory cache to make it more efficient.

This work is an outgrowth of his 2007 CAREER award.


“Currently, bulky disks store the data and are slow to react. When a smaller, in-memory cache is used, the search is much faster,” he adds. “We built upon memcached, open-source caching software adopted by Facebook and Twitter. They modified their approach to adapt to user demand. Our method beats their current practices,” Wang says.

“Imagine inviting 100 people over to your house for dinner, but only four will fit in your dining room. When we think about data resource management, it’s a similar scenario.”
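The dinner-party analogy maps directly onto cache eviction: when the cache (the dining room) is full, someone must leave to seat a newcomer. Below is a minimal sketch of least-recently-used (LRU) eviction, the classic policy used by in-memory caches such as memcached; the class and guest names are illustrative, not the team's actual design.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny in-memory cache: only `capacity` items fit; the least
    recently used item is evicted when a new one arrives."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.items = OrderedDict()   # insertion order doubles as recency order

    def get(self, key):
        if key not in self.items:
            return None              # miss: caller falls back to the slow disk
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LRUCache(capacity=4)            # the four-seat dining room
for guest in ["ann", "bob", "cal", "dee"]:
    cache.put(guest, "seated")
cache.get("ann")                        # ann becomes most recently used
cache.put("eve", "seated")              # bob, least recently used, is evicted
print(list(cache.items))                # ['cal', 'dee', 'ann', 'eve']
```

The research question is then which items to keep seated: smarter eviction and sizing decisions than plain LRU are what make a cache serve more requests from fast memory.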

Making Smart Vehicles Cognitive

Vehicle networks play an increasingly important role in mobile applications, driving safety, the network economy, and daily life. It is predicted there will be more than 50 million self-driving cars on the road by 2035; the sheer number and density of vehicles will provide non-negligible computing and communication resources in vehicular environments.


It is important to develop a better understanding of the fundamental properties of connected vehicle networks and to create better models and protocols for optimal network performance.

Equipped with a $221,797 NSF grant, Min Song is collaborating with Wenye Wang of North Carolina State University on “The Ontology of Inter-Vehicle Networking with Spatial-Temporal Correlation and Spectrum Cognition.” The pair are investigating the fundamental understanding and challenges of inter-vehicle networking, including foundation and constraints in practice that enable networks to achieve performance limits.

Vehicular communications are driven by the demands of intelligent transportation systems and by standardization activities on DSRC and IEEE 802.11p/WAVE. Many applications, both time-sensitive and delay-tolerant, have been proposed and explored, including cooperative traffic monitoring and control, and more recently blind-crossing assistance, collision prevention, real-time detour route computation, and many others. With the popularity of smart mobile devices, there has also been an explosion of mobile applications in terrestrial navigation, mobile games, and social networking through Apple’s App Store, Google Play, and Windows.

The project will systematically investigate connected vehicles to gain the scientific understanding and engineering guidelines critical to achieving optimal performance and desirable services. Its merit centers on the development of theoretical and practical foundations for services using inter-vehicle networks. The project starts from the formation of cognitive communication networks and moves on to the coverage of messages. It further studies how resilient a network is under network dynamics, including vehicular movements, message dissemination, and routing schemes.

The impact of the research is timely yet long-term, from fully realistic setting of channel modeling, to much-needed applications in vehicular environments, and to transforming performance analysis and protocol design for distributed, dynamic, and mobile systems. The outcome will advance knowledge and understanding not only in the field of vehicular networks, but also mobile ad-hoc networks, cognitive radio networks, wireless sensor networks, and future 5G networks.


High-Performance Wireless Mesh Networks

A wireless mesh network is a network topology in which each wireless node cooperatively relays data for the network. Song’s CAREER Award project developed distributed interference-aware broadcasting protocols for wireless mesh networks to achieve 100 percent reliability, low broadcasting latency, and high throughput. Network-wide broadcasting is a fundamental operation in ad-hoc mesh networks, and many broadcasting protocols with different focuses have been developed for wireless ad-hoc networks. However, these protocols assume a single-radio, single-channel, single-rate network model and/or a generalized physical model, and they do not take into account the impact of interference. This project focuses on the design, analysis, and implementation of distributed broadcasting protocols for multi-radio, multi-channel, and multi-rate ad-hoc mesh networks.
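In its simplest form, network-wide broadcasting is flooding: each node retransmits a message the first time it hears it. The sketch below (topology and node names are illustrative) shows the baseline; even with duplicate suppression, every node transmits once, and on a single shared channel those transmissions contend with one another. Reducing that redundancy under realistic interference, across multiple radios and channels, is what the project's protocols address.

```python
from collections import deque

# Illustrative mesh topology: node -> wireless neighbors (single channel assumed).
TOPOLOGY = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}

def flood(source):
    """Naive network-wide broadcast: every node retransmits a message
    the first time it hears it, suppressing later duplicates."""
    seen = {source}            # nodes that have already heard the message
    transmissions = 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        transmissions += 1     # each node transmits once, even leaf nodes
        for neighbor in TOPOLOGY[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen, transmissions

reached, tx = flood("a")
print(sorted(reached), tx)     # all five nodes reached, five transmissions
```

Note that nodes b and c both relay to d, and leaf node e retransmits uselessly; interference-aware protocols prune exactly this kind of redundant, colliding traffic.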

Song’s work advances knowledge and understanding in the areas of wireless mesh networks, network optimization, information dissemination, and network performance analysis. Research findings allow the research community and network service providers to better understand the technical implications of heterogeneous networking technologies and cross-layer protocol support, and to create new technology needed for building future wireless mesh networks. The techniques developed in this project will have a broad impact on a spectrum of applications, including homeland security, military network deployment, information dissemination, and daily life. A deep understanding of interference and broadcasting will foster the deployment of more wireless mesh networks, and the development of better network protocols and network architecture. The problems studied are pragmatically and intellectually important and the solutions are critical to several areas such as modeling of wireless communication links, system performance analysis, and algorithms.

In the News

An AP news article titled “Michigan Tech Students Teach Tech to the Inexperienced,” which features Michigan Tech’s BASIC (Building Adult Skills in Computing) program, Charles Wallace (CS), and Kelly Steelman (CLS), was published in the Charlotte Observer, Kansas City Star, Miami Herald, Washington Times, and many other news outlets across the country.

Drs. Wallace and Steelman were also featured on our blog post, Breaking Digital Barriers, last month highlighting their research.

Deep Inside the Mind Music Machine Lab

Cognitive science is a relatively new interdisciplinary field weaving neuroscience, psychology, linguistics, anthropology, and philosophy with computer science. Cognitive scientist Myounghoon “Philart” Jeon, whose nickname translates to “love art,” studies how the human mind reacts to technology. Inside a unique research lab at Michigan Tech, Philart teaches digital devices how to recognize and react to human emotion.

Art Meets Technology

Humans respond to technology, but can technology respond to humans? That’s the goal of the Mind Music Machine Lab. Together with Computer Science and Human Factors students, Philart looks at both physical and internal states of artists at work. He asks: What goes on in an artist’s brain while making art?


Reflective tracking markers are attached to performers—which have included dancers, visual artists, robots, and even puppies—and 12 infrared cameras track their movement so the system can visualize and sonify it. From the movements, the immersive Interactive Sonification Platform (iISoP) detects four primary emotions: happy, sad, angry, and content. The result is a system that recognizes movement and emotion to generate real-time music and art.
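One common way such a system could map movement features onto those four emotions is the circumplex model of affect, which arranges emotions along arousal (energy) and valence (positivity) axes. The sketch below is a hypothetical illustration of that mapping, not the lab's actual pipeline, and the feature names are assumptions.

```python
def classify_emotion(arousal, valence):
    """Map two movement-derived features onto four emotion quadrants,
    following the circumplex model of affect (an assumed, simplified mapping).

    arousal: movement energy in [0, 1] (e.g., speed of tracked markers)
    valence: movement positivity in [0, 1] (e.g., openness of posture)
    """
    if arousal >= 0.5:
        return "happy" if valence >= 0.5 else "angry"
    return "content" if valence >= 0.5 else "sad"

print(classify_emotion(0.9, 0.8))  # energetic, open movement -> happy
print(classify_emotion(0.2, 0.1))  # slow, closed movement -> sad
```

In practice, a system like this would estimate arousal and valence from marker trajectories (speed, jerk, spatial extent) before applying any such mapping.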

Robotic Friends for Kids with Autism

Just as technology may not pick up subtle emotional cues, children with autism spectrum disorder (ASD) have difficulties with social interaction and with verbal and nonverbal communication. In this National Institutes of Health-funded project, Jeon uses technology in the form of interactive robots to provide feedback and stimuli to children with ASD.


Studies indicate that children with autism prefer simpler animals and robots to complex humans. “These children have difficulty expressing emotions. And robots can help express and read emotion,” he says.

Robots are programmed to say phrases with different emotional inflections. Cameras and a Microsoft Kinect detect the children’s facial expressions, and sound cues reinforce what each emotion is. All the while, parents and clinicians monitor the interaction between child and robot.


Microdevice for Rapid Blood Typing without Reagents and Hematocrit Determination – STTR: Phase II

Michigan Tech Associate Professor Laura Brown (co-PI) and Robert Minerick (PI) of Microdevice Engineering, Inc. received a new award from the National Science Foundation to develop a portable, low-cost blood typing and anemia screening device for use in blood donation centers, hospitals, humanitarian efforts, and the military.

This device provides the ability to pre-screen donors by blood type and selectively direct the donation process (i.e. plasma, red cells) to reduce blood product waste and better match supply with hospital demand. This portable technology could also be translated to remote geographical locations for disaster relief applications.

The proposed project will advance knowledge across multiple fields, including microfluidics and the use of electric fields to characterize cells, in order to identify the molecular expressions on blood cells responsible for ABO-Rh blood type and to rapidly measure cell concentration. The project also includes the development of software for real-time tracking of cell population motion, and it adapts advanced pattern recognition tools, such as machine learning and statistical analysis, to identify features and predict blood types.


Visualizing a Bright Future for Computer Science Education

Visualization is a process of presenting data and algorithms using graphics and animations to help people understand or see the inner workings. It’s the work of Ching-Kuang “CK” Shene. “It’s very fascinating work,” Shene says. “The goal is to make all hidden facts visible.”


All 10 of Shene’s National Science Foundation-funded projects center on geometry, computer graphics, and visualization. Together with colleagues from Michigan Tech, he’s transferring the unseen world of visualization into the classroom.

Shene helps students and professionals learn the algorithm—the step-by-step formula—of software through visualization tools. His tools offer a demo mode, so teachers can present an animation of a procedure to their class; a practice mode, for learners to try an exercise; and a quiz mode, to assess mastery of the concept. Tools Shene has implemented at Michigan Tech and around the world include DesignMentor, for Bézier, B-Spline, and NURBS curve and surface design; ThreadMentor, a visualization of multi-thread execution and synchronization; and CryptoMentor, a set of six tools that visualize cryptographic algorithms.


Shene and Associate Professor of Computer Science Jean Mayo are collaborating on two new tools—Access Control and VACCS. He hopes his lifetime of visualization work helps advance the field of computer science: “My goal is to visualize everything in computer science.”