Archives—March 2017

Making Smart Vehicles Cognitive

Vehicle networks play an increasingly important role in mobile applications, driving safety, the network economy, and daily life. It is predicted there will be more than 50 million self-driving cars on the road by 2035; the sheer number and density of vehicles represent non-negligible computing and communication resources in vehicular environments.


It is important to develop a better understanding of the fundamental properties of connected vehicle networks and to create better models and protocols for optimal network performance.

Equipped with a $221,797 NSF grant, Min Song is collaborating with Wenye Wang of North Carolina State University on “The Ontology of Inter-Vehicle Networking with Spatial-Temporal Correlation and Spectrum Cognition.” The pair are investigating the fundamental understanding and challenges of inter-vehicle networking, including the foundations and practical constraints that enable networks to achieve their performance limits.

Vehicular communications are driven by the demands of intelligent transportation systems and by standardization activities on DSRC and IEEE 802.11p/WAVE. Many applications, either time-sensitive or delay-tolerant, have been proposed and explored, including cooperative traffic monitoring and control, recently extended to blind-crossing assistance, collision prevention, real-time detour route computation, and many others. With the popularity of smart mobile devices, there has also been an explosion of mobile applications in terrestrial navigation, mobile games, and social networking through Apple’s App Store, Google Play, and the Windows Store.

The project will systematically investigate connected vehicles to gain the scientific understanding and engineering guidelines critical to achieving optimal performance and desirable services. Its merit centers on the development of theoretical and practical foundations for services using inter-vehicle networks. The project starts with the formation of cognitive communication networks and moves on to the coverage of messages. It further studies how resilient a network is under network dynamics, including vehicular movements, message dissemination, and routing schemes.

The impact of the research is timely yet long-term, extending from fully realistic channel modeling, to much-needed applications in vehicular environments, to transforming performance analysis and protocol design for distributed, dynamic, and mobile systems. The outcome will advance knowledge and understanding not only in vehicular networks, but also in mobile ad-hoc networks, cognitive radio networks, wireless sensor networks, and future 5G networks.


High-Performance Wireless Mesh Networks

A wireless mesh network is a network topology in which each wireless node cooperatively relays data for the network. Song’s CAREER Award project developed distributed interference-aware broadcasting protocols for wireless mesh networks to achieve 100 percent reliability, low broadcasting latency, and high throughput. Network-wide broadcasting is a fundamental operation in ad-hoc mesh networks, and many broadcasting protocols with different focuses have been developed for wireless ad-hoc networks. However, these protocols assume a single-radio, single-channel, single-rate network model and/or a generalized physical model, and do not take into account the impact of interference. This project focuses on the design, analysis, and implementation of distributed broadcasting protocols for multi-radio, multi-channel, and multi-rate ad-hoc mesh networks.
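The project’s protocols are not reproduced here, but a toy example illustrates why network-wide broadcasting is worth optimizing. In naive flooding, every node rebroadcasts a message the first time it hears it, which reaches all nodes but wastes transmissions (and thus creates interference) in dense meshes. The sketch below, with an invented five-node topology, counts those redundant rebroadcasts; the node names and graph are illustrative only, not from the project.

```python
from collections import deque

def flood_broadcast(neighbors, source):
    """Naive network-wide flooding: every node rebroadcasts a message
    the first time it receives it. Returns (nodes reached, number of
    transmissions). `neighbors` maps each node to the set of nodes
    within its radio range."""
    received = {source}
    transmissions = 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        transmissions += 1  # this node rebroadcasts exactly once
        for peer in neighbors[node]:
            if peer not in received:
                received.add(peer)
                queue.append(peer)
    return received, transmissions

# A small hypothetical 5-node mesh. Flooding uses 5 transmissions even
# though fewer relays would cover the network -- the redundancy that
# interference-aware broadcast protocols aim to eliminate.
mesh = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}
reached, tx = flood_broadcast(mesh, 0)
```

Here nodes 0 and 2 both relay to the same neighborhood, so one of their transmissions is pure overhead; selecting a smaller relay set is the kind of optimization the project studies.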

Song’s work advances knowledge and understanding in the areas of wireless mesh networks, network optimization, information dissemination, and network performance analysis. Research findings allow the research community and network service providers to better understand the technical implications of heterogeneous networking technologies and cross-layer protocol support, and to create new technology needed for building future wireless mesh networks. The techniques developed in this project will have a broad impact on a spectrum of applications, including homeland security, military network deployment, information dissemination, and daily life. A deep understanding of interference and broadcasting will foster the deployment of more wireless mesh networks and the development of better network protocols and network architecture. The problems studied are practically and intellectually important, and the solutions are critical to areas such as the modeling of wireless communication links, system performance analysis, and algorithm design.

In the News

An AP news article titled “Michigan Tech Students Teach Tech to the Inexperienced,” which features Michigan Tech’s BASIC (Building Adult Skills in Computing) program, Charles Wallace (CS), and Kelly Steelman (CLS), was published in the Charlotte Observer, Kansas City Star, Miami Herald, Washington Times, and many other news outlets across the country.

Drs. Wallace and Steelman were also featured in last month’s blog post, Breaking Digital Barriers, highlighting their research.

Deep Inside the Mind Music Machine Lab

Cognitive science is a relatively new interdisciplinary field weaving neuroscience, psychology, linguistics, anthropology, and philosophy with computer science. Cognitive scientist Myounghoon “Philart” Jeon, whose nickname translates to “love art,” studies how the human mind reacts to technology. Inside a unique research lab at Michigan Tech, Philart teaches digital devices how to recognize and react to human emotion.

Art Meets Technology

Humans respond to technology, but can technology respond to humans? That’s the goal of the Mind Music Machine Lab. Together with Computer Science and Human Factors students, Philart looks at both physical and internal states of artists at work. He asks: What goes on in an artist’s brain while making art?


Reflective tracking markers are attached to performance artists—who have included dancers, visual artists, robots, and even puppies—and 12 infrared cameras visualize and sonify their movement. From the movements, the immersive Interactive Sonification Platform (iISoP) detects four primary emotions: happy, sad, angry, and content. The result is a system that recognizes movement and emotion to generate real-time music and art.
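The article does not describe how the platform maps movement to the four emotions, but a common approach in affective computing is an arousal/valence grid, and the four listed emotions fall neatly into its quadrants. The sketch below is a hypothetical illustration of that idea, not the lab’s actual method: `energy` and `smoothness` are invented stand-in features, and the 0.5 thresholds are arbitrary.

```python
def classify_emotion(energy, smoothness):
    """Toy mapping of two hypothetical movement features (each in [0, 1])
    onto four emotions via an arousal/valence grid:
      energy     -> arousal (fast, large movements = high arousal)
      smoothness -> valence (fluid movements = positive valence)
    Thresholds and features are invented for illustration only."""
    high_arousal = energy >= 0.5
    positive = smoothness >= 0.5
    if high_arousal and positive:
        return "happy"      # high arousal, positive valence
    if high_arousal:
        return "angry"      # high arousal, negative valence
    if positive:
        return "content"    # low arousal, positive valence
    return "sad"            # low arousal, negative valence

# Fast, fluid movement reads as "happy"; slow, jerky movement as "sad".
mood = classify_emotion(0.9, 0.8)
```

A real system would of course extract features from the infrared camera streams and use a trained model rather than fixed thresholds.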

Robotic Friends for Kids with Autism

Just as technology may not pick up subtle emotional cues, children with autism spectrum disorder (ASD) have difficulties with social interaction and with verbal and nonverbal communication. In this National Institutes of Health-funded project, Jeon uses technology in the form of interactive robots to provide feedback and stimuli to children with ASD.

Studies indicate autistic children prefer simplistic animals and robots to complex humans. “These children have difficulty expressing emotions. And robots can help express and read emotion,” he says.

The robots are programmed to say phrases with different emotional inflections. Cameras and a Microsoft Kinect detect the children’s facial expressions, and sound cues reinforce each emotion. All the while, parents and clinicians monitor the interaction between child and robot.


Microdevice for Rapid Blood Typing without Reagents and Hematocrit Determination – STTR: Phase II

Laura Brown

Michigan Tech Associate Professor Laura Brown (co-PI) and Robert Minerick (PI) of Microdevice Engineering, Inc. were granted a new award funded by the National Science Foundation. The broader impact/commercial potential of this project is the development of a portable, low-cost blood typing and anemia screening device for use in blood donation centers, hospitals, humanitarian efforts, and the military. This device provides the ability to pre-screen donors by blood type and selectively direct the donation process (i.e., plasma, red cells) to reduce blood product waste and better match supply with hospital demand. This portable technology could also be translated to remote geographical locations for disaster relief applications.

The proposed project will advance knowledge across multiple fields, including microfluidics and the use of electric fields to characterize cells, in order to identify the molecular expression on blood cells responsible for ABO-Rh blood type and to rapidly measure cell concentration. The project includes the development of software for real-time tracking of cell population motion and adapts pattern recognition tools, such as machine learning and statistical analysis, for the identification of features and the prediction of blood types.
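The article does not specify which pattern recognition method the project uses; as a minimal sketch of the general idea of predicting a label from measured cell features, here is a nearest-centroid classifier. The feature axes, centroid values, and blood-type labels below are all invented for illustration and do not come from the project.

```python
def nearest_centroid(sample, centroids):
    """Assign a measurement to the class whose centroid is closest in
    squared Euclidean distance. `centroids` maps a class label to a
    feature tuple of the same length as `sample`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical 2-D centroids per blood type, e.g. (normalized crossover
# frequency, normalized response magnitude) -- invented values only.
centroids = {"A+": (1.0, 0.8), "B+": (0.4, 0.9), "O-": (0.2, 0.3)}
blood_type = nearest_centroid((0.25, 0.35), centroids)
```

In practice the features would be extracted from the real-time cell-tracking software, and a trained model with validated decision boundaries would replace the hand-placed centroids.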


Visualizing a Bright Future for Computer Science Education

Visualization is the process of presenting data and algorithms using graphics and animations to help people see and understand their inner workings. It’s the work of Ching-Kuang “CK” Shene. “It’s very fascinating work,” Shene says. “The goal is to make all hidden facts visible.”

All 10 of Shene’s National Science Foundation-funded projects center on geometry, computer graphics, and visualization. Together with colleagues from Michigan Tech, he’s transferring the unseen world of visualization into the classroom.

Shene helps students and professionals learn the algorithm—the step-by-step formula—of software through visualization tools. His tools offer a demo mode so teachers can present an animation of the procedure to their class; a practice mode for learners to try an exercise; and a quiz mode to assess mastery of the concept. Tools Shene has implemented at Michigan Tech and the world over include DesignMentor for Bézier, B-Spline, and NURBS curve and surface design; ThreadMentor—visualization for multi-thread execution and synchronization—and CryptoMentor, a set of six tools to visualize cryptographic algorithms.


Shene and Associate Professor of Computer Science Jean Mayo are collaborating on two new tools—Access Control and VACCS. He hopes his lifetime of visualization work helps advance the field of computer science: “My goal is to visualize everything in computer science.”

Google visits Computer Science

Google grad student presentation 2017
During the week of February 28, 2017, two Googlers, Eric Dalquist, who received his BS in Computer Science from Michigan Tech in 2004, and Kurt Dresner, visited the Computer Science Department.

They met with graduate students, faculty, and staff, and hosted a tech talk on campus for students who wanted to learn more about Google and the opportunities it offers graduate students.

Eric and Kurt also hosted two workshops: a resume workshop, where students found out what Google looks for in a resume, and a Preparing for a Technical Interview workshop, where students learned what they need to do to prepare for a technical interview.

Faculty and students also met with Eric for a question-and-answer hour about “Life at Google.”