Category Archives: News

The Making of a Citizen Science App

Astronomy is a citizen’s science. Its foundation is ordinary people who help answer serious scientific questions by providing vital data to the astronomical community: nebulas, supernovas, and gamma-ray sightings.

The availability of smartphones makes collecting and sharing scientific data easier, faster, and more accurate.

These days, former astronomy teacher Robert Pastel isn’t as interested in the stars, but he is serious about environmental science—and about using computer science and smartphones to capture more data from citizen scientists.

The availability of smartphones makes collecting and sharing scientific data easier, faster, and more accurate. Pastel works with Alex Mayer, professor of civil and environmental engineering at Michigan Tech, students in both computer science and humanities, and scientists around the world to build mobile apps that feed real-world projects.

It starts in the summer, with scientists. “We reach out to them, or they find us. They share an idea and how citizen science can be used,” Pastel explains. “Then the app building begins; it’s about a two-year process.”

When the academic year rolls around, Pastel challenges his Human-Computer Interaction class to build the initial app prototype. In the following year, during Pastel’s Senior Design course, the app undergoes a makeover—from mobile app to a web-based tool. “By this time the scientists have likely changed their minds or solidified their ideas, and more changes are made,” Pastel adds.

An interactive mushroom mapper is the group’s most successful accomplishment to date. Hikers, bikers, or climbers—anyone with a smartphone and an affinity for fungi—capture a photo of the fungus, specify the type, describe the location, and hit submit. All via the app. The mushroom observation data reaches Erik Lilleskov, a research ecologist with the United States Department of Agriculture. Mushroom Mapper has more than 250 observations from around the country. The app is also used for natural science education in local middle schools.

In addition to creating apps for citizen science, this NSF-supported effort has spawned student-initiated software development and offline apps.


Student Success in Computer Science

Redeveloping Michigan Tech’s introductory computer science courses has not been an easy feat. But for Leo Ureel, it’s meaningful work. “It’s about setting the right environment,” he says.

Humans learn best when we communicate with others. We’ve taken what we know works in industry and applied it to the classroom.

In the old model, instructors lectured, then assigned independent tasks. Teaching assistants graded the projects and returned them to students two or three weeks later. In a new model Ureel helped create, students work in groups of two to four to mimic workforce settings. “We are no longer just feeding information. Humans learn best when we communicate with others. We’ve taken what we know works in industry and applied it to the classroom,” Ureel explains.

With support from a Jackson Blended Learning Grant, Ureel implemented a web-based teaching assistant to tighten the feedback loop for students. Students submit code via a web portal and receive instant feedback. “They continue submitting work until they get it right. It’s mastery learning,” Ureel adds.
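
The feedback loop Ureel describes can be imagined as a small autograder: student code is run against instructor-written test cases, and failures are reported immediately so the student can resubmit. The sketch below is an illustration only, not Ureel’s actual system; the `add` exercise, function name, and test cases are hypothetical.

```python
# Minimal sketch of an instant-feedback autograder in the spirit of a
# web-based teaching assistant. Assignment and tests are hypothetical.

def grade_submission(student_code: str) -> list[str]:
    """Run a student's submitted code against test cases and return feedback."""
    tests = [  # (arguments, expected result) for a hypothetical `add` exercise
        ((2, 3), 5),
        ((-1, 1), 0),
        ((0, 0), 0),
    ]
    namespace = {}
    try:
        exec(student_code, namespace)  # load the student's definitions
    except Exception as e:
        return [f"Your code failed to run: {e}"]
    func = namespace.get("add")
    if func is None:
        return ["Please define a function named `add`."]
    feedback = []
    for args, expected in tests:
        try:
            got = func(*args)
        except Exception as e:
            feedback.append(f"add{args} raised {e!r}")
            continue
        if got != expected:
            feedback.append(f"add{args} returned {got}, expected {expected}")
    return feedback or ["All tests passed. Nice work!"]

# A student can resubmit until every test passes (mastery learning):
print(grade_submission("def add(a, b): return a - b"))
print(grade_submission("def add(a, b): return a + b"))
```

Because feedback arrives in seconds rather than weeks, students iterate until the work is correct—the mastery-learning loop Ureel describes.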

Authentic Learning Experiences

When first-year Michigan Tech student Lauren Brindley received a Google Ignite Computer Science grant to provide funding for 10 robots, Ureel knew it was an opportunity to provide a rich learning experience for students. “After graduation, it’s likely students will build robots in their careers; we’re providing real-world, hands-on learning from day one.” Ureel is developing an inquiry-based curriculum in which first-year computer science students explore how to program the rover robots to move about the room.

Ureel’s next challenge is to assess each first-year student to ensure they’re in the proper course. “Nonmajors often come in with little to no programming experience; meanwhile computer science majors are off and running, ready for a challenge,” Ureel says. To help several hundred students determine the best courses, Ureel is creating an online course sample so students get a taste of course content before making any decisions.

Preliminary data indicates Ureel’s efforts are working. “Engagement, retention, and grades are improving.”


Advancements in Eyes-free Text Entry

For Keith Vertanen, the satisfaction of helping people with visual impairments is a byproduct of the challenge he seeks.

Vertanen’s research will offer more texting options not only to the blind community, but to the situationally impaired, too.

“My interest stemmed from sighted text entry research. The decoder (a touchscreen keyboard recognizer) is so accurate—we craved a bigger undertaking,” Vertanen explains. So he dug into literature and consulted with users who are blind to determine the need for better eyes-free text-entry options.

Existing accessibility solutions are slow. “There is a delay because users have to search for the target, key, or graphic and wait for audio feedback,” Vertanen says. As users slide a finger around the touchscreen, the system announces via text-to-speech what their finger is over. When they find the element they want (it could be a key on a touchscreen keyboard), they double tap with their searching finger or they “split tap” by tapping with a second finger. The interaction technique grew out of research at the University of Washington and is now a standard accessibility feature on iPhone and Android phones.

With Vertanen’s prototype, users with visual impairments imagine the size, position, and orientation of the Qwerty keyboard. They are asked to tap out letters, and eventually sentences. So far, users accurately tap their intended text on the imaginary display about 50 percent of the time.

There’s more work to be done. From this noisy data, Vertanen asks two questions: Can we develop new and improved algorithms to more accurately recognize the user’s intended text? And can we find ways users can provide the recognizer with a better signal while still allowing fast entry?
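
One way to picture the recognition problem: each imprecise tap is a point on an imaginary keyboard, and the decoder scores candidate words by how close the tap sequence lands to each word’s letters. The toy sketch below is an illustration only, not Vertanen’s actual decoder; the key coordinates and tiny lexicon are simplified assumptions.

```python
import math

# Toy sketch of decoding noisy taps on an imaginary keyboard: score each
# candidate word by how close the taps land to that word's key centers.

# Approximate (x, y) centers for a few QWERTY keys (units: key widths).
KEY_POS = {
    "c": (2.5, 2), "a": (0.5, 1), "t": (4, 0), "r": (3, 0),
    "o": (8, 0), "n": (5.5, 2), "s": (1.5, 1), "e": (2, 0),
}

def word_cost(taps, word):
    """Sum of distances from each tap to the intended letter's key center."""
    if len(taps) != len(word):
        return math.inf
    return sum(math.dist(t, KEY_POS[ch]) for t, ch in zip(taps, word))

def decode(taps, lexicon):
    """Return the lexicon word whose keys best explain the tap sequence."""
    return min(lexicon, key=lambda w: word_cost(taps, w))

# Three imprecise taps near C, A, T should still decode as "cat".
noisy_taps = [(2.8, 1.6), (0.9, 1.3), (3.7, 0.4)]
print(decode(noisy_taps, ["cat", "car", "con", "sat", "set"]))
```

Real decoders replace the geometric cost with probabilistic touch models and add a language model over word sequences, which is where the algorithmic questions above come in.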

Vertanen’s research will offer more texting options not only to the blind community, but to the situationally impaired, too: “Those times when you cannot attend to your phone, like when you’re walking. Or perhaps we can treat your airline tray table as a touch-typing surface—but without a visual display.”

His research will also impact the devices of the future, which may be designed without a text display.

“These are hard problems to solve. The other challenge is how to make error correction efficient and pleasant. This is especially true if people are entering difficult text such as proper names or acronyms. A complementary question is how to design text-entry interfaces that allow users to be more explicit (albeit slower) about parts of their text they anticipate will be difficult to recognize,” Vertanen says.


Women in Computing Day

Sixteen young women interested in computing careers will be on campus tomorrow for the fifth Women in Computing Day visit.

The day-long program is a joint recruitment initiative between undergraduate admissions, computer science, and electrical and computer engineering, and is designed to increase awareness of the breadth and depth of computing careers while increasing diversity on campus.

Students will work in teams and independently to program a 3D virtual reality scene, build a working heart rate monitor, create a hologram and learn about embedded systems and programming by using computer code to control a robot. They will also hear about computing majors and minors and have the opportunity to talk with current students and faculty to learn more about Michigan Tech. Programming for parents and family members is also scheduled throughout the day.

Women in Computing Day is held twice a year and attracts prospective students from across the Midwest. The fall program is tentatively scheduled for Oct. 27.


Making Data Retrieval More Efficient

When a user performs a search in social media, the request doesn’t stay within that platform. It calls upon the resources of a data center. “When someone sends a request to a data center, they want an immediate answer—they don’t want to wait,” Zhenlin Wang explains.

We built upon memcached, the open-source software adopted by Facebook and Twitter. They modified their approach to adapt to user demand. Our method beats their current practices.

Together with colleagues from Peking University, the University of Rochester, Wayne State University, and Michigan Tech, Wang looked to improve the internal structure, theory, and algorithms of memory caching to make it more efficient.

This work is an outgrowth of his 2007 CAREER award.

“Currently, bulky disks store the data and are slow to react. When smaller, in-memory cache is used, the search is much faster,” he adds. “We built upon memcached, the open-source software adopted by Facebook and Twitter. They modified their approach to adapt to user demand. Our method beats their current practices,” Wang says.

“Imagine inviting 100 people over to your house for dinner, but only four will fit in your dining room. When we think about data resource management, it’s a similar scenario.”
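
The dinner-party analogy maps onto the textbook caching policy: when fast memory is full, evict the item used least recently. The sketch below shows that least-recently-used (LRU) idea with four “seats”; memcached’s actual eviction and slab allocation are more involved, so this is an illustration, not Wang’s method.

```python
from collections import OrderedDict

# Sketch of the idea behind in-memory caching: keep only the hottest
# items in fast memory, evicting the least recently used when full.

class LRUCache:
    def __init__(self, capacity=4):       # four seats at the dinner table
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                   # cache miss: fetch from slow disk
        self.data.move_to_end(key)        # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False) # evict least recently used guest

cache = LRUCache(capacity=4)
for k in ["a", "b", "c", "d"]:
    cache.put(k, k.upper())
cache.get("a")            # "a" is now recently used
cache.put("e", "E")       # table is full: "b" (least recent) is evicted
print(cache.get("b"))     # None: "b" must be refetched from disk
print(cache.get("a"))     # "A": still cached
```

Improving on this baseline—predicting which guests to seat before they knock—is where the theory and algorithms of Wang’s work come in.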



Making Smart Vehicles Cognitive

Vehicle networks play an increasingly important role in mobile applications, driving safety, the network economy, and daily life. It is predicted there will be more than 50 million self-driving cars on the road by 2035; the sheer number and density of vehicles represent non-negligible resources for computing and communication in vehicular environments.

It is important to develop a better understanding of the fundamental properties of connected vehicle networks and to create better models and protocols for optimal network performance.

Equipped with a $221,797 NSF grant, Min Song is collaborating with Wenye Wang of North Carolina State University on “The Ontology of Inter-Vehicle Networking with Spatial-Temporal Correlation and Spectrum Cognition.” The pair are investigating the fundamental properties and challenges of inter-vehicle networking, including the theoretical foundations and practical constraints that enable networks to achieve their performance limits.

Vehicular communications are driven by the demands of intelligent transportation systems and by standardization activities on DSRC and IEEE 802.11p/WAVE. Many applications, either time-sensitive or delay-tolerant, have been proposed and explored, including cooperative traffic monitoring and control, recently extended to blind-crossing assistance, collision prevention, real-time detour route computation, and many others. With the popularity of smart mobile devices, there is also an explosion of mobile applications in terrestrial navigation, mobile games, and social networking through Apple’s App Store, Google Play, and the Windows Store.

The project will systematically investigate connected vehicles to gain the scientific understanding and engineering guidelines critical to achieving optimal performance and desirable services. Its merit centers on the development of theoretical and practical foundations for services using inter-vehicle networks. The project starts from the formation of cognitive communication networks and moves on to the coverage of messages. It further studies how resilient a network is under network dynamics, including vehicular movements, message dissemination, and routing schemes.

The impact of the research is timely yet long-term, ranging from fully realistic channel modeling, to much-needed applications in vehicular environments, to transforming performance analysis and protocol design for distributed, dynamic, and mobile systems. The outcome will advance knowledge and understanding not only in the field of vehicular networks, but also in mobile ad-hoc networks, cognitive radio networks, wireless sensor networks, and future 5G networks.

High-Performance Wireless Mesh Networks

A wireless mesh network is a network topology in which each wireless node cooperatively relays data for the network. Song’s CAREER Award project developed distributed interference-aware broadcasting protocols for wireless mesh networks to achieve 100 percent reliability, low broadcasting latency, and high throughput. Network-wide broadcasting is a fundamental operation in ad-hoc mesh networks, and many broadcasting protocols have been developed for wireless ad-hoc networks with different focuses. However, these protocols assume a single-radio, single-channel, single-rate network model and/or a generalized physical model, and do not take into account the impact of interference. This project focuses on the design, analysis, and implementation of distributed broadcasting protocols for multi-radio, multi-channel, and multi-rate ad-hoc mesh networks.

Song’s work advances knowledge and understanding in the areas of wireless mesh networks, network optimization, information dissemination, and network performance analysis. Research findings allow the research community and network service providers to better understand the technical implications of heterogeneous networking technologies and cross-layer protocol support, and to create new technology needed for building future wireless mesh networks. The techniques developed in this project will have a broad impact on a spectrum of applications, including homeland security, military network deployment, information dissemination, and daily life. A deep understanding of interference and broadcasting will foster the deployment of more wireless mesh networks, and the development of better network protocols and network architecture. The problems studied are pragmatically and intellectually important and the solutions are critical to several areas such as modeling of wireless communication links, system performance analysis, and algorithms.


In the News

An AP news article titled “Michigan Tech Students Teach Tech to the Inexperienced,” which features Michigan Tech’s BASIC (Building Adult Skills in Computing) program, Charles Wallace (CS), and Kelly Steelman (CLS), was published in the Charlotte Observer, Kansas City Star, Miami Herald, Washington Times, and many other news outlets across the country.

Drs. Wallace and Steelman were also featured in last month’s blog post, Breaking Digital Barriers, which highlights their research.


Deep Inside the Mind Music Machine Lab

Cognitive science is a relatively new interdisciplinary field weaving neuroscience, psychology, linguistics, anthropology, and philosophy with computer science. Cognitive scientist Myounghoon “Philart” Jeon, whose nickname translates to “love art,” studies how the human mind reacts to technology. Inside a unique research lab at Michigan Tech, Philart teaches digital devices how to recognize and react to human emotion.

Art Meets Technology

Humans respond to technology, but can technology respond to humans? That’s the goal of the Mind Music Machine Lab. Together with Computer Science and Human Factors students, Philart looks at both physical and internal states of artists at work. He asks: What goes on in an artist’s brain while making art?

Reflective tracking markers are attached to performance artists—which have included dancers, visual artists, robots, and even puppies—and 12 infrared cameras visualize and sonify their movement. From the movements, the immersive Interactive Sonification Platform (iISoP) detects four primary emotions: happy, sad, angry, and content. The result is a system that recognizes movement and emotion to generate real-time music and art.

Robotic Friends for Kids with Autism

Just as technology may struggle to pick up subtle emotional cues, children with autism spectrum disorder (ASD) have difficulties with social interaction and with verbal and nonverbal communication. In this National Institutes of Health-funded project, Jeon uses technology in the form of interactive robots to provide feedback and stimuli to children with ASD.

These children have difficulty expressing emotions. Robots can help express and read emotion.

Studies indicate autistic children prefer simpler animals and robots to complex humans. “These children have difficulty expressing emotions. And robots can help express and read emotion,” he says.

Robots are programmed to say phrases with different emotional inflections. Cameras and a Microsoft Kinect detect the children’s facial expressions, and sound cues reinforce what each emotion is. All the while, parents and clinicians monitor the interaction between child and robot.


Visualizing a Bright Future for Computer Science Education

Visualization is the process of presenting data and algorithms using graphics and animations to help people see and understand their inner workings. It’s the work of Ching-Kuang “CK” Shene. “It’s very fascinating work,” Shene says. “The goal is to make all hidden facts visible.”

Shene helps students and professionals learn the algorithm—the step-by-step formula—of software through visualization tools.

All 10 of Shene’s National Science Foundation-funded projects center on geometry, computer graphics, and visualization. Together with colleagues from Michigan Tech, he’s transferring the unseen world of visualization into the classroom.

Shene helps students and professionals learn the algorithm—the step-by-step formula—of software through visualization tools. His tools offer a demo mode, so teachers can present an animation of the procedure to their class; a practice mode, for learners to try an exercise; and a quiz mode, to assess mastery of the concept. Tools Shene has implemented at Michigan Tech and around the world include DesignMentor, for Bézier, B-spline, and NURBS curve and surface design; ThreadMentor, for visualizing multithreaded execution and synchronization; and CryptoMentor, a set of six tools to visualize cryptographic algorithms.
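
The Bézier curves that DesignMentor visualizes can be evaluated with de Casteljau’s algorithm: repeatedly interpolate between neighboring control points until a single point remains. The sketch below is a generic illustration of that classic algorithm, not DesignMentor’s own code.

```python
# De Casteljau's algorithm: evaluate a Bezier curve at parameter t
# by repeated linear interpolation between control points.

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Each pass replaces n points with n-1 interpolated points.
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# A quadratic Bezier with control points (0,0), (1,2), (2,0):
# at t=0.5 the curve passes through (1.0, 1.0).
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))
```

Animating the intermediate points of each interpolation pass is exactly the kind of hidden step a demo mode can make visible.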

Shene and Associate Professor of Computer Science Jean Mayo are collaborating on two new tools—Access Control and VACCS. He hopes his lifetime of visualization work helps advance the field of computer science: “My goal is to visualize everything in computer science.”