Month: July 2019

Appropriating Everyday Surfaces for Tap Interaction

Zachary Garavet and Siva Kakula

Researchers:

Scott Kuhl (Associate Professor, CS)

Keith Vertanen (Assistant Professor, CS)

Sponsor: ECE Alumnus Paul Williams ’61

Amount of Support: $44,000

Duration of Support: 1 year

What if an everyday surface, like a table, could be transformed into a rich, interactive surface that can remotely operate things like computers, entertainment systems, and home appliances?

That’s what Michigan Tech Institute of Computing and Cybersystems (ICC) researchers Keith Vertanen and Scott Kuhl set out to do with a $44K seed grant from Electrical and Computer Engineering alumnus Paul Williams ’61.

Vertanen, assistant professor of computer science, and Kuhl, associate professor of computer science, are members of the ICC’s Center for Human-Centered Computing, which integrates art, people, design, technology, and human experience in research spanning multiple areas of human-centered computing. They were assisted in this research by PhD candidate Siva Krishna Kakula, Computer Science, and undergraduate Zachary Garavet, Computer Engineering.

The team’s research goals were threefold: to create machine learning models that can precisely locate a user’s taps on a surface using only an array of inexpensive surface microphones; demonstrate the feasibility and precision of the models by developing a virtual keyboard interface on an ordinary wooden table; and conduct user studies to validate the system’s usability and performance.

The researchers are working on a related technical conference paper to present to their peers. Their outcomes included a prototype virtual keyboard that supports typing at rates comparable to a touchscreen device; possibly the first-ever acoustic sensing algorithm that infers a continuous two-dimensional tap location; and novel statistical models that quickly adapt to individual users and varied input surfaces.
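The core sensing problem, inferring where a tap landed from sound alone, can be illustrated with a classical time-difference-of-arrival (TDOA) calculation. The sketch below is not the team's published model: the microphone positions, tabletop size, and surface wave speed are all hypothetical, and the simple grid search stands in for their statistical models. It simulates a tap, then recovers its 2D location from the pairwise differences in arrival times at four microphones.

```python
import itertools
import math

# Hypothetical microphone positions at the corners of a 60 cm x 40 cm
# tabletop (meters), and an assumed wave speed in the surface (m/s).
MICS = [(0.0, 0.0), (0.6, 0.0), (0.0, 0.4), (0.6, 0.4)]
SPEED = 500.0

def arrival_times(tap):
    """Simulate the time at which each microphone hears a tap at `tap`."""
    return [math.dist(tap, m) / SPEED for m in MICS]

def locate(times, step=0.005):
    """Grid-search the tabletop for the point whose pairwise arrival-time
    differences best match the observed ones (least-squares over TDOAs).
    Using differences removes the unknown moment the tap occurred."""
    pairs = list(itertools.combinations(range(len(MICS)), 2))
    obs = [times[j] - times[i] for i, j in pairs]
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 0.6:
        y = 0.0
        while y <= 0.4:
            t = [math.dist((x, y), m) / SPEED for m in MICS]
            pred = [t[j] - t[i] for i, j in pairs]
            err = sum((p - o) ** 2 for p, o in zip(pred, obs))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

est = locate(arrival_times((0.30, 0.20)))  # recover a tap at (0.30, 0.20)
```

In practice, the hard part is not this geometry but estimating the arrival times reliably from noisy signals on a dispersive surface, which is where the team's machine learning models come in.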

Further, their results, hardware, and data sets can be applied to future collaborative work, and were used in the researchers’ $500K National Science Foundation proposal, “Text Interaction in Virtual and Augmented Environments,” which is under review.

Future applications of the research include enriched interactions in Virtual Reality (VR) and Augmented Reality (AR), beyond what existing vision-only sensing can support; and on-body interaction, like using your palm as an input surface.

Vertanen and Kuhl plan to continue this research, working to improve the accuracy of tap location inference, build richer interactions like swiping or tapping with multiple fingers, develop wireless sensor pods that can be quickly and easily deployed on any flat surface, and explore the display of virtual visual content on surfaces via Augmented Reality smartglasses.

View a video about this research at https://youtu.be/sF7aeXMfsIQ.

Seed grant donor Paul Williams is also the benefactor of the Paul and Susan Williams Center for Computer Systems Research, located on the fifth floor of the Electrical Energy Resources Center. The 10,000-square-foot, high-performance computing center—the home of the ICC—was established to foster close collaboration among researchers across multiple disciplines at Michigan Tech.

The ICC, founded in 2015, promotes collaborative, cross-disciplinary research and learning experiences in the areas of cyber-physical systems, cybersecurity, data sciences, human-centered computing, and scalable architectures and systems. It provides faculty and students the opportunity to work across organizational boundaries to create an environment that mirrors contemporary technological innovation.

Five research centers make up the ICC. The ICC’s 50 members, who represent 15 academic units at Michigan Tech, are collaborating to conduct impactful research, make valuable contributions in the field of computing, and solve problems of critical national importance.

Visit the ICC website at mtu.edu/icc. Contact the ICC at icc-contact@mtu.edu or 906-487-2518.

Download a summary of this research.

Development of a Low-Cost Marine Mobile Networking Infrastructure

Zhaohui Wang

Researchers:

Zhaohui Wang, Assistant Professor, ECE

Nina Mahmoudian, Adjunct Professor, ME-EM

Sponsor: ECE alumnus Paul Williams ’61
Amount of Support: $50,000
Duration of Support: 1 year

Underwater acoustic communication has been in use for decades, primarily for military applications. In recent years, commercial sectors such as environmental monitoring, offshore oil and gas exploration, and aquaculture have become interested in its possibilities.

But existing research on underwater acoustic communication networks often relies on human-operated surface ships or cost-prohibitive autonomous underwater vehicles (AUVs). These cost barriers can confine academic evaluation to computer simulations, limiting the translation of research innovations into practical applications.

Recognizing this gap, Michigan Tech Institute of Computing and Cybersystems (ICC) researchers Zhaohui Wang, assistant professor, Electrical and Computer Engineering, and Nina Mahmoudian, adjunct professor, Mechanical Engineering-Engineering Mechanics, saw an opportunity to combine their areas of expertise: underwater acoustic communications for Wang, and low-cost marine robotics and AUVs for Mahmoudian.

Also part of the research team were PhD student Li Wei, Electrical and Computer Engineering, and post-doc research engineer Barzin Moridian, Mechanical Engineering-Engineering Mechanics. The team also collaborated with scientists at Michigan Tech’s Great Lakes Research Center.

With a $50K seed grant from Electrical and Computer Engineering alumnus Paul Williams ’61, the team took the research beneath the surface to develop a low-cost marine mobile infrastructure and investigate the challenges and possible solutions in engineering a leading-edge AUV communication network.

They broke it down into three areas: the development of low-cost, high-modularity autonomous surface vehicles (ASVs), each equipped with a collection of sensors and serving as surrogates for AUVs; equipping each ASV with an acoustic modem and implementing communication and networking protocols to facilitate underwater communication among the vessels; and conducting field experiments to collect data about the fundamental challenges in mobile acoustic communications and networking among AUVs.
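One reason underwater networking is hard is that acoustic modems are half-duplex and sound travels slowly in water, so vehicles typically share the channel by taking turns. As a hedged illustration (not the protocol the team implemented, and with made-up parameters), the sketch below sizes a time-division (TDMA) transmit slot with a guard interval for worst-case propagation delay and computes when a given vehicle may next transmit.

```python
import math

# Illustrative TDMA schedule for a small acoustic-modem network.
# All parameters are hypothetical, not taken from the team's deployment.
SOUND_SPEED = 1500.0   # nominal speed of sound in water, m/s
MAX_RANGE = 1500.0     # assumed maximum vehicle separation, m
PACKET_TIME = 2.0      # seconds needed to transmit one data packet

GUARD = MAX_RANGE / SOUND_SPEED  # worst-case propagation delay (s)
SLOT = PACKET_TIME + GUARD       # one transmit slot, including guard time

def next_transmit_time(node_id, num_nodes, now):
    """Earliest time >= `now` at which `node_id` may start transmitting.
    Each node owns one slot per frame; a frame is one round of all nodes."""
    frame = SLOT * num_nodes
    offset = node_id * SLOT  # this node's slot start within each frame
    k = max(0, math.ceil((now - offset) / frame))
    return offset + k * frame
```

With three nodes, node 1's slots begin at 3 s, 12 s, 21 s, and so on; the one-second guard keeps a packet from one vehicle from arriving during another vehicle's slot even at maximum range.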

The team’s outcomes included two low-cost, autonomous, on-the-water boats; an experimental data set, data analysis, and preliminary results; a technical paper presented at the 2018 IEEE OES Autonomous Underwater Vehicle Symposium; and a marine mobile wireless networking infrastructure for use in continued research.

Just half of their seed grant has been used, and this summer Wang and Mahmoudian will work to improve the boats and the communications system, and conduct more field research. In addition, they are planning to write two National Science Foundation proposals to take their research even further.

View a summary of the research here.


Download a summary of the research.

Remotely Sensed Image Classification Refined by Michigan Tech Researchers

Thomas Oommen (left) and James Bialas

By Karen S. Johnson

View the press release.

With close to 2,000 working satellites currently orbiting the Earth, and about a third of them engaged in observing and imaging our planet,* the sheer volume of remote sensing imagery being collected and transmitted to the surface is astounding. Add to this images collected by drones, and the estimate grows quite possibly beyond the imagination.

How on earth are science and industry making sense of it all? All of this remote sensing imagery needs to be converted into tangible information so it can be utilized by government and industry to respond to disasters and address other questions of global importance.

James Bialas demonstrates the use of a drone that records aerial images.

In the old days, say around the 1970s, a simpler pixel-by-pixel approach was used to decipher satellite imagery data; a single pixel in those low-resolution images contained just one or two buildings. Since then, increasingly higher resolution has become the norm, and a single building may now occupy several pixels in an image.

A new approach was needed. Enter GEOBIA (Geographic Object-Based Image Analysis), a processing framework of machine-learning algorithms that automates much of the process of translating all that data into a map useful for, say, identifying damage to urban areas following an earthquake.

In use since the 1990s, GEOBIA is an object-based, machine-learning method that results in more accurate classification of remotely sensed images. The method’s algorithms group adjacent pixels that share similar, user-defined characteristics, such as color or shape, in a process called segmentation. It’s similar to what our eyes (and brains) do to make sense of what we’re seeing when we look at a large image or scene.

In turn, these segmented groups of pixels are investigated by additional algorithms that determine if the group of pixels is, say, a damaged building or an undamaged stretch of pavement, in a process known as classification.
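The two-stage pipeline described above, first grouping similar adjacent pixels into segments and then classifying each segment, can be shown in miniature. This is a toy sketch, not the algorithms from the paper: the "image" is a hand-made 5x5 grid of gray values, the segmentation is a simple flood fill with a similarity tolerance, and the rule-based classifier stands in for a trained classifier such as a random forest operating on richer object features.

```python
from collections import deque

# Toy 5x5 "image": each cell holds a gray value (0-255).
IMAGE = [
    [200, 200, 200,  40,  40],
    [200, 200, 200,  40,  40],
    [200, 200, 120, 120,  40],
    [ 80,  80, 120, 120,  40],
    [ 80,  80, 120, 120,  40],
]

def segment(image, tol=30):
    """Group adjacent pixels whose values differ by <= tol (flood fill).
    Returns a label grid and a list of pixel coordinates per segment."""
    h, w = len(image), len(image[0])
    labels = [[None] * w for _ in range(h)]
    segments = []
    for r in range(h):
        for c in range(w):
            if labels[r][c] is not None:
                continue
            seg_id = len(segments)
            pixels, queue = [], deque([(r, c)])
            labels[r][c] = seg_id
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and abs(image[ny][nx] - image[y][x]) <= tol):
                        labels[ny][nx] = seg_id
                        queue.append((ny, nx))
            segments.append(pixels)
    return labels, segments

def classify(image, segments):
    """Label each segment from its mean gray value; a stand-in for a
    trained per-object classifier such as a random forest."""
    names = []
    for pixels in segments:
        mean = sum(image[y][x] for y, x in pixels) / len(pixels)
        names.append("building" if mean > 150 else
                     "pavement" if mean > 100 else "shadow")
    return names

labels, segs = segment(IMAGE)
classes = classify(IMAGE, segs)  # one class name per segment
```

Note how the `tol` parameter plays the role of the segmentation level discussed below: raise it and neighboring regions merge into fewer, larger objects; lower it and small features split into their own segments.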

The refinement of GEOBIA methods has engaged geoscientists, data scientists, geographic information systems (GIS) professionals and others for several decades. Among them are Michigan Tech doctoral candidate James Bialas, along with his faculty advisors, Thomas Oommen (GMERS/DataS) and Timothy Havens (ECE/DataS). The interdisciplinary team’s successful research to improve the speed and accuracy of GEOBIA’s classification phase is the topic of the article “Optimal segmentation of high spatial resolution images for the classification of buildings using random forests,” recently published in the International Journal of Applied Earth Observation and Geoinformation.

A classified scene.
A classified scene using a smaller segmentation level.

The team’s research started with aerial imagery of Christchurch, New Zealand, following the 2011 earthquake there.

“The specific question we looked at was, how do we translate the information we get from the crowd into labels that are coherent for an object-based image analysis?” Bialas said, adding that they specifically looked at the classification of city center buildings, which typically make up about fifty percent of an image of any city center area.

After independently hand-classifying three sets of the same image data with which to verify their results (see images below), Bialas and his team started looking at how the image segmentation size affects the accuracy of the results.

A fully classified scene after the machine learning algorithm has been trained on all the classes the researchers used, and the remaining data has been classified.

“At an extremely small segmentation level, you’ll see individual things on building roofs, like HVAC equipment and other small features, and these will each become a separate image segment,” Bialas explained, but as the image segmentation parameter expands, it begins to encompass whole buildings or even whole city blocks.

“The big finding of this research is that, completely independent of the labeled data sets we used, our classification results stayed consistent across the different image segmentation levels,” Bialas said. “And more importantly, within a fairly large range of segmentation values, there was pretty much no impact on results. In the past several decades a lot of work has been done trying to figure out this optimum segmentation level of exactly how big to make the image objects.”

“This research is important because as the GEOBIA problem becomes bigger and bigger—there are companies that are looking to image the entire planet Earth per day—a massive amount of data is being collected,” Bialas noted, and in the case of natural disasters where response time is critical, for example, “there may not be enough time to calculate the most perfect segmentation level, and you’ll just have to pick a segmentation level and hope it works.”

This research is part of a larger project that is investigating how crowdsourcing can improve the outcome of geographic object-based image analysis, and also how GEOBIA methods can be used to improve the crowdsourced classification of any project, not just earthquake damage, such as massive oil spills and airplane crashes.

One vital use of crowdsourced remotely sensed imagery is creating maps for first responders and disaster relief organizations. This faster, more accurate GEOBIA processing method can result in more timely disaster relief.

*Union of Concerned Scientists (UCS) Satellite Database

Illustrations of portions of the three different data sets used in the research.

ECE Department to Host Cyber-physical Security Workshop July 30-31

The Department of Electrical and Computer Engineering is pleased to announce a two-day workshop on cyber-physical security for power infrastructure and transportation, to be held on campus July 30-31, 2019. Experts from industry and academia will share information on current threats and countermeasures to protect power infrastructure and transportation systems.

Registration supports 13 hours of continuing education for professional license holders.

More detailed information on the workshop can be found on the ECE blog.

The cost for Michigan Tech faculty and staff to attend is $100, and the cost for students is $25. Register for the workshop on the online store. To receive the discount, faculty and staff must use the promotional code MTUFAC, and students must use the code MTUSTU on the registration form checkout page.

Questions about the workshop can be directed to ECE at 7-2550 or ece@mtu.edu.

Havens Is Co-Chair of Fuzzy Systems Conference

Timothy Havens (CC/ICC) was General Co-Chair of the 2019 IEEE International Conference on Fuzzy Systems in New Orleans, LA, June 23 to 26. At the conference, Havens presented his paper, “Machine Learning of Choquet Integral Regression with Respect to a Bounded Capacity (or Non-monotonic Fuzzy Measure),” and served on the panel, “Publishing in IEEE Transactions on Fuzzy Systems.”

Three additional papers authored by Havens were published in the conference’s proceedings: “Transfer Learning for the Choquet Integral,” “The Choquet Integral Neuron, Its PyTorch Implementation and Application to Decision Fusion,” and “Measuring Similarity Between Discontinuous Intervals – Challenges and Solutions.”