Category Archives: Havens


Tim Havens Quoted in The Enterprisers Project Blog

Timothy Havens

Timothy Havens (CC/ICC), the William and Gloria Jackson Associate Professor of Computer Systems and director of the Institute of Computing and Cybersystems (ICC), was quoted extensively in the article “How to make a career switch into AI: 8 tips,” which was published September 5, 2019, on The Enterprisers Project blog.

https://enterprisersproject.com/article/2019/9/ai-career-path-how-make-switch


Michigan Ag News Headlines: Found in Translation at Michigan Tech

James Bialas does an aerial drone demonstration for students attending the Surveying Summer Youth Program exploration at Michigan Technological University. Drones are one tool in the remote sensing arsenal. Image Credit: Peter Zhu

Research conducted by Michigan Tech doctoral candidate James Bialas and faculty members Thomas Oommen (DataS/GMES/CEE) and Timothy Havens (DataS/CS) made news in the Michigan Ag Connection on August 7, 2019. The item is a re-posting of the Michigan Tech Unscripted article “Found in Translation,” which was posted on the Michigan Tech website July 12, 2019.

http://michiganagconnection.com/story-state.php?Id=856&yr=2019

https://www.mtu.edu/news/stories/2019/july/found-in-translation.html


Remotely Sensed Image Classification Refined by Michigan Tech Researchers

Thomas Oommen (left) and James Bialas

By Karen S. Johnson

View the press release.

With close to 2,000 working satellites currently orbiting the Earth, and about a third of them engaged in observing and imaging our planet,* the sheer volume of remote sensing imagery being collected and transmitted to the surface is astounding. Add to this the images collected by drones, and the total grows quite possibly beyond imagination.

How on earth are science and industry making sense of it all? All of this remote sensing imagery needs to be converted into tangible information so it can be utilized by government and industry to respond to disasters and address other questions of global importance.

James Bialas demonstrates the use of a drone that records aerial images.

In the old days, say around the 1970s, a simpler pixel-by-pixel approach was used to decipher satellite imagery; a single pixel in those low-resolution images contained just one or two buildings. Since then, increasingly higher resolution has become the norm, and a single building may now occupy several pixels in an image.

A new approach was needed. Enter GEOBIA (Geographic Object-Based Image Analysis), a processing framework of machine-learning computer algorithms that automate much of the process of translating all that data into a map useful for, say, identifying damage to urban areas following an earthquake.

In use since the 1990s, GEOBIA is an object-based, machine-learning method that results in more accurate classification of remotely sensed images. The method’s algorithms group adjacent pixels that share similar, user-defined characteristics, such as color or shape, in a process called segmentation. It’s similar to what our eyes (and brains) do to make sense of what we’re seeing when we look at a large image or scene.
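
To make the segmentation step concrete, here is a minimal sketch using the open-source scikit-image library (an illustrative choice; the article does not name the researchers' actual tools). The file name and parameter values are placeholders:

```python
# Segmentation sketch with scikit-image (illustrative only; not the
# study's actual pipeline). Adjacent pixels with similar color are
# grouped into segments; 'scale' controls how large segments become.
from skimage import io, segmentation

image = io.imread("aerial_tile.png")[:, :, :3]   # hypothetical aerial image
segments = segmentation.felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
print(f"{segments.max() + 1} segments found")
```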

In turn, these segmented groups of pixels are investigated by additional algorithms that determine if the group of pixels is, say, a damaged building or an undamaged stretch of pavement, in a process known as classification.
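
The classification step might look like the following sketch, which pairs per-segment features with a random forest (the classifier named in the team's article below); the features and labels here are synthetic stand-ins:

```python
# Classifying segments with a random forest (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 5))      # e.g. mean color + shape stats per segment
y_train = rng.integers(0, 2, 500)   # 0 = undamaged pavement, 1 = damaged building

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(rng.random((10, 5))))   # predicted class per unseen segment
```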

The refinement of GEOBIA methods has engaged geoscientists, data scientists, geographic information systems (GIS) professionals and others for several decades. Among them are Michigan Tech doctoral candidate James Bialas and his faculty advisors, Thomas Oommen (GMES/DataS) and Timothy Havens (ECE/DataS). The interdisciplinary team’s successful research to improve the speed and accuracy of GEOBIA’s classification phase is the topic of the article “Optimal segmentation of high spatial resolution images for the classification of buildings using random forests,” recently published in the International Journal of Applied Earth Observation and Geoinformation.

A classified scene.
A classified scene using a smaller segmentation level.

The team’s research started with aerial imagery of Christchurch, New Zealand, following the 2011 earthquake there.

“The specific question we looked at was, how do we translate the information we get from the crowd into labels that are coherent for an object-based image analysis?” Bialas said, adding that they specifically looked at the classification of city center buildings, which typically make up about fifty percent of an image of any city center area.

After independently hand-classifying three sets of the same image data with which to verify their results (see images below), Bialas and his team started looking at how the image segmentation size affects the accuracy of the results.

A fully classified scene after the machine learning algorithm has been trained on all the classes the researchers used, and the remaining data has been classified.

“At an extremely small segmentation level, you’ll see individual things on building roofs, like HVAC equipment and other small features, and these will each become a separate image segment,” Bialas explained, but as the image segmentation parameter expands, it begins to encompass whole buildings or even whole city blocks.

“The big finding of this research is that, completely independent of the labeled data sets we used, our classification results stayed consistent across the different image segmentation levels,” Bialas said. “And more importantly, within a fairly large range of segmentation values, there was pretty much no impact on results. In the past several decades, a lot of work has been done trying to figure out this optimum segmentation level of exactly how big to make the image objects.”
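
The shape of that experiment can be sketched as a sweep over segmentation scales, here with a stand-in image shipped with scikit-image and toy labels (the study's Christchurch imagery and hand-labeled classes are not reproduced, so the numbers are meaningless; only the structure of the sweep is the point):

```python
# Toy scale sweep: does classification accuracy stay stable as the
# segmentation scale changes? (Stand-in image and toy labels.)
import numpy as np
from skimage import data, segmentation
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

image = data.astronaut()[::2, ::2]    # stand-in RGB image
for scale in (50, 100, 200, 400):
    segments = segmentation.felzenszwalb(image, scale=scale, min_size=50)
    n = segments.max() + 1
    # Mean color per segment as a minimal feature vector.
    feats = np.array([image[segments == s].mean(axis=0) for s in range(n)])
    labels = (feats[:, 0] > feats[:, 0].mean()).astype(int)   # toy labels
    acc = cross_val_score(RandomForestClassifier(50, random_state=0),
                          feats, labels, cv=3).mean()
    print(f"scale={scale:4d}  segments={n:5d}  cv accuracy={acc:.2f}")
```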

“This research is important because as the GEOBIA problem becomes bigger and bigger—there are companies that are looking to image the entire planet Earth per day—a massive amount of data is being collected,” Bialas noted, and in the case of natural disasters where response time is critical, for example, “there may not be enough time to calculate the most perfect segmentation level, and you’ll just have to pick a segmentation level and hope it works.”

This research is part of a larger project investigating how crowdsourcing can improve the outcome of geographic object-based image analysis, and how GEOBIA methods can in turn improve the crowdsourced classification of events beyond earthquake damage, such as massive oil spills and airplane crashes.

One vital use of crowdsourced remotely sensed imagery is creating maps for first responders and disaster relief organizations. This faster, more accurate GEOBIA processing method can result in more timely disaster relief.

*Union of Concerned Scientists (UCS) Satellite Database

Illustrations of portions of the three different data sets used in the research.


Havens Is Co-Chair of Fuzzy Systems Conference

Timothy Havens (CC/ICC) was General Co-Chair of the 2019 IEEE International Conference on Fuzzy Systems in New Orleans, LA, June 23 to 26. At the conference, Havens presented his paper, “Machine Learning of Choquet Integral Regression with Respect to a Bounded Capacity (or Non-monotonic Fuzzy Measure),” and served on the panel, “Publishing in IEEE Transactions on Fuzzy Systems.”

Three additional papers authored by Havens were published in the conference’s proceedings: “Transfer Learning for the Choquet Integral,” “The Choquet Integral Neuron, Its PyTorch Implementation and Application to Decision Fusion,” and “Measuring Similarity Between Discontinuous Intervals – Challenges and Solutions.”


Tim Havens Presents Talk at Eindhoven University of Technology

ICC Director Tim Havens (DataS) presented an invited talk, “Explainable Deep Fusion,” at Eindhoven University of Technology in the Netherlands on May 7, 2019.

Like a winning trivia team, sensor fusion systems seek to combine cooperative and complementary sources to achieve an optimal inference from pooled evidence. In his talk, Havens introduced data-, feature-, and decision-level fusion and discussed in detail two innovations from his research: non-linear aggregation learning with Choquet integrals and their applications in deep learning and Explainable AI (XAI).
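
For readers unfamiliar with it, here is a minimal numpy sketch of the discrete Choquet integral that underlies this kind of aggregation learning; the fuzzy measure values below are hand-picked for illustration, not learned:

```python
# Discrete Choquet integral of inputs x w.r.t. a fuzzy measure g
# (measure values are hand-picked for illustration, not learned).
import numpy as np

def choquet(x, g):
    order = np.argsort(x)[::-1]               # input indices, largest value first
    total, prev, subset = 0.0, 0.0, frozenset()
    for idx in order:
        subset = subset | {int(idx)}
        total += x[idx] * (g[subset] - prev)  # weight by the measure's growth
        prev = g[subset]
    return total

g = {frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
     frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
     frozenset({0, 1, 2}): 1.0}               # monotone measure over 3 sources

print(choquet(np.array([0.7, 0.9, 0.2]), g))  # fused value: 0.66
```

Depending on the choice of measure, the Choquet integral reduces to the minimum, the maximum, or any weighted mean of its inputs, which is what makes it a flexible, learnable aggregator.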


Tim Havens Is Co-author of Article Published in IEEE Transactions on Fuzzy Systems

Tim Havens (CS/ICC) coauthored the article, “Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks,” which was accepted for publication in the journal IEEE Transactions on Fuzzy Systems.

Citation: M.A. Islam, D.T. Anderson, A. Pinar, T.C. Havens, G. Scott, and J.M. Keller. Enabling explainable fusion in deep learning with fuzzy integral neural networks. Accepted, IEEE Trans. Fuzzy Systems.

Abstract: Information fusion is an essential part of numerous engineering systems and biological functions, e.g., human cognition. Fusion occurs at many levels, ranging from the low-level combination of signals to the high-level aggregation of heterogeneous decision-making processes. While the last decade has witnessed an explosion of research in deep learning, fusion in neural networks has not observed the same revolution. Specifically, most neural fusion approaches are ad hoc, are not understood, are distributed versus localized, and/or explainability is low (if present at all). Herein, we prove that the fuzzy Choquet integral (ChI), a powerful nonlinear aggregation function, can be represented as a multi-layer network, referred to hereafter as ChIMP. We also put forth an improved ChIMP (iChIMP) that leads to a stochastic gradient descent-based optimization in light of the exponential number of ChI inequality constraints. An additional benefit of ChIMP/iChIMP is that it enables eXplainable AI (XAI). Synthetic validation experiments are provided and iChIMP is applied to the fusion of a set of heterogeneous architecture deep models in remote sensing. We show an improvement in model accuracy and our previously established XAI indices shed light on the quality of our data, model, and its decisions.
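
A hedged sketch of the paper's central idea, a Choquet integral represented as a network with learnable measure parameters, might look like the following PyTorch fragment. It omits the monotonicity constraints that iChIMP is specifically designed to handle and is not the authors' implementation:

```python
# Choquet-integral "neuron" with a learnable fuzzy measure, in the spirit
# of ChIMP (sketch only: monotonicity constraints on g are NOT enforced).
import itertools
import torch

class ChoquetNeuron(torch.nn.Module):
    def __init__(self, n_inputs):
        super().__init__()
        self.n = n_inputs
        subsets = [s for r in range(1, n_inputs)
                   for s in itertools.combinations(range(n_inputs), r)]
        # One free parameter per nonempty proper subset; g(full set) fixed at 1.
        self.g = torch.nn.ParameterDict(
            {"-".join(map(str, s)): torch.nn.Parameter(torch.rand(1))
             for s in subsets})

    def measure(self, subset):
        if len(subset) == self.n:
            return torch.ones(1)
        return self.g["-".join(map(str, sorted(subset)))]

    def forward(self, x):                        # x: (n_inputs,) tensor
        vals, order = torch.sort(x, descending=True)
        out, prev, subset = x.new_zeros(1), x.new_zeros(1), set()
        for i in range(self.n):
            subset.add(int(order[i]))
            out = out + vals[i] * (self.measure(subset) - prev)
            prev = self.measure(subset)
        return out

neuron = ChoquetNeuron(3)
print(neuron(torch.tensor([0.7, 0.9, 0.2])))     # differentiable fused output
```

Because the output is differentiable with respect to the measure parameters, the measure can be trained with stochastic gradient descent, which is the optimization setting the abstract describes.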


Havens Is PI on Naval Surface Warfare Center Project

Tim Havens

Timothy Havens (ECE) is the principal investigator on a research and development project that has received $96,643 from the Naval Surface Warfare Center. Andrew Barnard (ME-EM) is the Co-PI on the project, which is titled, “Localization, Tracking, and Classification of On-Ice and Underwater Noise Sources Using Machine Learning.”

This is the first year of a potential three-year project totaling $299,533.

Tech Today, March 7, 2019


ICC Members Secure Contract from MIT Lincoln Laboratory

Tim Havens
Tim Schulz

Timothy Havens (DataS) and Timothy Schulz (DataS) were recently awarded a $15,000 contract from MIT Lincoln Laboratory to investigate signal processing for active phased array systems with simultaneous transmit and receive capability. While this capability offers increased performance in communications, radar, and electronic warfare applications, the challenging aspect is that a high level of isolation must be achieved between the transmit and receive antennas in order to mitigate self-interference in the array. This project spearheads a collaboration with Dr. Jon Doane (BS and MS from MTU) in MIT Lincoln Laboratory’s RF Technology Group. Ian Cummings, an NSF Graduate Research Fellow who is co-advised by Havens and Schulz, is undertaking this research for his PhD dissertation and will spend the summers at MIT Lincoln Laboratory as part of the project.
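
As a purely illustrative aside (not drawn from the project itself), one common digital approach to self-interference is to learn the transmit-to-receive leakage channel with an adaptive filter and subtract its estimate from the received signal. A toy LMS version:

```python
# Toy digital self-interference cancellation for simultaneous transmit
# and receive: an LMS filter learns the leakage channel from the known
# transmit signal and subtracts its estimate from the receive signal.
# (Illustrative only; not the project's methods.)
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 5000, 4, 0.01
tx = rng.standard_normal(n)                 # known transmit signal
h_true = np.array([0.8, -0.3, 0.1, 0.05])   # unknown leakage channel
soi = 0.05 * rng.standard_normal(n)         # weak signal of interest
rx = np.convolve(tx, h_true)[:n] + soi      # receive = leakage + SOI

h_est, out = np.zeros(taps), np.zeros(n)
for k in range(taps, n):
    x = tx[k - taps + 1:k + 1][::-1]        # most recent transmit samples
    out[k] = rx[k] - h_est @ x              # residual after cancellation
    h_est += mu * out[k] * x                # LMS update

print("power before:", np.var(rx).round(4), "after:", np.var(out[taps:]).round(4))
```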