Timothy Havens (CC/ICC), the William and Gloria Jackson Associate Professor of Computer Systems and director of the Institute of Computing and Cybersystems (ICC), was quoted extensively in the article “How to make a career switch into AI: 8 tips,” which was published September 5, 2019, on The Enterprisers Project blog.
George Anderson and Sally Sutherland of the US Naval Undersea Warfare Center (NUWC)-Newport will present talks on Tuesday, September 17, 2019, from 3:00 to 4:00 pm, in Room 202 of the Michigan Tech Great Lakes Research Center. A reception will follow and refreshments will be served.
George Anderson will present his talk from 3:00 to 3:30 pm. Titled “Classification of Personnel and Vehicle Activity Using a Sensor System With Numerous Array Elements,” Anderson’s talk will present the performance of a hybrid discriminative/generative classifier using experimental data collected from a scripted field test.
Sally Sutherland, NEEC Director, NAVSEA Headquarters, will present her talk from 3:30 to 4:00 pm. Titled “An Overview of the Naval Engineering Education Consortium (NEEC) Program,” her talk will share information about the NEEC program, whose mission is to educate and develop world-class naval engineers and scientists to become part of the Navy’s civilian science and engineering workforce.
Research conducted by Michigan Tech doctoral candidate James Bialas and faculty members Thomas Oommen (DataS/GMES/CEE) and Timothy Havens (DataS/CS) made news in the Michigan Ag Connection, August 7, 2019. The item is a re-posting of the Michigan Tech Unscripted article, “Found in Translation,” which was posted on the Michigan Tech website July 12, 2019.
By Karen S. Johnson
With close to 2,000 working satellites currently orbiting the Earth, and about a third of them engaged in observing and imaging our planet,* the sheer volume of remote sensing imagery being collected and transmitted to the surface is astounding. Add to this the images collected by drones, and the estimate grows quite possibly beyond imagination.
How on earth are science and industry making sense of it all? All of this remote sensing imagery needs to be converted into tangible information so it can be utilized by government and industry to respond to disasters and address other questions of global importance.
In the old days, say around the 1970s, a simpler pixel-by-pixel approach was used to decipher satellite imagery data; a single pixel in those low-resolution images contained just one or two buildings. Since then, increasingly higher resolution has become the norm, and a single building may now occupy several pixels in an image.
A new approach was needed. Enter GEOBIA (Geographic Object-Based Image Analysis), a processing framework of machine-learning computer algorithms that automates much of the process of translating all that data into a map useful for, say, identifying damage to urban areas following an earthquake.
In use since the 1990s, GEOBIA is an object-based, machine-learning method that results in more accurate classification of remotely sensed images. The method’s algorithms group adjacent pixels that share similar, user-defined characteristics, such as color or shape, in a process called segmentation. It’s similar to what our eyes (and brains) do to make sense of what we’re seeing when we look at a large image or scene.
In turn, these segmented groups of pixels are investigated by additional algorithms that determine if the group of pixels is, say, a damaged building or an undamaged stretch of pavement, in a process known as classification.
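The two steps above — grouping similar adjacent pixels, then labeling each group — can be sketched in a few lines of code. This is only a toy illustration: the flood-fill grouping, the brightness threshold, and the “building”/“ground” labels are all invented here, whereas the published work applies random forests to real high-resolution aerial imagery.

```python
from collections import deque

def segment(image, tolerance):
    """Group adjacent pixels whose values differ by at most `tolerance`
    (a toy stand-in for GEOBIA's similarity-based segmentation)."""
    rows, cols = len(image), len(image[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Flood-fill one segment, seeded at (r, c).
            labels[r][c] = next_label
            queue = deque([(r, c)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] is None
                            and abs(image[ny][nx] - image[y][x]) <= tolerance):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels, next_label

def classify(image, labels, n_segments, threshold):
    """Label each segment by its mean brightness: 'building' if bright."""
    sums = [0.0] * n_segments
    counts = [0] * n_segments
    for row, lab_row in zip(image, labels):
        for value, lab in zip(row, lab_row):
            sums[lab] += value
            counts[lab] += 1
    return ['building' if sums[i] / counts[i] > threshold else 'ground'
            for i in range(n_segments)]

# Toy 4x4 "image": a bright rooftop block in the upper left, dark pavement elsewhere.
img = [[9, 9, 1, 1],
       [9, 9, 1, 1],
       [1, 1, 1, 1],
       [1, 1, 1, 1]]
labels, n = segment(img, tolerance=2)
print(classify(img, labels, n, threshold=5))  # → ['building', 'ground']
```

Note how the `tolerance` parameter plays the role of the segmentation level discussed below: a larger value merges more pixels into each object.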
The refinement of GEOBIA methods has engaged geoscientists, data scientists, geographic information systems (GIS) professionals, and others for several decades. Among them are Michigan Tech doctoral candidate James Bialas and his faculty advisors, Thomas Oommen (GMES/DataS) and Timothy Havens (ECE/DataS). The interdisciplinary team’s successful research to improve the speed and accuracy of GEOBIA’s classification phase is the topic of the article “Optimal segmentation of high spatial resolution images for the classification of buildings using random forests,” recently published in the International Journal of Applied Earth Observation and Geoinformation.
The team’s research started with aerial imagery of Christchurch, New Zealand, following the 2011 earthquake there.
“The specific question we looked at was, how do we translate the information we get from the crowd into labels that are coherent for an object-based image analysis?” Bialas said, adding that they specifically looked at the classification of city center buildings, which typically make up about fifty percent of an image of any city center area.
After independently hand-classifying three sets of the same image data with which to verify their results (see images below), Bialas and his team started looking at how the image segmentation size affects the accuracy of the results.
“At an extremely small segmentation level, you’ll see individual things on building roofs, like HVAC equipment and other small features, and these will each become a separate image segment,” Bialas explained, but as the image segmentation parameter expands, it begins to encompass whole buildings or even whole city blocks.
“The big finding of this research is that, completely independent of the labeled data sets we used, our classification results stayed consistent across the different image segmentation levels,” Bialas said. “And more importantly, within a fairly large range of segmentation values, there was pretty much no impact on results. In the past several decades a lot of work has been done trying to figure out this optimum segmentation level of exactly how big to make the image objects.”
“This research is important because as the GEOBIA problem becomes bigger and bigger—there are companies that are looking to image the entire planet earth per day—a massive amount of data is being collected,” Bialas noted, and in the case of natural disasters where response time is critical, for example, “there may not be enough time to calculate the most perfect segmentation level, and you’ll just have to pick a segmentation level and hope it works.”
This research is part of a larger project that is investigating how crowdsourcing can improve the outcome of geographic object-based image analysis, and also how GEOBIA methods can be used to improve crowdsourced classification for events beyond earthquake damage, such as massive oil spills and airplane crashes.
One vital use of crowdsourced remotely sensed imagery is creating maps for first responders and disaster relief organizations. This faster, more accurate GEOBIA processing method can result in more timely disaster relief.
*Union of Concerned Scientists (UCS) Satellite Database
Timothy Havens (CC/ICC) was General Co-Chair of the 2019 IEEE International Conference on Fuzzy Systems in New Orleans, LA, June 23 to 26. At the conference, Havens presented his paper, “Machine Learning of Choquet Integral Regression with Respect to a Bounded Capacity (or Non-monotonic Fuzzy Measure),” and served on the panel, “Publishing in IEEE Transactions on Fuzzy Systems.”
Three additional papers authored by Havens were published in the conference’s proceedings: “Transfer Learning for the Choquet Integral,” “The Choquet Integral Neuron, Its PyTorch Implementation and Application to Decision Fusion,” and “Measuring Similarity Between Discontinuous Intervals – Challenges and Solutions.”
ICC Director Tim Havens (DataS) presented an invited talk, “Explainable Deep Fusion,” at the Eindhoven University of Technology, The Netherlands, on May 7, 2019.
Like a winning trivia team, sensor fusion systems seek to combine cooperative and complementary sources to achieve an optimal inference from pooled evidence. In his talk, Havens introduced data-, feature-, and decision-level fusions and discussed in detail two innovations he has made in his research: non-linear aggregation learning with Choquet integrals and their applications in deep learning and Explainable AI (XAI).
Tim Havens (CS/ICC) coauthored the article, “Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks,” which was accepted for publication in the journal IEEE Transactions on Fuzzy Systems.
Citation: M.A. Islam, D.T. Anderson, A. Pinar, T.C. Havens, G. Scott, and J.M. Keller. Enabling explainable fusion in deep learning with fuzzy integral neural networks. Accepted, IEEE Trans. Fuzzy Systems.
Abstract: Information fusion is an essential part of numerous engineering systems and biological functions, e.g., human cognition. Fusion occurs at many levels, ranging from the low-level combination of signals to the high-level aggregation of heterogeneous decision-making processes. While the last decade has witnessed an explosion of research in deep learning, fusion in neural networks has not observed the same revolution. Specifically, most neural fusion approaches are ad hoc, are not understood, are distributed versus localized, and/or explainability is low (if present at all). Herein, we prove that the fuzzy Choquet integral (ChI), a powerful nonlinear aggregation function, can be represented as a multi-layer network, referred to hereafter as ChIMP. We also put forth an improved ChIMP (iChIMP) that leads to a stochastic gradient descent-based optimization in light of the exponential number of ChI inequality constraints. An additional benefit of ChIMP/iChIMP is that it enables eXplainable AI (XAI). Synthetic validation experiments are provided and iChIMP is applied to the fusion of a set of heterogeneous architecture deep models in remote sensing. We show an improvement in model accuracy and our previously established XAI indices shed light on the quality of our data, model, and its decisions.
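For readers unfamiliar with the aggregation function at the heart of ChIMP, the discrete Choquet integral can be computed by sorting the inputs and weighting them by successive differences of a fuzzy measure. A minimal sketch follows; the two-source measure `g` below is an invented example for illustration, not a measure from the paper:

```python
def choquet(values, measure):
    """Discrete Choquet integral of `values` with respect to the set
    function `measure`, which maps frozensets of input indices to [0, 1]."""
    # Visit inputs from largest to smallest, growing the coalition of
    # sources "at least this large" and weighting each drop in value
    # by the increase in the measure.
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    total, prev_g = 0.0, 0.0
    coalition = set()
    for i in order:
        coalition.add(i)
        g = measure(frozenset(coalition))
        total += values[i] * (g - prev_g)
        prev_g = g
    return total

# Hypothetical two-source fusion measure: source 0 alone is worth 0.6,
# source 1 alone 0.3, and together they are fully trusted (1.0).
g = {frozenset(): 0.0, frozenset({0}): 0.6, frozenset({1}): 0.3,
     frozenset({0, 1}): 1.0}
print(choquet([0.9, 0.4], g.get))  # fuses two confidence scores
```

A useful property, and part of why learning the measure is attractive: with an additive measure the Choquet integral reduces to a weighted average, and with a measure that is 1 on every nonempty set it reduces to the maximum, so a learned measure can interpolate between familiar aggregation operators.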
Timothy Havens (ECE) is the principal investigator on a research and development project that has received $96,643 from the Naval Surface Warfare Center. Andrew Barnard (ME-EM) is the Co-PI on the project, which is titled, “Localization, Tracking, and Classification of On-Ice and Underwater Noise Sources Using Machine Learning.”
This is the first year of a potential three-year project totaling $299,533.
Tech Today, March 7, 2019
Timothy Havens (DataS) and Timothy Schulz (DataS) were recently awarded a $15,000 contract from MIT Lincoln Laboratory to investigate signal processing for active phased array systems with simultaneous transmit and receive capability. While this capability offers increased performance in communications, radar, and electronic warfare applications, the challenging aspect is that a high level of isolation must be achieved between the transmit and receive antennas in order to mitigate self-interference in the array. This project spearheads a collaboration with Dr. Jon Doane (BS and MS from MTU) in MIT Lincoln Laboratory’s RF Technology Group. Ian Cummings, an NSF Graduate Research Fellow who is co-advised by Havens and Schulz, is undertaking this research for his PhD dissertation and will spend the summers at MIT Lincoln Laboratory as part of the project.
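To give a flavor of what digital self-interference mitigation can look like, here is a generic least-mean-squares (LMS) canceller that learns the transmit-to-receive leakage channel and subtracts its estimate. This is a textbook adaptive-filter sketch, not the project's method; the leakage channel taps and signals are invented for the example.

```python
import math

def lms_cancel(tx, rx, taps=4, mu=0.2):
    """LMS canceller: adaptively estimate the Tx-to-Rx leakage channel
    and subtract the estimate from the received samples."""
    w = [0.0] * taps                       # adaptive channel estimate
    residual = []
    for n in range(len(rx)):
        # Most recent `taps` transmit samples (zero before time 0).
        x = [tx[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        est = sum(wi * xi for wi, xi in zip(w, x))
        e = rx[n] - est                    # what remains after cancellation
        # Gradient step toward the true leakage channel.
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        residual.append(e)
    return residual

# Toy scenario: the received signal is purely leaked transmit energy
# through an invented 2-tap channel, so ideal cancellation leaves ~zero.
tx = [math.sin(0.3 * n) for n in range(300)]
rx = [0.8 * tx[n] + (0.2 * tx[n - 1] if n else 0.0) for n in range(300)]
residual = lms_cancel(tx, rx)
```

In a real simultaneous transmit-and-receive array, digital cancellation like this would be combined with antenna isolation and analog cancellation, since the leakage can be many orders of magnitude stronger than the signal of interest.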