Tim Havens (CS/ICC) coauthored the article, “Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks,” which was accepted for publication in the journal IEEE Transactions on Fuzzy Systems.
Citation: M.A. Islam, D.T. Anderson, A. Pinar, T.C. Havens, G. Scott, and J.M. Keller. Enabling explainable fusion in deep learning with fuzzy integral neural networks. Accepted, IEEE Trans. Fuzzy Systems.
Abstract: Information fusion is an essential part of numerous engineering systems and biological functions, e.g., human cognition. Fusion occurs at many levels, ranging from the low-level combination of signals to the high-level aggregation of heterogeneous decision-making processes. While the last decade has witnessed an explosion of research in deep learning, fusion in neural networks has not seen the same revolution. Specifically, most neural fusion approaches are ad hoc, are poorly understood, are distributed rather than localized, and/or offer little to no explainability. Herein, we prove that the fuzzy Choquet integral (ChI), a powerful nonlinear aggregation function, can be represented as a multi-layer network, referred to hereafter as ChIMP. We also put forth an improved ChIMP (iChIMP) that admits stochastic gradient descent-based optimization despite the exponential number of ChI inequality constraints. An additional benefit of ChIMP/iChIMP is that it enables eXplainable AI (XAI). Synthetic validation experiments are provided, and iChIMP is applied to the fusion of a set of deep models with heterogeneous architectures in remote sensing. We show an improvement in model accuracy, and our previously established XAI indices shed light on the quality of our data, model, and its decisions.
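The abstract centers on the fuzzy Choquet integral as an aggregation operator. As a minimal sketch of that operator (not the paper's ChIMP/iChIMP network, which learns the fuzzy measure via gradient descent), the snippet below computes the discrete Choquet integral for a handful of source scores; the specific measure values and scores are illustrative placeholders, not taken from the paper.

```python
def choquet_integral(scores, g):
    """Aggregate per-source scores with respect to a fuzzy measure g.

    scores : list of floats, one confidence/score per source
    g      : dict mapping frozenset of source indices -> measure value,
             with g[frozenset()] = 0, g[all sources] = 1, and monotone in
             set inclusion (these properties are assumed, not checked here).
    """
    # Visit sources in descending order of their scores.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total, prev_measure = 0.0, 0.0
    subset = frozenset()
    for i in order:
        subset = subset | {i}                  # top-k sources seen so far
        measure = g[subset]
        total += scores[i] * (measure - prev_measure)
        prev_measure = measure
    return total


# Hypothetical three-source example with a hand-picked fuzzy measure.
g = {
    frozenset(): 0.0,
    frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
    frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
    frozenset({0, 1, 2}): 1.0,
}
print(choquet_integral([0.9, 0.5, 0.7], g))  # 0.70 for these placeholder values
```

In the paper's setting, the entries of the fuzzy measure are the learnable parameters, and the monotonicity (inequality) constraints over all subsets are what iChIMP's formulation handles during optimization.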