Category: DataS

Oommen Part of Team in Mumbai Working on Disaster Management Curriculum

Thomas Oommen

Thomas Oommen (DataS), associate professor of geological and mining engineering and sciences, was recently featured in the Michigan Tech Unscripted research blog post “Geohazards on the Horizon.”

Oommen was part of a US team in Mumbai this August working on disaster management curriculum with the Tata Institute of Social Sciences (TISS), the only institution in all of Mumbai—one of the world’s largest cities with 19 million people—to offer a degree in disaster management.

Oommen’s trip was funded by the US Consulate General in Mumbai. Read more about the team’s work on the Unscripted blog here: https://www.mtu.edu/unscripted/stories/2019/august/geohazards-on-the-horizon.html

Mari Buche Is Co-author of Article in ACM SIGMIS Database

Mari Buche

Mari Buche (DataS), School of Business and Economics associate dean and professor of management information systems, is co-author of the article, “He Said, She Said: Communication Theory of Identity and the Challenges Men Face in the Information Systems Workplace,” which was published in the August 2019 issue of the journal ACM SIGMIS Database: the DATABASE for Advances in Information Systems.

Co-authors of the article are Cynthia K. Riemenschneider, Baylor University, and Deb Armstrong, Florida State University.

Abstract: The preponderance of the academic research focused on diversity in the IS field has emphasized the perspectives of women and racioethnic minorities. Recent research has found that following the appointment of a female CEO, white male top managers provided less help to colleagues, particularly those identified as minority-status (McDonald, Keeves, & Westphal, 2018). Additionally, Collinson and Hearn (1994) assert that white men’s universal status and their occupancy of the normative standard state have rendered them invisible as objects of analysis. To develop a more holistic view of the IS workplace, we expand the academic exploration by looking at the challenges men face in the Information Systems (IS) workplace. Using a cognitive lens, we evoke the challenges men perceive they face at work and cast them into revealed causal maps. We then repeat the process evoking women’s perspectives of men’s challenges. The findings are analyzed using the Communication Theory of Identity (CTI) to determine the areas of overlap and identity gaps. This study advances our understanding of the cognitive overlap (and lack thereof) regarding the challenges facing men in the IS field, and provides another step toward developing a more inclusive IS work environment.

Citation:
ACM SIGMIS Database: the DATABASE for Advances in Information Systems
Volume 50 Issue 3, August 2019
Pages 85-115
ACM New York, NY, USA

https://dl.acm.org/citation.cfm?id=3353407

DOI: 10.1145/3353401.3353407

Susanta Ghosh is PI on $170K NSF Grant

Susanta Ghosh

Susanta Ghosh (ICC-DataS/MEEM/MuSTI) is Principal Investigator on a project that has received a $170,604 research and development grant from the National Science Foundation. The project is titled “EAGER: An Atomistic-Continuum Formulation for the Mechanics of Monolayer Transition Metal Dichalcogenides.” This is a potential 19-month project.

Dr. Ghosh is an assistant professor of Mechanical Engineering-Engineering Mechanics at Michigan Tech. Before joining the Michigan Tech College of Engineering, Dr. Ghosh was an associate in research in the Pratt School of Engineering at Duke University; a postdoctoral scholar in the departments of Aerospace Engineering and Materials Science & Engineering at the University of Michigan, Ann Arbor; and a research fellow at the Technical University of Catalunya, Barcelona, Spain. His M.S. and Ph.D. degrees are from the Indian Institute of Science (IISc), Bangalore. His research interests include multi-scale solid mechanics, atomistic modeling, ultrasound elastography, and inverse problems and computational science.

Abstract: Two-dimensional materials are made of chemical elements or compounds of elements while maintaining a single atomic layer crystalline structure. Two-dimensional materials, especially Transition Metal Dichalcogenides (TMDs), have shown tremendous promise to be transformed into advanced material systems and devices, e.g., field-effect transistors, solar cells, photodetectors, fuel cells, sensors, and transparent flexible displays. To achieve broader use of TMDs across cutting-edge applications, complex deformations for large-area TMDs must be better understood. Large-area TMDs can be simulated and analyzed through predictive modeling, a capability that is currently lacking. This EArly-concept Grant for Exploratory Research (EAGER) award supports fundamental research that overcomes current challenges in large-scale atomistic modeling to obtain an efficient but reliable continuum model for single-layer TMDs containing billions of atoms. The model will be translational and will contribute towards the development of a wide range of applications in the nanotechnology, electronics, and alternative energy industries. The award will further support development of an advanced graduate-level course on multiscale modeling and organization of symposia at two international conferences on mechanics of two-dimensional materials.

Experimental samples of TMDs contain billions of atoms and hence are inaccessible to state-of-the-art molecular dynamics simulations. Moreover, existing crystal elastic models for surfaces cannot be applied to multi-atom-thick 2D TMDs due to the presence of interatomic bonds across the atomic surfaces. The crystal elastic model aims to solve this problem by projecting all interatomic bonds onto the mid-surface to track their deformations. The actual deformed bonds will, therefore, be computed using the deformations of the mid-surface. Additionally, a technique will be derived to incorporate the effects of curvature and stretching of TMDs on their interactions with substrates. The model will be exercised to generate insights into the mechanical instabilities and the role of substrate interactions on them. The coarse-grained model will overcome the computational bottleneck of molecular dynamics models to simulate TMD samples comprising billions of atoms. This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
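To give non-specialists a feel for the bond-projection idea, the short Python sketch below applies a Cauchy-Born-style kinematic rule: reference interatomic bond vectors are pushed through the deformation gradient of the mid-surface, and per-bond stretches are read off. The numbers are invented for illustration, and this toy rule is only a loose analogy to the richer model the project proposes.

```python
# Cauchy-Born-style toy: deform bond vectors through the mid-surface
# deformation gradient F, then compute each bond's stretch ratio.
# All values below are invented for illustration.
import numpy as np

F = np.array([[1.02, 0.01, 0.0],   # small in-plane stretch and shear
              [0.00, 0.98, 0.0],
              [0.00, 0.00, 1.0]])

reference_bonds = np.array([[1.0, 0.0, 0.3],     # toy bonds crossing
                            [-0.5, 0.87, -0.3]]) # the mid-surface

deformed_bonds = reference_bonds @ F.T           # r' = F r for each bond
stretch = (np.linalg.norm(deformed_bonds, axis=1)
           / np.linalg.norm(reference_bonds, axis=1))
print(stretch)  # per-bond stretch under the mid-surface deformation
```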

Improving Reliability of In-Memory Storage

Electronic circuit board

Researcher: Jianhui Yue, PI, Assistant Professor, Computer Science

Sponsor: National Science Foundation, SHF: Small: Collaborative Research

Amount of Support: $192,716

Duration of Support: 3 years

Abstract: Emerging nonvolatile memory (NVM) technologies, such as PCM, STT-RAM, and memristors, provide not only byte-addressability and low-latency reads and writes comparable to DRAM, but also persistent writes and potentially large storage capacity like an SSD. These advantages make NVM likely to become the next generation of fast persistent storage for massive data, referred to as in-memory storage. Yet, NVM-based storage has two challenges: (1) memory cells have limited write endurance (i.e., the total number of program/erase cycles per cell); (2) NVM has to remain in a consistent state in the event of a system crash or power loss. The goal of this project is to develop an efficient in-memory storage framework that addresses these two challenges. This project will take a holistic approach, spanning from low-level architecture design to high-level OS management, to optimize the reliability, performance, and manageability of in-memory storage. The technical approach will involve understanding the implications and impact of the write endurance issue when cutting-edge NVM is adopted into storage systems. The improved understanding will motivate and aid the design of cost-effective methods to improve the lifetime of in-memory storage and to achieve efficient and reliable consistency maintenance.
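As a toy illustration of the first challenge (not a technique from this project), the Python sketch below shows the core idea behind wear leveling: remap a heavily written logical block onto the least-worn physical frame so that no single cell wears out first.

```python
# Toy wear-leveling layer for the write-endurance problem. Real NVM
# controllers also migrate data on remap and use far more refined
# policies; this sketch only tracks write counts and swaps mappings.
class WearLevelingNVM:
    def __init__(self, num_blocks, threshold=100):
        self.mapping = list(range(num_blocks))  # logical block -> physical frame
        self.wear = [0] * num_blocks            # writes absorbed by each frame
        self.threshold = threshold

    def write(self, logical_block):
        phys = self.mapping[logical_block]
        self.wear[phys] += 1
        # If this frame is wearing much faster than the coldest frame,
        # swap the two logical->physical mappings (data migration omitted).
        coldest = min(range(len(self.wear)), key=self.wear.__getitem__)
        if self.wear[phys] - self.wear[coldest] > self.threshold:
            other = self.mapping.index(coldest)
            self.mapping[logical_block], self.mapping[other] = coldest, phys
```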

Publications:

Pai Chen, Jianhui Yue, Xiaofei Liao, Hai Jin. “Optimizing DRAM Cache by a Trade-off between Hit Rate and Hit Latency,” IEEE Transactions on Emerging Topics in Computing, 2018. doi:10.1109/TETC.2018.2800721

Chenlei Tang, Jiguang Wan, Yifeng Zhu, Zhiyuan Liu, Peng Xu, Fei Wu and Changsheng Xie. “RAFS: A RAID-Aware File System to Reduce Parity Update Overhead for SSD RAID,” Design, Automation and Test in Europe Conference (DATE), 2019.

Pai Chen, Jianhui Yue, Xiaofei Liao, Hai Jin. “Trade-off between Hit Rate and Hit Latency for Optimizing DRAM Cache,” IEEE Transactions on Emerging Topics in Computing, 2018.


Remotely Sensed Image Classification Refined by Michigan Tech Researchers

Thomas Oommen (left) and James Bialas

By Karen S. Johnson

View the press release.

With close to 2,000 working satellites currently orbiting the Earth, and about a third of them engaged in observing and imaging our planet,* the sheer volume of remote sensing imagery being collected and transmitted to the surface is astounding. Add to this the images collected by drones, and the total quite possibly grows beyond imagination.

How on earth are science and industry making sense of it all? All of this remote sensing imagery needs to be converted into tangible information so it can be utilized by government and industry to respond to disasters and address other questions of global importance.

James Bialas demonstrates the use of a drone that records aerial images.

In the old days, say around the 1970s, a simpler pixel-by-pixel approach was used to decipher satellite imagery data; a single pixel in those low-resolution images contained just one or two buildings. Since then, increasingly higher resolution has become the norm and a single building may now occupy several pixels in an image.

A new approach was needed. Enter GEOBIA (Geographic Object-Based Image Analysis), a processing framework of machine-learning computer algorithms that automate much of the process of translating all that data into a map useful for, say, identifying damage to urban areas following an earthquake.

In use since the 1990s, GEOBIA is an object-based, machine-learning method that results in more accurate classification of remotely sensed images. The method’s algorithms group adjacent pixels that share similar, user-defined characteristics, such as color or shape, in a process called segmentation. It’s similar to what our eyes (and brains) do to make sense of what we’re seeing when we look at a large image or scene.

In turn, these segmented groups of pixels are investigated by additional algorithms that determine if the group of pixels is, say, a damaged building or an undamaged stretch of pavement, in a process known as classification.
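As a rough illustration of this two-stage pattern, the Python sketch below segments an image into objects and classifies each object with a random forest. It assumes scikit-image and scikit-learn and uses random stand-in imagery, features, and labels; the team’s actual data, segmentation algorithm, and parameters differ.

```python
# GEOBIA-style sketch: segment an image into "objects", summarize each
# object with simple color features, and classify objects with a random
# forest. All data and labels here are random stand-ins for illustration.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def object_features(image, segments):
    """Mean color per segment: one feature vector per image object."""
    ids = np.unique(segments)
    return np.array([image[segments == i].mean(axis=0) for i in ids]), ids

image = np.random.rand(64, 64, 3)          # stand-in for aerial imagery
segments = slic(image, n_segments=50)      # segmentation step
X, ids = object_features(image, segments)
y = np.random.randint(0, 2, len(ids))      # stand-in labels (e.g., damaged or not)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
predicted = clf.predict(X)                 # classification step: class per object
```

In a real pipeline, the training labels would come from hand classification or crowdsourcing, and accuracy would be assessed on held-out objects rather than the training data.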

The refinement of GEOBIA methods has engaged geoscientists, data scientists, geographic information systems (GIS) professionals and others for several decades. Among them are Michigan Tech doctoral candidate James Bialas, along with his faculty advisors, Thomas Oommen (GMERS/DataS) and Timothy Havens (ECE/DataS). The interdisciplinary team’s successful research to improve the speed and accuracy of GEOBIA’s classification phase is the topic of the article “Optimal segmentation of high spatial resolution images for the classification of buildings using random forests,” recently published in the International Journal of Applied Earth Observation and Geoinformation.

A classified scene.
A classified scene using a smaller segmentation level.

The team’s research started with aerial imagery of Christchurch, New Zealand, following the 2011 earthquake there.

“The specific question we looked at was, how do we translate the information we get from the crowd into labels that are coherent for an object-based image analysis?” Bialas said, adding that they specifically looked at the classification of city center buildings, which typically make up about fifty percent of an image of any city center area.

After independently hand-classifying three sets of the same image data with which to verify their results (see images below), Bialas and his team started looking at how the image segmentation size affects the accuracy of the results.

A fully classified scene after the machine learning algorithm has been trained on all the classes the researchers used, and the remaining data has been classified.

“At an extremely small segmentation level, you’ll see individual things on building roofs, like HVAC equipment and other small features, and these will each become a separate image segment,” Bialas explained, but as the image segmentation parameter expands, it begins to encompass whole buildings or even whole city blocks.

“The big finding of this research is that, completely independent of the labeled data sets we used, our classification results stayed consistent across the different image segmentation levels,” Bialas said. “And more importantly, within a fairly large range of segmentation values, there was pretty much no impact on results. In the past several decades a lot of work has been done trying to figure out this optimum segmentation level of exactly how big to make the image objects.”

“This research is important because as the GEOBIA problem becomes bigger and bigger—there are companies that are looking to image the entire planet earth per day—a massive amount of data is being collected,” Bialas noted, and in the case of natural disasters where response time is critical, for example, “there may not be enough time to calculate the most perfect segmentation level, and you’ll just have to pick a segmentation level and hope it works.”

This research is part of a larger project that is investigating how crowdsourcing can improve the outcome of geographic object-based image analysis, and also how GEOBIA methods can be used to improve the crowdsourced classification of events beyond earthquake damage, such as massive oil spills and airplane crashes.

One vital use of crowdsourced remotely sensed imagery is creating maps for first responders and disaster relief organizations. This faster, more accurate GEOBIA processing method can result in more timely disaster relief.

*Union of Concerned Scientists (UCS) Satellite Database

Illustrations of portions of the three different data sets used in the research.

Havens Is Co-Chair of Fuzzy Systems Conference

Timothy Havens (CC/ICC) was General Co-Chair of the 2019 IEEE International Conference on Fuzzy Systems in New Orleans, LA, June 23 to 26. At the conference, Havens presented his paper, “Machine Learning of Choquet Integral Regression with Respect to a Bounded Capacity (or Non-monotonic Fuzzy Measure),” and served on the panel, “Publishing in IEEE Transactions on Fuzzy Systems.”

Three additional papers authored by Havens were published in the conference’s proceedings: “Transfer Learning for the Choquet Integral,” “The Choquet Integral Neuron, Its PyTorch Implementation and Application to Decision Fusion,” and “Measuring Similarity Between Discontinuous Intervals – Challenges and Solutions.”

Tim Havens Presents Talk at Eindhoven University of Technology

ICC Director Tim Havens (DataS) presented an invited talk, “Explainable Deep Fusion,” at Eindhoven University of Technology, The Netherlands, on May 7, 2019.

Like a winning trivia team, sensor fusion systems seek to combine cooperative and complementary sources to achieve an optimal inference from pooled evidence. In his talk, Havens introduced data-, feature-, and decision-level fusions and discussed in detail two innovations he has made in his research: non-linear aggregation learning with Choquet integrals and their applications in deep learning and Explainable AI (XAI).
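For readers unfamiliar with the aggregation operator at the center of this work, the Python sketch below computes a discrete Choquet integral of a few source scores with respect to a capacity (fuzzy measure). The capacity values are invented for illustration; Havens’ research concerns learning such capacities from data, which this sketch does not attempt.

```python
def choquet_integral(values, measure):
    """Discrete Choquet integral of `values` (one score per source) with
    respect to `measure`, a dict mapping frozensets of source indices to
    capacities in [0, 1] (monotone, with the full set mapped to 1)."""
    n = len(values)
    # Rank sources so their scores are in descending order.
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    result = 0.0
    for rank in range(n):
        h_i = values[order[rank]]
        h_next = values[order[rank + 1]] if rank + 1 < n else 0.0
        coalition = frozenset(order[: rank + 1])  # top-(rank+1) sources
        result += (h_i - h_next) * measure[coalition]
    return result

# Illustrative capacity over three sources; the pair values differ from
# sums of singletons, encoding interactions among the sources.
g = {
    frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
    frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
    frozenset({0, 1, 2}): 1.0,
}
print(choquet_integral([0.9, 0.5, 0.3], g))  # 0.62
```

When the capacity is additive, this reduces to an ordinary weighted average; the non-additive entries are what let the integral model cooperative and complementary sources.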