Timothy Havens, the William and Gloria Jackson Associate Professor of Computer Systems, has co-authored a paper recently published in The Journal of the Acoustical Society of America, Volume 150, Issue 1.
The paper is titled “Recurrent networks for direction-of-arrival identification of an acoustic source in a shallow water channel using a vector sensor.” Havens’s co-authors are Steven Whitaker (EE graduate student), Andrew Barnard (ME-EM/GLRC), and George D. Anderson of the US Naval Undersea Warfare Center (NUWC)-Newport.
The work described in the paper was funded by the United States Naval Undersea Warfare Center and Naval Engineering Education Consortium (NEEC) (Grant No. N00174-19-1-0004) and the Office of Naval Research (ONR) (Grant No. N00014-20-1-2793). This is Contribution No. 76 of the Great Lakes Research Center at Michigan Technological University.
Conventional direction-of-arrival (DOA) estimation algorithms for shallow water environments typically suffer from high error due to the presence of many acoustic reflective surfaces and scattering fields. Using data from a single acoustic vector sensor, the magnitude and DOA of an acoustic signature can be estimated; DOA algorithms are then used to reduce the error in these estimates.
Three experiments were conducted using a moving boat as an acoustic target in a waterway in Houghton, Michigan. The shallow and narrow waterway is a complex and non-linear environment for DOA estimation. This paper compares conventional and machine learning algorithms for minimizing DOA error. The conventional algorithm uses frequency-masking averaging, and the machine learning algorithms incorporate two recurrent neural network architectures: one shallow and one deep network.
Results show that the deep neural network models the shallow water environment better than the shallow neural network, and both networks are superior in performance to the frequency-masking average method.
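The networks in the paper refine bearings that a single vector sensor already yields directly. As a rough illustration only (not code from the paper), the azimuth of an ideal plane wave can be recovered from the time-averaged active intensity computed from the pressure and particle-velocity channels:

```python
import math

def doa_azimuth(p, vx, vy):
    """Estimate azimuth (radians) from pressure and particle-velocity
    time series via the time-averaged active intensity vector."""
    ix = sum(pi * vi for pi, vi in zip(p, vx)) / len(p)  # <p * vx>
    iy = sum(pi * vi for pi, vi in zip(p, vy)) / len(p)  # <p * vy>
    return math.atan2(iy, ix)

# Synthetic 100 Hz plane wave arriving from 40 degrees (one second of data)
N = 8000
theta = math.radians(40.0)
p = [math.cos(2 * math.pi * 100 * n / N) for n in range(N)]  # pressure
vx = [math.cos(theta) * s for s in p]                        # velocity, x
vy = [math.sin(theta) * s for s in p]                        # velocity, y

est_deg = math.degrees(doa_azimuth(p, vx, vy))  # ≈ 40.0
```

In a real shallow-water channel, multipath reflections and scattering bias this intensity-based estimate; that residual error is what the paper's recurrent networks learn to reduce.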
by Allison Mills, University Marketing and Communications
A major challenge for fully autonomous vehicles is navigating bad weather. Snow especially confounds crucial sensor data that helps a vehicle gauge depth, find obstacles and keep on the correct side of the yellow line, assuming it is visible. Averaging more than 200 inches of snow every winter, Michigan’s Keweenaw Peninsula is the perfect place to push autonomous vehicle tech to its limits.
In two papers presented at SPIE Defense + Commercial Sensing 2021, researchers from Michigan Technological University discuss solutions for snowy driving scenarios that could help bring self-driving options to snowy cities like Chicago, Detroit, Minneapolis and Toronto.
The team includes Nathir Rawashdeh and doctoral student Abu-Alrub (CC) as well as Jeremy Bos and student researchers Akhil Kurup, Derek Chopp and Zach Jeffries (ECE).
Read more about their collaborative mobility research on mtu.edu/news.
Nathir Rawashdeh (AC) led the publication of a paper at the recent online SPIE Defense + Commercial Sensing / Autonomous Systems 2021 Conference.
The paper, entitled “Drivable path detection using CNN sensor fusion for autonomous driving in the snow,” targets the problem of drivable path detection in poor weather conditions including on snow-covered roads. The authors used artificial intelligence to perform camera, radar and LiDAR sensor fusion to detect a drivable path for a passenger car on snow-covered streets. A companion video is available.
Co-authors include Jeremy Bos (ECE).
Call for Manuscripts:
Special Issue on Fault Tolerance in Cloud/Edge/Fog Computing in Future Internet, an international peer-reviewed open access monthly journal published by MDPI.
April 20, 2021
June 10, 2021
Guest Editors:
- Dr. Ali Ebnenasir, Michigan Technological University
- Dr. Sandeep S. Kulkarni, Michigan State University
Keywords:
- Fault tolerance
- Cloud computing
- Edge computing
- Resource-constrained devices
- Distributed protocols
- State replication
Topics include, but are not limited to:
- Faults and failures in cloud and edge computing.
- State replication on edge devices under the scarcity of resources.
- Fault tolerance mechanisms on the edge and in the cloud.
- Models for the prediction of service latency and costs in distributed fault-tolerant protocols on the edge and in the cloud.
- Fault-tolerant distributed protocols for resource management of edge devices.
- Fault-tolerant edge/cloud computing.
- Fault-tolerant computing on low-end devices.
- Load balancing (on the edge and in the cloud) in the presence of failures.
- Fault-tolerant data-intensive applications on the edge and in the cloud.
- Metrics and benchmarks for the evaluation of fault tolerance mechanisms in cloud/edge computing.
The Internet of Things (IoT) has brought a new era of computing that permeates almost every aspect of our lives. Low-end IoT devices (e.g., smart sensors) are almost everywhere, monitoring and controlling the private and public infrastructure (e.g., home appliances, urban transportation, water management systems) of our modern life. Low-end IoT devices communicate enormous amounts of data to cloud computing centers through intermediate devices, a.k.a. edge devices, that benefit from stronger computational resources (e.g., memory, processing power).
To enhance the throughput and resiliency of such a three-tier architecture (i.e., low-end devices, edge devices and the cloud), it is desirable to perform some tasks (e.g., storing shared objects) on edge devices instead of delegating everything to the cloud. Moreover, any sort of failure in this three-tier architecture would undermine the quality of service and the reliability of services provided to the end users.
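As a toy sketch of the kind of fault tolerance the call describes (hypothetical names and a simulated failure model, not a protocol from any cited work), a quorum write can survive the loss of individual edge replicas by succeeding as long as enough nodes acknowledge:

```python
import random

class Node:
    """A replica (edge device or cloud) that may fail on access."""
    def __init__(self, name, fail_rate):
        self.name, self.fail_rate = name, fail_rate
        self.store = {}

    def put(self, key, value):
        if random.random() < self.fail_rate:  # simulated outage
            raise ConnectionError(f"{self.name} unavailable")
        self.store[key] = value

def replicated_put(key, value, replicas, quorum):
    """Write to every replica; succeed once a quorum acknowledges."""
    acks = 0
    for node in replicas:
        try:
            node.put(key, value)
            acks += 1
        except ConnectionError:
            pass  # tolerate individual replica failures
    if acks < quorum:
        raise RuntimeError("write failed: quorum not reached")
    return acks

# One edge device is down, yet the write still reaches a quorum of 2.
edge_a = Node("edge-1", fail_rate=0.0)
edge_b = Node("edge-2", fail_rate=1.0)  # always unavailable
cloud = Node("cloud", fail_rate=0.0)
acks = replicated_put("sensor-42", 17.5, [edge_a, edge_b, cloud], quorum=2)
```

Real protocols must also handle read quorums, conflicting versions, and recovery of failed replicas, which is precisely the design space this Special Issue targets.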
This Special Issue invites theoretical and experimental methods that incorporate fault tolerance in cloud and edge computing and have the potential to improve the overall robustness of services in three-tier architectures.
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website (https://www.mdpi.com/user/login/). Once registered, go to the submission form (https://susy.mdpi.com/user/manuscripts/upload/?journal=futureinternet).
Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page.
Please visit the Instructions for Authors page before submitting a manuscript.
The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English.
Authors may use MDPI’s English editing service prior to publication or during author revisions.
eLife, a prestigious journal in cell biology, has published a paper co-authored by Sangyoon Han, titled “Pre-complexation of talin and vinculin without tension is required for efficient nascent adhesion maturation.”
eLife is a non-profit organization created by funders and led by researchers. Their mission is to accelerate discovery by operating a platform for research communication that encourages and recognizes the most responsible behaviors.
A paper co-authored by Sidike Paheding, Applied Computing, has been published in the journal IEEE Access. “Trends in Deep Learning for Medical Hyperspectral Image Analysis” was available for early access on March 24, 2021.
The paper discusses the implementation of deep learning for medical hyperspectral imaging.
Co-authors of the paper are Uzair Khan, Colin Elkin, and Vijay Devabhaktuni, all with the Department of Electrical and Computer Engineering, Purdue University Northwest.
Deep learning algorithms have seen rapid growth of interest in their applications throughout several fields over the last decade, with medical hyperspectral imaging being a particularly promising domain. So far, to the best of our knowledge, there is no review paper that discusses the implementation of deep learning for medical hyperspectral imaging, which is what this work aims to accomplish by examining publications that currently utilize deep learning to perform effective analysis of medical hyperspectral imagery.
This paper discusses deep learning concepts that are relevant and applicable to medical hyperspectral imaging analysis, several of which have been implemented since the boom in deep learning. It comprises a review of the use of deep learning for classification, segmentation, and detection in the analysis of medical hyperspectral imagery. Lastly, we discuss the current and future challenges pertaining to this discipline and possible efforts to overcome them.
IEEE Access is a multidisciplinary, applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE’s fields of interest. Supported by article processing charges, its hallmarks are a rapid peer review and publication process with open access to all readers.
A scholarly paper co-authored by Assistant Professor Sidike Paheding, Applied Computing, has been published in the April 2021 issue of the ISPRS Journal of Photogrammetry and Remote Sensing, published by Elsevier.
The title of the paper is, “Field-scale crop yield prediction using multi-temporal WorldView-3 and PlanetScope satellite data and deep learning.”
Paheding is a member of the Institute of Computing and Cybersystems (ICC) Center for Data Sciences.
The paper, “Optimal-Time Dynamic Planar Point Location in Connected Subdivisions,” describes an optimal-time solution for the dynamic point location problem and answers an open problem in computational geometry.
The data structure described in the paper supports queries and updates in logarithmic time. This result is optimal in some models of computation. Nekrich is the sole author of the publication.
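The paper's data structure is far more involved, but as a hedged illustration of logarithmic-time dynamic search (a treap, not the structure from the paper), both insertions and predecessor queries run in expected O(log n):

```python
import random

class TreapNode:
    """Node of a treap: a BST by key, a max-heap by random priority."""
    __slots__ = ("key", "pri", "left", "right")
    def __init__(self, key):
        self.key, self.pri = key, random.random()
        self.left = self.right = None

def split(t, key):
    """Split treap t into (keys < key, keys >= key)."""
    if t is None:
        return None, None
    if t.key < key:
        l, r = split(t.right, key)
        t.right = l
        return t, r
    l, r = split(t.left, key)
    t.left = r
    return l, t

def merge(a, b):
    """Merge treaps a and b, where all keys in a precede those in b."""
    if a is None:
        return b
    if b is None:
        return a
    if a.pri > b.pri:
        a.right = merge(a.right, b)
        return a
    b.left = merge(a, b.left)
    return b

def insert(t, key):
    l, r = split(t, key)
    return merge(merge(l, TreapNode(key)), r)

def predecessor(t, key):
    """Largest stored key <= key, or None if no such key exists."""
    best = None
    while t:
        if t.key <= key:
            best = t.key
            t = t.right
        else:
            t = t.left
    return best

t = None
for k in [5, 1, 9, 3]:
    t = insert(t, k)
```

The random heap priorities keep the tree balanced in expectation, so every update and query touches O(log n) nodes; Nekrich's result achieves such bounds deterministically for the much harder planar point location problem.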
The annual ACM Symposium on Theory of Computing (STOC) is the flagship conference of SIGACT, the Special Interest Group on Algorithms and Computation Theory, a special interest group of the Association for Computing Machinery (ACM).
A scholarly paper co-authored by Assistant Professor Sidike Paheding, Applied Computing, is one of two papers to receive the 2020 Best Paper Award from the open-access journal Electronics, published by MDPI.
The paper presents a brief survey on the advances that have occurred in the area of Deep Learning.
Co-authors of the article, “A State-of-the-Art Survey on Deep Learning Theory and Architectures,” are Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Mst Shamima Nasrin, Mahmudul Hasan, Brian C. Van Essen, Abdul A. S. Awwal, and Vijayan K. Asari. The paper was published March 5, 2019, appearing in volume 8, issue 3, page 292, of the journal.
Papers were evaluated for originality and significance, citations, and downloads. The authors receive a monetary award, a certificate, and an opportunity to publish one paper free of charge before December 31, 2021, following the normal peer review procedure.
Electronics is an international peer-reviewed open access journal on the science of electronics and its applications. It is published online semimonthly by MDPI.
MDPI, a scholarly open access publishing venue founded in 1996, publishes 310 diverse, peer-reviewed, open access journals.
In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning when compared to traditional machine learning approaches in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many others.
This paper presents a brief survey of the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we discuss recent developments, such as advanced variant DL techniques based on these DL approaches. This work considers most of the papers published since 2012, when the modern era of deep learning began.
Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey. We also include recently developed frameworks, SDKs, and benchmark datasets that are used for implementing and evaluating deep learning approaches. Some surveys have been published on DL using neural networks, along with a survey on Reinforcement Learning (RL). However, those papers have not discussed individual advanced techniques for training large-scale deep learning models or the recently developed methods for generative models.