The College of Computing’s Department of Applied Computing invites the campus community to a lecture by MERET faculty candidate Dr. Sidike Paheding on Friday, April 10, 2020, at 3:30 p.m., via an online Zoom meeting. The title of Paheding’s lecture is, “Machine Learning in Multiscale and Multimodal Remote Sensing: From Ground to UAV with a Stop at Satellite through Different Sensors.”
Paheding is currently a visiting assistant professor in the ECE department at Purdue University Northwest. His research interests cover a variety of topics in image/video processing, machine learning, deep learning, computer vision, and remote sensing.
Abstract: Remote sensing data provide timely, non-destructive, instantaneous estimates of the earth’s surface over a large area, and have been accepted as a valuable tool for agriculture, weather, forestry, defense, biodiversity, and more. In recent years, machine learning for remote sensing has gained significant momentum due to advances in algorithm development, computing power, sensor systems, and data availability.
In his talk, Paheding will discuss the potential applications of machine learning in remote sensing from the aspects of different scales and modalities. Research topics such as multimodal data fusion and machine learning for yield prediction, plant phenotyping, augmented reality and heterogeneous agricultural landscape mapping will be covered.
Paheding earned his M.S. and Ph.D. degrees in electrical engineering at the University of South Alabama, Mobile, and the University of Dayton, Ohio, respectively. He was a postdoctoral research associate and assistant research professor in the Remote Sensing Lab at Saint Louis University from 2017 to 2019, prior to joining Purdue University Northwest.
He has advised students at the undergraduate, master’s, and doctoral levels, and has authored or co-authored close to 100 research articles, including several papers in top peer-reviewed journals.
He is an associate editor of the Springer journal Signal, Image, and Video Processing, a guest editor/reviewer for a number of reputed journals, and he has served on international conference committees. He is an invited member of Tau Beta Pi (Engineering Honor Society).
The College of Computing’s Department of Applied Computing invites the campus community to a lecture by faculty candidate Dr. Saleem Ashraf on Tuesday, April 7, 2020, via an online Zoom meeting.
Dr. Ashraf is currently an assistant professor of mechatronics engineering in the ECE department at Sultan Qaboos University, Oman. He received his Ph.D. and M.Sc. degrees in mechatronics engineering from De Montfort University, UK, in 2006 and 2003, respectively, and his B.Sc. in electrical and computer engineering from Philadelphia University, Pa., in 2000.
Ashraf’s research interests are unified under the theme, “developing real-time smart controllers for different engineering systems,” and his research investigates electromechanical, electro-pneumatic, and piezoelectric based systems.
Advancements in the fields of unmanned vehicle systems, artificial intelligence, and computer vision have enabled integrated solutions that could potentially automate many processes.
Ashraf’s seminar presents his research experience in the field of smart and vision-based unmanned vehicle systems, and how this technology has been employed to solve real-life problems in Oman.
The talk will present a selection of Ashraf’s fundamental research work focused on the modeling and control of long-stroke piezoelectric actuators, which are being used widely in micro positioning systems. He will also share his experience in the establishment of the “Embedded & Interconnected Vision Systems” (EIVS) lab.
The second part of Ashraf’s talk will cover his teaching experience, including his teaching philosophy, courses taught, new courses developed, extracurricular activities, and practical projects. He will present his methodology for supervising multidisciplinary final-year projects, with some examples of completed projects. Finally, Ashraf will discuss his ideas about how he can contribute to the Michigan Tech curriculum at all levels, undergraduate and graduate.
Ashraf has been awarded external research grants totaling more than $450K, and three internal grants totaling $58K; he attributes his success in this regard to his development of excellent relations with local industry and Oman’s research council (TRC). The common aim of these research projects is to develop vision-based unmanned vehicles to solve real-life problems such as oil spills in seawater.
He has published more than 45 peer-reviewed papers in reputable journals and at international conferences. He is one of the founders of the “Embedded & Interconnected Vision Systems” (EIVS) lab at Sultan Qaboos University, which was inaugurated this March and funded by BP Oman. The lab hosts equipment for Embedded Vision Systems, Artificial Intelligence (UVS / Robotics), and IoT.
The College of Computing’s Department of Applied Computing invites the campus community to a lecture by MERET faculty candidate Muhammad Fahad on Thursday, April 9, 2020, at 3:30 p.m., via an online Zoom meeting. His talk is titled, “Motion Planning and Control of Autonomous Mobile Robots Using a Model-Free Method.”
Dr. Fahad currently works as a robotics engineer at National Oilwell Varco. He received his M.S. and Ph.D. in electrical engineering from Stevens Institute of Technology, Hoboken, NJ, and his B.S. in electrical engineering from the University of Engineering and Technology, Lahore, Pakistan.
Fahad has extensive experience designing control and automation systems for the process industry using traditional control methods and robots. His research interests include cooperative distributed localization, human robot interaction (HRI), deep reinforcement learning (DRL), deep inverse reinforcement learning (DIRL) and generative adversarial imitation learning (GAIL), simulation tools design, parallel simulation frameworks and multi-agent learning.
Lecture Abstract: Robots play an increasingly important part in our daily lives. Their growing involvement has highlighted the importance of human-robot interaction, specifically robot navigation of environments occupied by humans, such as offices, malls, and airports. Navigation in such complex environments is an important research topic in robotics.
The human motion model consists of several complex behaviors that are difficult to capture with analytical models. Existing analytical models such as the social force model, although commonly used, are unable to generate realistic human motion and do not fully capture the behaviors humans exhibit. These models also depend on various parameters that must be identified and tuned for each new simulation environment.
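To make the parameter-dependence concrete, here is a minimal sketch of a simplified social force model update. The force terms follow the model’s standard structure (a driving force toward a goal plus exponential repulsion from other pedestrians), but all numeric parameters here (`desired_speed`, `tau`, `A`, `B`) are illustrative placeholder values, exactly the kind that would need recalibration for each new environment.

```python
import numpy as np

def social_force_step(pos, vel, goal, others, dt=0.1,
                      desired_speed=1.3, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a simplified social force model.

    Parameter values (speed, relaxation time tau, repulsion strength A
    and range B) are illustrative, not calibrated to any dataset.
    """
    # Driving force: relax the velocity toward the desired speed
    # in the direction of the goal.
    to_goal = goal - pos
    desired_vel = desired_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    force = (desired_vel - vel) / tau

    # Repulsive force from each other pedestrian, decaying
    # exponentially with distance.
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-dist / B) * diff / dist

    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel
```

Even this toy version shows why such models are brittle: each new scene (a mall corridor versus an airport gate) calls for different hand-tuned constants.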
Artificial intelligence has seen a surge of research interest in recent years. Solving problems that are easy for people to perform but difficult to describe formally is one of the main challenges for artificial intelligence. The human navigation problem falls directly into this category: it is hard to define a universal set of rules for navigating an environment shared with other humans and static obstacles.
Reinforcement learning has been used to learn model-free navigation, but it requires a reward function that captures the behaviors the learned navigation policy is intended to exhibit. Designing such a reward function for human-like navigation is infeasible because of the complex nature of human navigation behaviors. The speaker proposes instead to use measured human trajectories to learn both the reward function and the navigation policy that drive human behavior.
Using a database of real-world human trajectories, collected over a period of 90 days inside a mall, we have developed a deep inverse reinforcement learning approach that learns a reward function capturing human motion behaviors. This dataset was also visualized in a robot simulator to generate 3D sensor measurements from a simulated LIDAR sensor onboard the robot. A generative adversarial imitation learning (GAIL) method was then developed to learn the human navigation policy, using these human trajectories as expert demonstrations. The learned navigation policy is shown to replicate human trajectories both quantitatively, in the similarity of traversed trajectories, and qualitatively, in its ability to capture complex human navigation behaviors, including leader-follower behavior, collision avoidance, and group behavior.
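The core idea behind a GAIL-style approach can be illustrated with a toy sketch: a discriminator is trained to tell expert demonstrations from the current policy’s rollouts, and its output is turned into a surrogate reward that is high for expert-like behavior. This is not the speaker’s implementation; the 2D Gaussian clusters below are hypothetical stand-ins for (state, action) features, and the discriminator is a plain logistic regression rather than a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for (state, action) features: expert
# demonstrations cluster around one mode, policy rollouts around another.
expert = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(200, 2))
policy = rng.normal(loc=[-1.0, -1.0], scale=0.3, size=(200, 2))

X = np.vstack([expert, policy])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = expert sample

# Logistic-regression discriminator D(x), trained by gradient ascent
# on the log-likelihood of the expert-vs-policy labels.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += 0.5 * (X.T @ (y - p)) / len(y)
    b += 0.5 * np.mean(y - p)

def surrogate_reward(x):
    """GAIL-style reward: large when D judges x to be expert-like."""
    d = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -np.log(1.0 - d + 1e-9)

# The policy would be updated (e.g. by reinforcement learning) to
# maximize this reward, pushing its rollouts toward expert behavior.
r_expert = surrogate_reward(expert).mean()
r_policy = surrogate_reward(policy).mean()
```

In the full method, the discriminator and the navigation policy are trained in alternation, so the policy gradually earns expert-level reward on its own rollouts.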