Month: April 2021

Dr. Ali Yekkehkhany to Present Talk May 6


Dr. Ali Yekkehkhany, a postdoctoral scholar at the University of California, Berkeley, will present a talk on Thursday, May 6, 2021, at 3:00 p.m.

He will discuss adversarial attacks on the computation of reinforcement learning and risk-aversion in games and online learning.

Dr. Yekkehkhany’s research interests include machine/reinforcement learning, queueing theory, applied probability theory and stochastic processes.

Join the virtual talk here.

Talk Title

Adversarial Reinforcement Learning, Risk-Averse Game Theory and Online Learning with Applications to Autonomous Vehicles and Financial Investments

Talk Abstract

In this talk, we discuss:

  • a) Adversarial attacks on the computation of reinforcement learning: The emergence of cloud, edge, and fog computing has incentivized agents to offload the large-scale computation of reinforcement learning models to distributed servers, giving rise to edge reinforcement learning (RL). Owing to the inherently distributed nature of edge RL, the swift shift to this technology brings a host of new adversarial attack challenges that can be catastrophic in safety-critical applications. A natural malevolent attack could be to contaminate the RL computation such that the contraction property of the Bellman operator is undermined in the value/policy iteration methods. This can result in luring the agent to search among suboptimal policies without improving the true values of policies. We prove that under certain conditions, the attacked value/policy iteration methods converge to the vicinity of the optimal policy with high probability if the number of value/policy evaluation iterations is larger than a threshold that is logarithmic in the inverse of a desired precision. (A toy numerical sketch of this attacked value-iteration setting appears after this list.)
  • b) Risk-aversion in games and online learning: The fast-growing market of autonomous vehicles, unmanned aerial vehicles, and fleets in general necessitates the design of smart and automatic navigation systems considering the stochastic latency along different paths in a traffic network. To our knowledge, the existing navigation systems including Google Maps, Waze, MapQuest, Scout GPS, Apple Maps, and others are based on minimizing the expected travel time, ignoring the path delay uncertainty. To put the travel time uncertainty into perspective, we model the decision making of risk-averse travelers in a traffic network by an atomic stochastic congestion game and propose three classes of risk-averse equilibria. We show that the Braess paradox may not occur to the extent presented originally and the price of anarchy can be improved, benefiting the society, when players travel according to risk-averse equilibria rather than the Wardrop/Nash equilibrium. Furthermore, we extend the idea of risk-aversion to online learning; in particular, risk-averse explore-then-commit multi-armed-bandits. We use data from the New York Stock Exchange (NYSE) to show that the classical mean-variance and conditional value at risk approaches can come short in addressing risk-aversion for financial investments. We introduce new venues to study risk-aversion by taking the probability distributions into account rather than the summarized statistics of distributions.
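
A toy sketch of the setting in item a) above, assuming a small randomly generated MDP and a bounded perturbation of each Bellman backup; this is illustrative only and is not code from the talk:

```python
# Illustrative only: value iteration on a tiny random MDP where each Bellman
# backup is corrupted by bounded noise (a crude stand-in for a contaminated
# edge-RL computation). The noisy iterates stay near the true optimal values.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # R[s, a]

def bellman(V):
    """Exact Bellman optimality backup."""
    return np.max(R + gamma * (P @ V), axis=1)

# Clean value iteration for reference.
V_star = np.zeros(n_states)
for _ in range(500):
    V_star = bellman(V_star)

# Attacked value iteration: each backup is perturbed by bounded noise.
eps = 0.05
V = np.zeros(n_states)
for _ in range(500):
    V = bellman(V) + rng.uniform(-eps, eps, size=n_states)

# The gap stays well within eps / (1 - gamma), the "vicinity" of the optimum.
print(np.max(np.abs(V - V_star)), eps / (1 - gamma))
```

Because the Bellman operator is a γ-contraction, a per-iteration perturbation of size ε can shift the limiting values by at most roughly ε/(1 − γ), which is the "vicinity" of the optimum that the abstract refers to.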

Biography

Ali Yekkehkhany is a postdoctoral scholar with the Department of Industrial Engineering and Operations Research, University of California, Berkeley. He received his PhD and MSc degrees in Electrical and Computer Engineering from the University of Illinois, Urbana-Champaign (UIUC) in 2020 and 2017, respectively, and BSc degree in Electrical Engineering from Sharif University of Technology in 2014.

He is the recipient of the “best poster award in recognition of high-quality research, professional poster, and outstanding presentation” in the 15th CSL Student Conference, 2020, and the “Harold L. Olesen award for excellence in undergraduate teaching by graduate students” in the 2019-2020 academic year at UIUC. He was chosen as “teachers ranked as excellent” twice and “teachers ranked as excellent and outstanding” twice at UIUC.

His research interests include machine/reinforcement learning, queueing theory, applied probability theory and stochastic processes.

Students Place in ICPC Programming Championships


A team of Michigan Tech students competed last week in the International Collegiate Programming Contest (ICPC) North America Division Championships, placing 28th out of 42 teams in the Central Division.

To qualify for the Championships, a Michigan Tech student team placed 14th out of more than 80 teams in the regional ICPC contest this February. Students on that team were Alex Gougeon (Software Engineering), Ben Wireman (Mathematics), and Dominika Bobik.

Students interested in the programming competitions are encouraged to contact Dr. Laura Brown, Computer Science. Additional programming contests and events take place throughout the year.

The International Collegiate Programming Contest is the premier worldwide algorithmic programming contest for college students.

In ICPC competitions, teams of three students work to solve the most real-world problems efficiently and correctly. Teams represent their university in multiple levels of competition: regionals, divisionals, championships, and world finals.

Dr. Dukka KC, Wichita State, to Present Talk May 5


Dr. Dukka KC, Electrical Engineering and Computer Science, Wichita State University, will present a talk on Wednesday, May 5, 2021, at 3:00 p.m.

Dr. KC will discuss some past and ongoing projects in his lab related to machine learning/deep learning-based approaches for an important problem in Bioinformatics: protein post-translational modification.

Join the virtual talk here.

Talk Title

Bioinformatics as an emerging field of Data Science: Protein post-translation modification prediction using Deep Learning

Talk Abstract

In this talk, I will present some of the past and ongoing projects in my lab, especially those related to Machine Learning/Deep Learning-based approaches for one of the important problems in Bioinformatics: protein post-translational modification.

In particular, I will focus on our efforts to move away from manual (hand-crafted) feature extraction from protein sequences, our use of transfer learning to address the scarcity of labeled data in the field, and stacking/ensemble-based approaches.

I will also summarize our future plans for applying multi-label, multi-task, and multi-modal learning to the problem. I will highlight some of our ongoing preliminary work in disaster resiliency. Finally, I will provide my vision for strengthening data science-related research, teaching, and service for MTU’s College of Computing.
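
As a generic illustration of the stacking/ensemble idea mentioned in the abstract, and not Dr. KC’s actual models or features, the sketch below stacks two base classifiers with scikit-learn on synthetic data standing in for windowed protein-sequence features:

```python
# Generic stacking sketch; the synthetic data is a stand-in for protein
# sequence windows labeled as modified / not modified.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner over base predictions
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```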

Biography

Dr. Dukka KC is the Director of the Data Science Lab, Director of Data Science Efforts, and Director of the Disaster Resilience Analytics Center, as well as Associate Professor in the Department of Electrical Engineering and Computer Science (EECS), at Wichita State University. His current work focuses on applying computing and data science concepts, including machine learning, deep learning, and high-performance computing (HPC), to elucidating the relationships among protein sequence, structure, function, and evolution, among other problems.

He has received grant funds totaling $4.25M as PI or Co-PI across 17 funded grants. He was the PI on a $499K NSF Excellence in Research project focused on developing deep learning-based approaches for protein post-translational modification sites.

He received his B.E. in computer science in 2001, his M.Inf. in 2003, and his Ph.D. in Informatics (Bioinformatics) in 2006 from Kyoto University, Japan. Subsequently, he did a postdoc at the Georgia Institute of Technology, working on refinement algorithms for protein structure prediction. He then moved to UNC Charlotte for another postdoc, working on functional site prediction in proteins. He was a CRTA Fellow at the National Cancer Institute of the National Institutes of Health, where he worked on intrinsically symmetric domains.

Prior to his arrival at WSU, he was associate professor and graduate program director in the Department of Computational Science and Engineering at North Carolina A&T State University.

Dr. KC has published more than 30 journal and 20 conference papers in the field and is associate editor for two leading journals (BMC Bioinformatics and Frontiers in Bioinformatics) in the field. He also dedicates much of his efforts to K-12 education, STEM workforce development, and increasing diversity in engineering and science.

Grad Students Take 6th Place in Navy’s AI Tracks at Sea Challenge

by Karen S. Johnson, Communications Director, College of Computing


The Challenge

Four Michigan Tech graduate students recently took 6th place in the U.S. Navy’s Artificial Intelligence (AI) Tracks at Sea Challenge, receiving a $6,000 prize.

The Challenge solicited software solutions to automatically generate georeferenced tracks of maritime vessel traffic based on data recorded from a single electro-optical camera imaging the traffic from a moving platform.

Each Challenge team was presented with a dataset of recorded camera imagery of vessel traffic, along with the recorded GPS track of a vessel of interest that is seen in the imagery.

Graduate students involved in the challenge were Zach DeKraker and Nicholas Hamilton, both Computer Science majors advised by Tim Havens; Evan Lucas, Electrical Engineering, advised by Zhaohui Wang; and Steven Whitaker, Electrical Engineering.

Submitted solutions were evaluated against additional camera data not included in the competition testing set in order to verify generalization of the solutions. Judging was based on track accuracy (70%) and overall processing time (30%).

“We never got our final score, but we were the ‘first runner up’ team,” says Lucas. “Based on our testing before sending it, we think it worked well most of the time and occasionally tracked a seagull or the wrong boat.”

The total $200,000 prize was distributed among five winning teams, which submitted full working solutions, and three runners-up, which submitted partial working solutions.

The Challenge was sponsored by the Naval Information Warfare Center (NIWC) Pacific and the Naval Science, Technology, Engineering, and Mathematics (STEM) Coordination Office, and managed by the Office of Naval Research. Its goal was to engage with the workforce of tomorrow on challenging and relevant naval problems, with the immediate need to augment unmanned surface vehicles’ (USVs’) maritime contact tracking capability.

The Problem

“The problem presented was to find a particular boat in a video taken of a harbor, and track its GPS coordinates,” says Zach DeKraker. “We were provided with samples of other videos along with the target boat’s GPS coordinates for that video, which we were able to use to come up with a mapping from pixels to GPS coordinates.”

“Basically, we wanted to track boats with a video camera,” adds ECE graduate student Steven Whitaker. “Our team used machine learning and computer vision to do this. At weekly meetings we brainstormed approaches to tackling the problem, and at regular work sessions, together we programmed it all and produced a white paper with the technical details.”

Whitaker says the competition tied in pretty closely to work the students have already done. “We had a good majority of the code already written. We just needed to fit everything together and add in a few more details and specialize it for the AI Tracks at Sea research,” he explains.

Competitions like this one often connect directly or indirectly with a student’s academic and career goals.

“It’s good to not be pigeon-holed, and to use our knowledge in a different scenario,” Steven Whitaker says of these opportunities. “This helps us remember that there are other things in the world other than our small section of research.”

Dividing Responsibilities

The team knew that there were two primary issues at hand. First, how can pixel coordinates be translated into GPS coordinates? And second, how can the boat be located in the imagery so that its pixel coordinates can be determined?

“Once we broke it down into these two subproblems, it became pretty clear how to solve each half,” DeKraker says. “Steven had already done a significant amount of work mapping pixel coordinates into GPS coordinates, so we had a pretty quick answer to subproblem one.”
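
A minimal sketch of that pixel-to-GPS step, assuming a handful of hypothetical calibration points with known pixel and GPS coordinates; this is illustrative and is not the team’s code:

```python
# Hedged sketch of a pixel-to-GPS mapping: fit a homography from a few
# reference points with known pixel and GPS coordinates, then map new pixel
# detections to latitude/longitude. All coordinates below are made up.
import cv2
import numpy as np

pixel_pts = np.array([[100, 400], [800, 420], [450, 200], [300, 350]],
                     dtype=np.float32)
gps_pts = np.array([[32.7021, -117.2340], [32.7023, -117.2310],
                    [32.7030, -117.2325], [32.7024, -117.2331]],
                   dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, gps_pts)

def pixel_to_gps(x, y):
    """Map a single pixel coordinate to an approximate GPS coordinate."""
    pt = np.array([[[x, y]]], dtype=np.float32)   # shape (1, 1, 2)
    lat, lon = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(lat), float(lon)

print(pixel_to_gps(512, 300))
```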

AI Tracks at Sea Flowchart

The team met weekly to discuss their ideas for the project and to compare how effective each would be as a solution to the problem at hand. Then they got together on Fridays or over the weekend to work on the project.

“Dr. Havens would come in to our weekly meetings and nudge us in the right direction or give tips on what we should do and what we should avoid,” Whitaker adds.

For subproblem two, after some discussion the group decided it was probably best to use a machine learning approach, as that promised the most significant gains for the least amount of effort, which was important given the tight schedule.

“We tried some different sub-projects independently and then worked together to combine the parts we thought worked best,” Evan Lucas says.

The Solution

To identify the boat and track its movement, the team used a simple neural network and a computer vision technique called optical flow, which made the analysis much faster and cleaner. They used a pre-built algorithm, adding a bit of optical flow so that the boat’s position didn’t have to be verified every time.

AI Tracks at Sea Neural Net Summary

“These two tools allowed us to find the pixel coordinates of the boat and turn them into GPS coordinates,” says DeKraker, whose primary role in the project was integrating the two tools and packaging the solution for testing.

“Part of my PhD is to map out a snowmobile’s GPS coordinates with a camera,” Whitaker says. “This is extremely similar to mapping out a boat’s GPS coordinates. I could even say that it was exactly the same. I don’t believe I’ll add anything new, but I’ve tweaked it to work for my research.”

Whitaker sums up the team’s division of responsibilities like this: “Evan detects all the boats in the picture; Nik detects which of those boats is our boat; Steven takes our boat position and converts it to GPS coordinates, Zach glued all of our pieces together.”

“One of the things the judges stressed was the ease of implementing the solution. Since that falls under what I would consider user experience (UX) or user interface (UI), it was pretty natural for me to take these tasks on, having studied software engineering for my undergrad,” DeKraker says.

A primary focus was speed. “Using machine learning for object detection tends to be slow, so to mitigate that we used the boat detector only once every 5 seconds,” DeKraker explains.

“Most of the tracking was done using a very fast technique called optical flow, which looks at the difference between two consecutive frames of a video to track motion,” DeKraker says. “It tended to drift from the target though, so we decided on running the boat detector every 5 seconds to keep optical flow on target.”
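
A rough sketch of that strategy with OpenCV appears below; detect_boat() is a hypothetical stand-in for the team’s neural-network detector, and the input video name is assumed:

```python
# Hedged sketch of "detector every few seconds, optical flow in between".
import cv2
import numpy as np

def detect_boat(frame):
    """Hypothetical stand-in for a trained boat detector.

    Here it just returns the frame center; in practice a neural network
    would return the boat's (x, y) pixel position."""
    h, w = frame.shape[:2]
    return (w / 2.0, h / 2.0)

cap = cv2.VideoCapture("harbor.mp4")      # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30
redetect_every = int(5 * fps)             # re-anchor roughly every 5 seconds

ok, frame = cap.read()
assert ok, "could not read video"
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
point = np.array([detect_boat(frame)], dtype=np.float32).reshape(-1, 1, 2)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_idx += 1
    if frame_idx % redetect_every == 0:
        # Periodically rerun the (slow) detector to correct optical-flow drift.
        point = np.array([detect_boat(frame)], dtype=np.float32).reshape(-1, 1, 2)
    else:
        # Cheap frame-to-frame tracking with Lucas-Kanade optical flow.
        point, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
    prev_gray = gray
    x, y = point[0, 0]
    # (x, y) would then be passed to the pixel-to-GPS mapping step.
```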

“The end result is that our solution could run nearly in real-time,” he says. “The accuracy wasn’t the best, but given a little bit more time and more training data, the neural network could be significantly improved.”

AI Tracks at Sea Homography Transform

Zach DeKraker

DeKraker’s graduate studies focus heavily on various machine learning techniques. He says that the opportunity to integrate machine learning into the team’s solution was a fantastic experience.

“First, it sounded like an interesting challenge. I don’t get to do a lot of software design these days, and this challenge sounded like a great opportunity to do just that,” he explains.

“Second, it looked like a great opportunity to build up my resume a little bit. Saying that you won thousands of dollars for your university in a nationwide competition sounds really good. And finally, I really wanted the chance to see a practical application of machine learning in action.”

DeKraker completed a BS in Software Engineering at Michigan Tech in 2018. He returned to Michigan Tech the next year to complete his master’s degree. He says the biggest reason he did so was to learn more about machine learning.

“Before embarking on this journey, I really didn’t know anything about it,” he says of machine learning. “Having this chance to actually solve a problem, to integrate a neural network into a fully realized boat tracker using nothing but a video helped me see how machine learning can be used practically, rather than merely understanding how it works.”

And although it was a fascinating exploration into the practical side of machine learning and computer vision, DeKraker says it’s rather tangential to his main research focus right now, which is on comparing different network architectures to evaluate which one performs best given particular data and the problem being solved.

DeKraker believes that the culture is the most magnetizing thing about Tech. “Everybody here is cut from the same cloth. We’re all nerds and proud of it,” he explains. “You can have a half-hour conversation with a complete stranger about singularities, the economics of fielding a fleet of star destroyers, or how Sting was forged.”

And the most appealing thing about Michigan Tech was its size, DeKraker says. “When I looked at a ranking of the top universities in Michigan, Tech was number 3, but still extremely small. It was a perfect blend of being a small but very good school.”

And he says the second-best thing about Tech is the location. “The Keweenaw is one of the most beautiful places on earth.”

DeKraker has many ideas about where he’d like to take his career. For instance, he’d love the chance to work for DARPA, Los Alamos National Laboratory, or NASIC. He also intends to commission into the Air Force in the next couple of years, “if they have a place for programmers like me.”

Evan Lucas

Evan Lucas is a PhD candidate in the Electrical Engineering department, advised by Zhaohui Wang. Lucas completed both a bachelor’s and a master’s in Mechanical Engineering at Tech in 2012 and 2014, respectively.

Lucas, whose research interests are in applying machine learning methods to underwater acoustic communication systems, worked on developing a classifier to separate the boat of interest from the many other boats in the image. Although the subject of the competition is tangential to Lucas’s graduate studies, as computer vision isn’t his area, there was some overlap in general machine learning concepts.

“It sounded like a fun challenge to put together an entry and learn more about computer vision,” Lucas says. “Working with the rest of the team was a really good opportunity to learn from people who have experience making software that is used by other people.”

Following completion of his doctoral degree, hopefully in spring 2023, Lucas plans to return to industry in a research-focused role that applies some of the work he did during his PhD.


Steven Whitaker

Steven Whitaker’s research interests are in machine learning and acoustics. He tracks and locates the position of on-ice vehicles, like snowmobiles, based on acoustics. He says he has used some of the results from this competition project in his PhD research.

Whitaker’s machine learning research is experiment-based, and that’s why he chose Michigan Tech. “There aren’t many opportunities in academia to do experiment-based research,” he says. “Most machine learning is very software-focused using pre-made datasets. I love doing the experiments myself. Research is fun. I enjoy getting paid to do what I normally would do in my free time.”

In 2019, Whitaker completed his BS in Electrical Engineering at Michigan Tech. He expects to complete his master’s degree in Electrical Engineering at the end of the summer 2021 semester, and his PhD in summer 2022. His advisors are Tim Havens and Andrew Barnard.

Whitaker would love to be a university professor one day, but first he wants to work in industry.


Background Info

Timothy Havens is associate dean for research, College of Computing; the William and Gloria Jackson Associate Professor of Computer Systems; and director of the Institute of Computing and Cybersystems (ICC). His research interests are in pattern recognition and machine learning, signal and image processing, sensor and data fusion, heterogeneous data mining, and explosive hazard detection.

Michael Roggeman is a professor in the Electrical and Computer Engineering department. His research interests include optics, image reconstruction and processing, pattern recognition, and adaptive and atmospheric optics.

Zhaohui Wang is an associate professor in the Electrical and Computer Engineering department. Her research interests are in communications, signal processing, communication networks, and network security, with an emphasis on underwater acoustic applications.

The Artificial Intelligence (AI) Tracks at Sea Challenge was conducted by the Naval Information Warfare Center (NIWC) Pacific and the Naval Science, Technology, Engineering, and Mathematics (STEM) Coordination Office, and managed by the Office of Naval Research.

View more details about the Challenge competition here: https://www.challenge.gov/challenge/AI-tracks-at-sea/

Watch a Navy webinar about the Challenge here: https://www.youtube.com/watch?v=MjZwvCX4Tx0.

Challenge.gov is a web platform that assists federal agencies with inviting ideas and solutions directly from the public, or “crowd.” This is called crowdsourcing, and it’s a tenet of the Challenge.gov program. The website enables the U.S. government to engage citizen-solvers in prize competitions for top ideas and concepts as well as breakthrough software, scientific and technology solutions that help achieve their agency missions.

This site also provides a comprehensive toolkit, a robust repository of considerations, best practices, and case studies on running public-sector prize competitions as developed with insights from prize experts across government.

New Course: Applied Machine Learning


Summary

  • Course Number: 84859, EET 4996-01
  • Class Times: T/R, 9:30-10:45 am
  • Location: EERC 0723
  • Instructor: Dr. Sidike Paheding
  • Course Levels: Graduate, Undergraduate
  • Prerequisite: Python Programming and basic knowledge of statistics.
  • Preferred knowledge: Artificial Intelligence (CS 4811), Data Mining (CS 4821), or Intro to Data Sciences (UN 5550)

Course Description/Overview

The rapid growth and remarkable success of machine learning can be seen in tremendous advances in technology, contributing to the fields of healthcare, finance, agriculture, energy, education, transportation, and more. This course will emphasize intuition and real-world applications of machine learning (ML) rather than the statistics behind it. Key concepts of popular ML techniques, including deep learning, will be presented along with hands-on exercises. By the end of this course, students will be able to apply a variety of ML algorithms to practical applications.
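
As a flavor of the hands-on approach described above (an illustrative example, not course material), a few lines of scikit-learn are enough to build a toy spam detector, one of the applications listed below:

```python
# Toy spam classifier: TF-IDF text features feeding a logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = ["win a free prize now", "meeting moved to 3 pm",
          "claim your reward today", "lecture notes attached",
          "free vacation offer", "project report due Friday"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)
print(model.predict(["free prize waiting for you", "see attached agenda"]))
```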

Applications Covered

  • Object Detection
  • Digit Recognition
  • Face Recognition
  • Self-Driving Cars
  • Medical Image Segmentation
  • COVID-19 Prediction
  • Spam Email Detection
  • Spectral Signal Categorization

Tools Covered

  • Python
  • scikit-learn
  • TensorFlow
  • Keras
  • OpenCV
  • pandas
  • matplotlib
  • NumPy
  • seaborn
  • Anaconda
  • Jupyter
  • Spyder

Download the course description flyer:

Volunteers Needed for Augmented Reality Study

by Department of Computer Science

We are looking for volunteers to take part in a study exploring how people may interact with future Augmented Reality (AR) interfaces. During the study, you will record videos of yourself tapping on a printed keyboard. The study takes approximately one hour, and you will be paid $15 for your time. You will complete the study at your home.

To participate you must meet the following requirements:

  • You must have access to an Android mobile phone
  • You must have access to a printer
  • You must be a fluent speaker of English
  • You must be 18 years of age or older
  • You must live in the United States

If you would like to take part, please contact rhabibi@mtu.edu.

Dr. Qun Li to Present Lecture April 23, 3 pm


The Department of Computer Science will present a lecture by Dr. Qun Li on Friday, April 23, 2021, at 3:00 p.m. Dr. Li is a professor in the Department of Computer Science at William & Mary. The title of his lecture is “Byzantine Fault Tolerant Distributed Machine Learning.”

Lecture Title

Byzantine Fault Tolerant Distributed Machine Learning

Lecture Abstract

Training a deep neural network requires a large amount of data and substantial computational resources. As a result, more and more deep neural network training implementations in industry are distributed across many machines. Such distributed implementations can also preserve the privacy of data that is collected and stored locally, as in Federated Deep Learning.

It is possible for an adversary to launch Byzantine attacks against distributed or federated deep neural network training. That is, some participating machines may behave arbitrarily or maliciously to deflect the training process. In this talk, I will discuss our recent results on how to make distributed and federated neural network training resilient to Byzantine attacks. I will first show how to defend against Byzantine attacks in a distributed stochastic gradient descent (SGD) algorithm, which is the core of distributed neural network training. Then I will show how we can defend against Byzantine attacks in Federated Learning, which is quite different from distributed training.
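
As a generic illustration of the problem, and not Dr. Li’s algorithm, one well-known family of defenses replaces the naive average of worker gradients with a robust aggregate such as the coordinate-wise median, which a bounded number of Byzantine workers cannot drag arbitrarily far:

```python
# Illustrative only: honest workers report noisy copies of the true gradient,
# Byzantine workers report arbitrary vectors; the coordinate-wise median
# stays close to the true gradient while the mean is badly corrupted.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_byzantine, dim = 10, 3, 5

true_grad = rng.normal(size=dim)
grads = true_grad + 0.1 * rng.normal(size=(n_workers, dim))
grads[:n_byzantine] = 100.0 * rng.normal(size=(n_byzantine, dim))  # attack

mean_agg = grads.mean(axis=0)           # corrupted by the attackers
median_agg = np.median(grads, axis=0)   # robust coordinate-wise aggregate

print("error of mean:  ", np.linalg.norm(mean_agg - true_grad))
print("error of median:", np.linalg.norm(median_agg - true_grad))
```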

Article by Sidike Paheding in Elsevier’s Remote Sensing of Environment


An article by Dr. Sidike Paheding, Applied Computing, has been accepted for publication in the Elsevier journal, Remote Sensing of Environment, a top journal with an impact factor of 9.085. The journal is ranked #1 in the field of remote sensing, according to Google Scholar.

The paper, “Estimation of root zone soil moisture from ground and remotely sensed soil information with multisensor data fusion and automated machine learning,” will be published in Volume 260, July 2021 of the journal. Read and download the article here.

Highlights

  • A machine learning approach to estimation of root zone soil moisture is introduced.
  • Remotely sensed optical reflectance is fused with physical soil properties.
  • The machine learning models well capture in situ measured root zone soil moisture.
  • Model estimates improve when measured near-surface soil moisture is used as input.

Paheding’s co-authors are:

  • Ebrahim Babaeian, Assistant Research Professor, Environmental Science, University of Arizona, Tucson
  • Vijay K. Devabhaktuni, Professor of Electrical Engineering, Department Chair, Purdue University Northwest, Hammond, IN
  • Nahian Siddique, Graduate Student, Purdue University Northwest
  • Markus Tuller, Professor, Environmental Science, University of Arizona

Abstract

Root zone soil moisture (RZSM) estimation and monitoring based on high spatial resolution remote sensing information such as obtained with an Unmanned Aerial System (UAS) is of significant interest for field-scale precision irrigation management, particularly in water-limited regions of the world. To date, there is no accurate and widely accepted model that relies on UAS optical surface reflectance observations for RZSM estimation at high spatial resolution. This study is aimed at the development of a new approach for RZSM estimation based on the fusion of high spatial resolution optical reflectance UAS observations with physical and hydraulic soil information integrated into Automated Machine Learning (AutoML). The H2O AutoML platform includes a number of advanced machine learning algorithms that efficiently perform feature selection and automatically identify complex relationships between inputs and outputs. Twelve models combining UAS optical observations with various soil properties were developed in a hierarchical manner and fed into AutoML to estimate surface, near-surface, and root zone soil moisture. The addition of independently measured surface and near-surface soil moisture information to the hierarchical models to improve RZSM estimation was investigated. The accuracy of soil moisture estimates was evaluated based on a comparison with Time Domain Reflectometry (TDR) sensors that were deployed to monitor surface, near-surface and root zone soil moisture dynamics. The obtained results indicate that the consideration of physical and hydraulic soil properties together with UAS optical observations improves soil moisture estimation, especially for the root zone with a RMSE of about 0.04 cm3 cm−3. Accurate RZSM estimates were obtained when measured surface and near-surface soil moisture data was added to the hierarchical models, yielding RMSE values below 0.02 cm3 cm−3 and R and NSE values above 0.90. The generated high spatial resolution RZSM maps clearly capture the spatial variability of soil moisture at the field scale. The presented framework can aid farm scale precision irrigation management via improving the crop water use efficiency and reducing the risk of groundwater contamination.
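
For context, an H2O AutoML regression workflow of the kind described above has roughly the following shape; the file name and column names here are hypothetical and are not taken from the paper:

```python
# Generic H2O AutoML regression sketch; the data file and feature names below
# are assumed placeholders, not the authors' dataset.
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Hypothetical table: UAS reflectance bands plus soil properties per sample.
data = h2o.import_file("uas_soil_features.csv")
features = ["red", "green", "blue", "nir", "sand_pct", "clay_pct",
            "bulk_density", "surface_sm"]
target = "root_zone_sm"

train, test = data.split_frame(ratios=[0.8], seed=1)

aml = H2OAutoML(max_models=20, max_runtime_secs=600, seed=1)
aml.train(x=features, y=target, training_frame=train)

print(aml.leaderboard.head())
print(aml.leader.model_performance(test).rmse())
```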


Remote Sensing of Environment (RSE) serves the Earth observation community with the publication of results on the theory, science, applications, and technology of remote sensing studies. Thoroughly interdisciplinary, RSE publishes on terrestrial, oceanic and atmospheric sensing. The emphasis of the journal is on biophysical and quantitative approaches to remote sensing at local to global scales.

AI, Mobile Security Grad-level Research Assistant Needed

Dr. Xiaoyong (Brian) Yuan and Dr. Bo Chen are seeking an hourly paid graduate research assistant to work in the areas of artificial intelligence and mobile security. The project is expected to begin Summer 2021 (5/10/2021).

Preferred Qualifications:
1. Passion for research in artificial intelligence and mobile security.
2. Familiarity with Android OS and Android app development.
3. Basic knowledge of machine learning and deep learning.
4. Solid programming skills in Java, Python, or related programming languages.
5. Experience with popular deep learning frameworks, such as PyTorch and TensorFlow, is a plus.

To Apply: Please send a resume and a transcript to Dr. Yuan (xyyuan@mtu.edu).