Day: February 14, 2020

Faculty Candidate Songtao Lu to Present Lecture March 2

The Colleges of Computing and Engineering invite the campus community to a lecture by faculty candidate Songtao Lu, Monday, March 2, 2020, at 3:00 p.m., in Chem Sci 102. Lu’s talk is titled, “Nonconvex Min-Max Optimization for Machine Learning.”

Songtao Lu is an AI resident at IBM Research AI, IBM Thomas J. Watson Research Center. His research interests include optimization, artificial intelligence, machine learning, and neural networks. Lu received his Ph.D. degree in electrical and computer engineering from Iowa State University in 2018, and he was a post-doctoral associate with the ECE department at the University of Minnesota Twin Cities from 2018 to 2019.

We live in an era of data explosion. Rapid advances in sensor, communication, and storage technologies have made data acquisition more ubiquitous than ever before. Making sense of data at such a scale is expected to bring ground-breaking advances across many industries and disciplines. 

However, to effectively handle data of such scale and complexity, and to better extract information from quintillions of bytes of data for inference, learning, and decision-making, increasingly complex mathematical models are needed. These models, often highly nonconvex, unstructured, and with millions or even billions of variables, render existing solution methods inapplicable.

Lu will present work on designing accurate, scalable, and robust algorithms for solving nonconvex machine learning problems. He will discuss the theoretical and practical properties of a class of gradient-based algorithms for solving a popular family of nonconvex min-max problems.
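As a general illustration (not necessarily Lu's specific formulation), problems in this family take the form

```latex
\min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \; f(x, y),
```

where $f$ is nonconvex in $x$ (and possibly nonconcave in $y$). Gradient-based methods for such problems typically alternate descent steps on $x$ with ascent steps on $y$.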

Finally, Lu will showcase the practical performance of these algorithms in applications such as poisoning attacks on neural networks, decentralized neural network training, and constrained Markov decision processes. He will briefly introduce ideas for possible extensions of his framework to other areas.

Lu is a recipient of the Iowa State University Graduate and Professional Student Senate Research Award (2015), the Research Excellence Award from the Graduate College of Iowa State (2017), and student travel awards from ICML and AISTATS.


Faculty Candidate Tao Li to Present Lecture February 27

The Colleges of Computing and Engineering invite the campus community to a lecture by faculty candidate Tao Li on Thursday, February 27, 2020, at 3:00 p.m. in Fisher 325. His talk is titled, “Security and Privacy in the Era of Artificial Intelligence of Things.”

Tao Li is a Ph.D. candidate in computer engineering in the School of Electrical, Computer and Energy Engineering at Arizona State University. He received an M.S. in computer science and technology from Xi’an Jiaotong University in 2015, and a B.E. in software engineering from Hangzhou Dianzi University in 2012. His research focuses on cybersecurity and privacy, indoor navigation systems for visually impaired people, and mobile computing.

The Artificial Intelligence of Things (AIoT) combines artificial intelligence (AI) technologies with the Internet of Things (IoT) infrastructure. By 2025, the number of IoT devices in use is estimated to reach 75 billion.

As AIoT plays an increasingly significant role in our everyday lives, its security and privacy have become a critical concern for the research community and the public and private sectors.

In his talk, Li will introduce his recent research on the protection of AIoT devices. He will present a novel system that automatically locks mobile devices against data theft, and discuss a touchscreen keystroke attack based on video capture of the victim’s eye movements. Li will also briefly introduce additional projects of interest.

Li has served as a reviewer for journals and conferences including IEEE TMC, IEEE TWC, ACM MobiHoc, and IEEE INFOCOM.


Faculty Candidate Brian Yuan to Present Lecture February 26

The Colleges of Computing and Engineering invite the campus community to a lecture by faculty candidate Xiaoyong (Brian) Yuan on Wednesday, February 26, 2020, at 3:00 p.m. in Chem Sci 101. Yuan’s talk is titled, “Secure and Privacy-Preserving Machine Learning: A Case Study on Model Stealing Attacks Against Deep Learning.”

Brian Yuan is a computer science Ph.D. candidate at the University of Florida. He received an M.E. degree in computer engineering from Peking University in 2015, and a B.S. degree in mathematics from Fudan University in 2012. His research interests span the fields of deep learning, machine learning, security and privacy, and cloud computing.

In his talk, Yuan will provide an overview of security and privacy issues in deep learning, then focus on his recent research on a data-agnostic model stealing attack against deep learning. He will conclude with a discussion of potential countermeasures and future research directions for addressing security and privacy concerns in deep learning.

Thanks to recent breakthroughs, machine learning, and especially deep learning, is now pervasive in areas such as autonomous driving, game playing, and virtual assistants. Recently, however, significant security and privacy concerns have been raised about deploying deep learning algorithms.

On one hand, deep learning algorithms are fragile and easily fooled by attacks. For example, an imperceptible perturbation of a traffic sign can mislead an autonomous driving system. On the other hand, with the increasing use of deep learning in personalization, virtual assistants, and healthcare, deep learning models may expose users’ sensitive and confidential information.
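To make the "imperceptible perturbation" idea concrete, here is a minimal sketch in the style of the well-known fast gradient sign method (not a method from Yuan's talk). The input, gradient values, and step size are hypothetical; in a real attack the gradient would come from the target model's loss.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Shift each input feature by eps in the direction that increases the loss.

    x    -- input features (e.g., image pixels)
    grad -- gradient of the model's loss with respect to x
    eps  -- perturbation budget; small values keep the change imperceptible
    """
    return x + eps * np.sign(grad)

# Hypothetical input and loss gradient
x = np.array([0.5, 0.2, 0.8])
grad = np.array([0.1, -0.3, 0.0])

x_adv = fgsm_perturb(x, grad, eps=0.05)  # each feature moves by at most 0.05
```

Although each feature changes by at most `eps`, such perturbations can be enough to flip a deep model's prediction.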

Because of their business value, deep learning models have become essential components of various commercialized machine learning services, such as Machine Learning as a Service (MLaaS). Model stealing attacks aim to extract a functionally equivalent copy of a deep learning model, breaching the confidentiality and integrity of deep learning algorithms. Most existing model stealing attacks require private training data or auxiliary data from service providers, which significantly limits their impact and practicality. Yuan proposes a much more practical attack that removes the hurdle of training data, and its effectiveness will be showcased on several widely used datasets.
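The core idea of model stealing can be sketched in a toy setting under strong simplifying assumptions: the "victim" below is a linear model rather than a deep network, and the attacker fits a surrogate from query responses alone, without any of the victim's training data. This is an illustration of the general attack concept, not Yuan's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "victim" model: the attacker can only observe outputs.
w_secret = np.array([2.0, -1.0, 0.5])
def victim(x):
    return x @ w_secret

# Attacker generates synthetic query inputs and records the victim's responses.
X_queries = rng.normal(size=(100, 3))
y_responses = victim(X_queries)

# Fit a surrogate model to the query/response pairs via least squares,
# recovering a functionally equivalent copy of the victim.
w_stolen, *_ = np.linalg.lstsq(X_queries, y_responses, rcond=None)
```

Against a deep network the surrogate would itself be a neural network trained on the query/response pairs, but the query-then-imitate structure is the same.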

Yuan has published 17 papers in top-tier journals and conferences, such as IEEE Transactions on Neural Networks and Learning Systems (TNNLS) and the AAAI Conference on Artificial Intelligence (AAAI). He has served as reviewer for several leading journals and conferences, such as IEEE Transactions on Neural Networks and Learning Systems (TNNLS), International Conference on Learning Representations (ICLR), IEEE Transactions on Dependable and Secure Computing (TDSC), and IEEE Transactions on Parallel and Distributed Systems (TPDS).
