Faculty Candidate Fan Chen to Present Lecture February 10

The Colleges of Computing and Engineering invite the campus community to a lecture by faculty candidate Fan Chen on Monday, February 10, 2020, at 3:00 p.m. in Chem. Sci. 102. Chen’s talk is titled “Efficient Hardware Acceleration of Unsupervised Deep Learning.”

Chen is a Ph.D. candidate in the Department of Electrical and Computer Engineering at Duke University, where she is advised by Professor Yiran Chen and Professor Hai “Helen” Li. Her research interests include computer architecture, emerging nonvolatile memory technologies, and hardware accelerators for machine learning. Chen won the Best Paper Award and the Ph.D. Forum Best Poster Award at ASP-DAC 2018, and she is a recipient of the 2019 Cadence Women in Technology Scholarship.

Abstract: Recent advances in deep learning are at the core of the latest revolution in various artificial intelligence (AI) applications, including computer vision, autonomous systems, medicine, and other key aspects of human life. Current mainstream supervised learning relies heavily on the availability of labeled training data, which is often prohibitively expensive to collect and accessible to only a few industry giants. Unsupervised learning, exemplified by Generative Adversarial Networks (GANs), is seen as an effective technique for obtaining learned representations from unlabeled data. However, the efficient execution of GANs poses a major challenge to the underlying computing platform.

In her talk, Chen will discuss her work devising a comprehensive full-stack solution for enabling GAN training in emerging resistive-memory-based main memory. She proposes a zero-free dataflow and a pipelined/parallel training method to improve resource utilization and computation efficiency. Chen will also introduce an inference accelerator that enables trained deep learning models to run on edge devices with limited resources. Finally, she will discuss her vision of incorporating hardware acceleration for emerging compact deep learning models, large-scale decentralized training models, and other application areas.
