PhD Student Shiwei Ding, Computer Science, to Present Dissertation Proposal


PhD student Shiwei Ding, Department of Computer Science, will present his dissertation proposal on Thursday, June 20, 2024, at 8 a.m. in Rekhi 101 and via Zoom. The title of Ding's proposal is "Efficient and Privacy-Guaranteed Training/Testing for Deep Neural Networks."

Ding is advised by Professor Zhenlin Wang, Computer Science, and Adjunct Professor Xiaoyong (Brian) Yuan, Clemson University.

Join the Zoom meeting.

Proposal Abstract

Deep neural network (DNN) learning has advanced rapidly in recent years. However, as networks grow larger, a single device can no longer support training an entire large model on its own. Meanwhile, as DNNs are increasingly applied to train on and analyze private, sensitive datasets (e.g., medical and facial data), security and privacy issues during the training and inference phases have become severe and critical.

One popular inference framework is collaborative inference, a promising approach that enables resource-constrained edge devices to perform inference with state-of-the-art DNNs. However, existing perturbation and cryptography techniques are inefficient and unreliable at defending against model inversion attacks (MIAs) while maintaining accurate inference. Therefore, we develop a privacy-oriented pruning framework that balances the privacy, efficiency, and utility of collaborative inference.
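
The sketch below is an illustrative simplification, not the proposal's actual framework: a small CNN is split into an edge-side and a cloud-side part, and channels of the intermediate features sent to the cloud are pruned according to a placeholder utility-to-leakage score (the scores here are random stand-ins).

```python
# Illustrative sketch: split a CNN into edge and cloud parts, then prune
# intermediate-feature channels with the lowest utility-to-leakage ratio.
import torch
import torch.nn as nn

class EdgePart(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

class CloudPart(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.head = nn.Linear(32, 10)

    def forward(self, z):
        h = self.conv(z).mean(dim=(2, 3))  # global average pooling
        return self.head(h)

def prune_channels(features, utility, leakage, keep_ratio=0.5):
    """Zero out the channels whose utility-to-leakage ratio is lowest."""
    score = utility / (leakage + 1e-8)
    k = int(features.shape[1] * keep_ratio)
    keep = score.topk(k).indices
    mask = torch.zeros(features.shape[1], device=features.device)
    mask[keep] = 1.0
    return features * mask.view(1, -1, 1, 1)

edge, cloud = EdgePart(), CloudPart()
x = torch.randn(4, 3, 32, 32)   # a batch of edge-side inputs
z = edge(x)                     # intermediate features to be shared
# Placeholder scores; a real framework would estimate these from data.
utility, leakage = torch.rand(16), torch.rand(16)
z_pruned = prune_channels(z, utility, leakage, keep_ratio=0.5)
logits = cloud(z_pruned)
print(logits.shape)             # torch.Size([4, 10])
```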

In recent years, federated learning (FL) has been widely adopted, yet it suffers from severe privacy vulnerabilities because snapshots of network updates are exposed throughout training. To enable fully privacy-preserving FL and ensure both data and model privacy, Trusted Execution Environments (TEEs) have emerged as a promising solution by isolating code and data within a secure memory enclave. To address the memory limitations of TEEs, we split the entire neural network, distribute subnetworks to clients based on their TEE memory budgets, and enable end-to-end training for global optimization by propagating knowledge among client-side TEEs.
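
As a rough sketch of the partitioning idea (a hypothetical greedy heuristic, not the proposal's algorithm), one can assign contiguous layers of a model to clients so that each client's share of parameters fits within its enclave memory budget:

```python
# Hypothetical heuristic: pack contiguous layers into per-client TEE budgets.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def layer_bytes(layer):
    """Approximate memory footprint of a layer's parameters (float32)."""
    return sum(p.numel() * 4 for p in layer.parameters())

def partition(layers, budgets):
    """Greedily assign contiguous layers to clients' TEE budgets (bytes)."""
    assignment, client, used = [[] for _ in budgets], 0, 0
    for i, layer in enumerate(layers):
        need = layer_bytes(layer)
        while used + need > budgets[client]:
            client += 1          # spill over to the next client's enclave
            used = 0
            if client >= len(budgets):
                raise ValueError("budgets too small for the model")
        assignment[client].append(i)
        used += need
    return assignment

budgets = [2_000_000, 1_000_000, 1_000_000]   # hypothetical enclave budgets
print(partition(list(model), budgets))         # e.g., [[0, 1], [2, 3, 4], []]
```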

Nowadays, fine-tuning transformer-based foundation models is common practice. However, fine-tuning these models on memory-constrained devices remains a significant challenge. Therefore, we develop an efficient fine-tuning framework, Distributed Dynamic Fine-Tuning (D2FT), which dynamically selects attention modules during forward and backward propagation via three proposed selection options. This significantly reduces the computational workload of fine-tuning foundation models. Moreover, to further improve workload balance across devices, we formulate operation scheduling as a multiple-knapsack optimization problem and optimize the fine-tuning schedule using dynamic programming; a simplified single-budget sketch follows.
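
The toy example below is a single-budget 0/1 knapsack simplification of the scheduling idea (the proposal formulates a multiple-knapsack problem across devices): given hypothetical per-module costs and importance scores, a dynamic program selects which attention modules to update within a compute budget.

```python
# Toy 0/1 knapsack via dynamic programming: pick attention modules to update
# so that total cost stays within a compute budget while importance is maximized.
def select_modules(costs, importances, budget):
    """DP over (module, remaining budget); returns indices of chosen modules."""
    n = len(costs)
    # best[b] = (total importance, chosen module indices) using budget b
    best = [(0, [])] * (budget + 1)
    for i in range(n):
        new_best = best[:]
        for b in range(costs[i], budget + 1):
            cand_val = best[b - costs[i]][0] + importances[i]
            if cand_val > new_best[b][0]:
                new_best[b] = (cand_val, best[b - costs[i]][1] + [i])
        best = new_best
    return best[budget][1]

# Hypothetical per-module costs (arbitrary units) and importance scores.
costs = [4, 3, 2, 5]
importances = [7.0, 5.5, 3.0, 8.0]
print(select_modules(costs, importances, budget=9))  # -> [0, 1, 2]
```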