New paper at PETS’22!

Our paper "3LegRace: Privacy-Preserving DNN Training over TEEs and GPUs" has been accepted for publication at the 22nd Privacy Enhancing Technologies Symposium.

Leveraging parallel hardware (e.g., GPUs) for deep neural network (DNN) training brings high computing performance. However, it raises data privacy concerns, as GPUs lack a trusted environment to protect the data. Trusted execution environments (TEEs) have emerged as a promising solution for privacy-preserving learning. Unfortunately, the limited computing power of TEEs makes them no match for GPUs in performance. To mitigate the trade-off between privacy, computing performance, and model accuracy, we propose an asymmetric model decomposition framework, AsymML, to (1) accelerate training using parallel hardware and (2) achieve a strong privacy guarantee using TEEs and differential privacy (DP) with much less accuracy compromised. By exploiting the low-rank characteristics in the training data and the intermediate features, AsymML asymmetrically decomposes the data and the intermediate features into low-rank and residual parts. With the decomposed data, the target DNN model is accordingly split into a trusted and an untrusted part. The trusted part performs computations on the low-rank data, with low compute and memory costs. The untrusted part is fed with the residuals, perturbed by very small noise. Privacy, computing performance, and model accuracy are well managed by respectively delegating the trusted and the untrusted parts to TEEs and GPUs. We provide a formal DP guarantee demonstrating that, for the same privacy level, combining asymmetric data decomposition with DP requires much smaller noise than using DP alone without decomposition, which significantly improves the privacy-utility trade-off. Furthermore, we present a rank-bound analysis showing that the low-rank structure is preserved after each layer across the entire model. Our extensive evaluations on DNN models show that AsymML delivers a 7.6× speedup in training compared to TEE-only execution while ensuring privacy. We also demonstrate that AsymML is effective in protecting data under common attacks such as model inversion and gradient attacks.
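For intuition, here is a minimal NumPy sketch of the kind of low-rank/residual split the abstract describes; the function name, rank, and noise parameters are illustrative assumptions, not the paper's actual implementation or API.

```python
import numpy as np

def asymmetric_decompose(x, rank=8, noise_std=0.01):
    """Illustrative sketch (not the paper's code): split a feature matrix
    into a low-rank part (kept in the trusted environment) and a
    noise-perturbed residual (offloaded to untrusted hardware)."""
    # Truncated SVD gives the best rank-`rank` approximation of x.
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]

    # The residual carries what the low-rank part misses; a small amount
    # of Gaussian noise stands in for the DP perturbation.
    residual = x - low_rank
    residual_noisy = residual + np.random.normal(0.0, noise_std, size=x.shape)
    return low_rank, residual_noisy

# Example: one intermediate feature map flattened to a matrix.
features = np.random.randn(64, 128)
trusted_part, untrusted_part = asymmetric_decompose(features, rank=8)
```

In this sketch the trusted part would be processed inside the TEE at low cost because of its small rank, while the noisy residual can be handed to a GPU.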

 