TY - GEN
T1 - DPIS: An Enhanced Mechanism for Differentially Private SGD with Importance Sampling
T2 - 29th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022
AU - Wei, Jianxin
AU - Bao, Ergute
AU - Xiao, Xiaokui
AU - Yang, Yin
N1 - Publisher Copyright:
© 2022 Owner/Author.
PY - 2022/11/7
Y1 - 2022/11/7
N2 - Differential privacy (DP) has become a well-accepted standard for privacy protection, and deep neural networks (DNNs) have been immensely successful in machine learning. The combination of these two techniques, i.e., deep learning with differential privacy, promises the privacy-preserving release of high-utility models trained on sensitive data such as medical records. The classic mechanism for this purpose is DP-SGD, a differentially private version of the stochastic gradient descent (SGD) optimizer commonly used for DNN training. Subsequent approaches have improved various aspects of the model training process, including the noise decay schedule, model architecture, feature engineering, and hyperparameter tuning. However, the core mechanism for enforcing DP in the SGD optimizer has remained unchanged since the original DP-SGD algorithm, and it has increasingly become a fundamental barrier limiting the performance of DP-compliant machine learning solutions. Motivated by this, we propose DPIS, a novel mechanism for differentially private SGD training that can be used as a drop-in replacement for the core optimizer of DP-SGD, with consistent and significant accuracy gains over the latter. The main idea is to employ importance sampling (IS) for mini-batch selection in each SGD iteration, which reduces both the sampling variance and the amount of random noise that must be injected into the gradients to satisfy DP. Although SGD with IS in the non-private setting is well studied in the machine learning literature, integrating IS into the complex mathematical machinery of DP-SGD is highly non-trivial; further, IS involves additional private data releases, which must themselves be protected under differential privacy, as well as computationally intensive gradient computations. DPIS addresses these challenges through novel mechanism designs, fine-grained privacy analysis, efficiency enhancements, and an adaptive gradient clipping optimization.
Extensive experiments on four benchmark datasets, namely MNIST, FMNIST, CIFAR-10, and IMDb, involving both convolutional and recurrent neural networks, demonstrate the superior effectiveness of DPIS over existing solutions for deep learning with differential privacy.
KW - deep learning
KW - differential privacy
KW - importance sampling
UR - http://www.scopus.com/inward/record.url?scp=85143086969&partnerID=8YFLogxK
U2 - 10.1145/3548606.3560562
DO - 10.1145/3548606.3560562
M3 - Conference contribution
AN - SCOPUS:85143086969
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 2885
EP - 2899
BT - CCS 2022 - Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery
Y2 - 7 November 2022 through 11 November 2022
ER -