DPIS: An Enhanced Mechanism for Differentially Private SGD with Importance Sampling

Jianxin Wei, Ergute Bao, Xiaokui Xiao, Yin Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

9 Citations (Scopus)

Abstract

Nowadays, differential privacy (DP) has become a well-accepted standard for privacy protection, and deep neural networks (DNN) have been immensely successful in machine learning. The combination of these two techniques, i.e., deep learning with differential privacy, promises the privacy-preserving release of high-utility models trained with sensitive data such as medical records. A classic mechanism for this purpose is DP-SGD, which is a differentially private version of the stochastic gradient descent (SGD) optimizer commonly used for DNN training. Subsequent approaches have improved various aspects of the model training process, including noise decay schedule, model architecture, feature engineering, and hyperparameter tuning. However, the core mechanism for enforcing DP in the SGD optimizer has remained unchanged since the original DP-SGD algorithm, which has increasingly become a fundamental barrier limiting the performance of DP-compliant machine learning solutions. Motivated by this, we propose DPIS, a novel mechanism for differentially private SGD training that can be used as a drop-in replacement for the core optimizer of DP-SGD, with consistent and significant accuracy gains over the latter. The main idea is to employ importance sampling (IS) in each SGD iteration for mini-batch selection, which reduces both the sampling variance and the amount of random noise injected into the gradients to satisfy DP. Although SGD with IS in the non-private setting has been well-studied in the machine learning literature, integrating IS into the complex mathematical machinery of DP-SGD is highly non-trivial; further, IS involves additional releases of private data that must be protected under differential privacy, as well as computationally intensive gradient computations. DPIS addresses these challenges through novel mechanism designs, fine-grained privacy analysis, efficiency enhancements, and an adaptive gradient clipping optimization. Extensive experiments on four benchmark datasets, namely MNIST, FMNIST, CIFAR-10 and IMDb, involving both convolutional and recurrent neural networks, demonstrate the superior effectiveness of DPIS over existing solutions for deep learning with differential privacy.
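
The core idea described in the abstract (biasing mini-batch selection toward examples with large gradients, then clipping per-example gradients and adding Gaussian noise) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's DPIS mechanism: the toy logistic-regression model, the helper name per_example_grads, the inverse-probability reweighting, and the noise calibration are simplifications of my own, and the paper's actual sampling scheme, privacy accounting, and adaptive clipping differ.

```python
# Minimal sketch: one training loop of DP-SGD-style updates with
# importance sampling (IS) for mini-batch selection.
# NOT the DPIS algorithm from the paper; the model, names, and the
# simplified noise calibration below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)
w = np.zeros(d)

clip_norm = 1.0      # per-example gradient clipping bound C
noise_mult = 1.0     # Gaussian noise multiplier (placeholder calibration)
batch_size = 64
lr = 0.1

def per_example_grads(w, X, y):
    """Per-example logistic-loss gradients, shape (n, d)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return (p - y)[:, None] * X

for step in range(100):
    grads = per_example_grads(w, X, y)

    # Importance sampling: select examples with probability proportional
    # to their (clipped) gradient norms rather than uniformly at random.
    norms = np.minimum(np.linalg.norm(grads, axis=1), clip_norm)
    probs = norms / norms.sum() if norms.sum() > 0 else np.full(n, 1.0 / n)
    idx = rng.choice(n, size=batch_size, replace=True, p=probs)

    # Clip each sampled gradient to norm at most C and reweight by the
    # inverse sampling probability to keep the gradient estimate unbiased.
    g = grads[idx]
    g_norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip_norm / np.maximum(g_norms, 1e-12))
    weights = 1.0 / (n * probs[idx])
    g_sum = (weights[:, None] * g).sum(axis=0)

    # Gaussian mechanism: noise scaled to the clipping bound. The real
    # DPIS analysis accounts for the interaction between the sampling
    # probabilities and the noise; this line is only a placeholder.
    noise = rng.normal(scale=noise_mult * clip_norm, size=d)
    w -= lr * (g_sum + noise) / batch_size
```

In this sketch, examples with larger gradients are sampled more often and then down-weighted, which is the standard variance-reduction effect of importance sampling in SGD; the paper's contribution lies in making such a scheme satisfy differential privacy with a tight privacy analysis, which the placeholder noise step above does not capture.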

Original language: English
Title of host publication: CCS 2022 - Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 2885-2899
Number of pages: 15
ISBN (Electronic): 9781450394505
DOIs
Publication status: Published - 7 Nov 2022
Event: 28th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022 - Los Angeles, United States
Duration: 7 Nov 2022 - 11 Nov 2022

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 28th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022
Country/Territory: United States
City: Los Angeles
Period: 7/11/22 - 11/11/22

Keywords

  • deep learning
  • differential privacy
  • importance sampling
