Gaussian Differential Privacy (GDP)
Jinshuo Dong (University of Pennsylvania)
Differential privacy has seen remarkable success in the past decade, but it also has some well-known weaknesses: notably, it does not tightly handle composition. This weakness has inspired several recent relaxations of differential privacy based on Rényi divergences. We propose an alternative relaxation of differential privacy, which we term "f-DP", that has a number of nice properties and avoids some of the difficulties associated with divergence-based relaxations. First, it preserves the hypothesis testing interpretation of differential privacy, which makes its guarantees easily interpretable. It also allows for lossless reasoning about composition and post-processing and, notably, gives a direct way to analyze privacy amplification by subsampling. We define a canonical single-parameter family of definitions within our class, termed "Gaussian Differential Privacy", based on the hypothesis testing region defined by two Gaussian distributions. We show that this family is focal by proving a central limit theorem: the privacy guarantees of any hypothesis-testing-based definition of privacy (including differential privacy itself) converge to Gaussian differential privacy in the limit under composition. This central limit theorem also yields a tractable tool for privacy analysis. We demonstrate the use of the tools we develop by giving an improved analysis of the privacy guarantees of noisy stochastic gradient descent. Based on joint work with Aaron Roth and Weijie Su.
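
For readers unfamiliar with the framework, a brief sketch of the underlying definitions may help; the notation below follows the f-DP paper and is not spelled out in the abstract itself. The trade-off function between two distributions records the smallest type II error achievable at each type I error level, and f-DP asks that distinguishing any two neighboring datasets be at least as hard as a reference trade-off function f:

    % Trade-off function: optimal type II error of testing P vs. Q at level alpha,
    % where phi ranges over rejection rules with type I error at most alpha.
    T(P, Q)(\alpha) = \inf_{\phi}\,\{\, 1 - \mathbb{E}_{Q}[\phi] \;:\; \mathbb{E}_{P}[\phi] \le \alpha \,\}

    % A mechanism M is f-DP if, for all neighboring datasets S and S',
    T\big(M(S), M(S')\big)(\alpha) \;\ge\; f(\alpha) \quad \text{for all } \alpha \in [0,1].

    % mu-GDP is f-DP with f the trade-off function of two unit-variance Gaussians:
    G_{\mu}(\alpha) = T\big(\mathcal{N}(0,1), \mathcal{N}(\mu,1)\big)(\alpha)
                    = \Phi\big(\Phi^{-1}(1-\alpha) - \mu\big)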
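
To illustrate why the Gaussian family is convenient in practice, here is a minimal Python sketch (my own illustration, not code from the talk; it assumes scipy is available). It evaluates the Gaussian trade-off curve, applies the exact composition rule for GDP (composing mu_i-GDP mechanisms yields sqrt(sum of mu_i^2)-GDP), and converts a GDP guarantee into the corresponding curve of (epsilon, delta)-DP guarantees via the duality stated in the paper.

    from math import exp, sqrt
    from scipy.stats import norm

    def gaussian_tradeoff(mu, alpha):
        # G_mu(alpha): optimal type II error at type I error alpha
        # when testing N(0, 1) against N(mu, 1).
        return norm.cdf(norm.ppf(1 - alpha) - mu)

    def compose_gdp(mus):
        # Exact composition: mu_1-GDP, ..., mu_k-GDP mechanisms compose
        # to sqrt(mu_1^2 + ... + mu_k^2)-GDP.
        return sqrt(sum(m * m for m in mus))

    def gdp_to_dp_delta(mu, eps):
        # A mu-GDP mechanism is (eps, delta(eps))-DP for every eps >= 0, with
        # delta(eps) = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2).
        return norm.cdf(-eps / mu + mu / 2) - exp(eps) * norm.cdf(-eps / mu - mu / 2)

    # Example: 100 rounds of a 0.1-GDP mechanism compose to exactly 1.0-GDP,
    # which in turn implies, e.g., (3, ~1.5e-3)-DP.
    mu_total = compose_gdp([0.1] * 100)        # 1.0
    print(mu_total, gdp_to_dp_delta(mu_total, 3.0))

The point of the example is that composition stays inside the single-parameter Gaussian family in closed form, which is exactly the structure the central limit theorem in the abstract says emerges asymptotically for any f-DP guarantee.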