Proving Differential Privacy via Relational Types
Danfeng Zhang (Pennsylvania State University)
Differential privacy provides a mathematical bound on the privacy loss to individuals when aggregate data about them is released. Unfortunately, the growing popularity of differential privacy has been accompanied by a growing number of incorrect algorithms that claim to satisfy it. Formal verification of differential privacy is therefore critical to its success.
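For reference, the standard definition: a randomized mechanism M satisfies \epsilon-differential privacy if, for every pair of neighboring datasets D and D' (differing in one individual's record) and every set S of possible outputs,

\[ \Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S]. \]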
In this talk, I will present LightDP, a simple imperative language for proving differential privacy for sophisticated algorithms. LightDP is built on a proof technique, called randomness alignment, which can be automated via a relational type system that requires little annotation in the source code. The type system is powerful enough to verify sophisticated algorithms, such as Sparse Vector, for which the standard composition theorem falls short. Moreover, the type system converts differential privacy into a safety property of a target program in which the privacy cost is explicit. This novel feature makes it possible to verify the target program with off-the-shelf verification tools. Finally, I will present a recent extension of LightDP that enables the verification of Report Noisy Max based on relational types. With this extension, we show that various sophisticated algorithms can be type-checked and verified in seconds.
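To make the idea concrete, below is a minimal Python sketch of Sparse Vector (Above Threshold, for sensitivity-1 queries) with the privacy cost tracked in an explicit variable, in the spirit of LightDP's target programs. The cost increments follow the standard randomness-alignment argument for this algorithm; the names (above_threshold, v_eps, laplace) and the overall structure are illustrative assumptions, not LightDP's actual syntax or tool output.

```python
import math
import random

def laplace(scale):
    """Draw one sample from the Laplace distribution via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def above_threshold(queries, threshold, epsilon):
    """Sparse Vector (Above Threshold) with an explicit privacy cost.

    v_eps plays the role of LightDP's distinguished privacy-cost variable:
    each noise sample pays the cost dictated by its randomness alignment,
    so differential privacy reduces to the safety assertion v_eps <= epsilon.
    """
    v_eps = 0.0

    # Noisy threshold: Lap(2/eps). The alignment shifts this sample by 1
    # between neighboring inputs, costing (eps/2) * 1 = eps/2.
    t_hat = threshold + laplace(2.0 / epsilon)
    v_eps += epsilon / 2.0

    answers = []
    for q in queries:
        nu = laplace(4.0 / epsilon)
        if q + nu >= t_hat:
            # "Above" branch: the alignment shifts nu by 2 (neighboring
            # query answers differ by at most 1), costing (eps/4) * 2 = eps/2.
            v_eps += epsilon / 2.0
            answers.append(True)
            break  # halt after the first above-threshold query
        else:
            # "Below" branch: alignment shift 0, so no cost is paid.
            answers.append(False)

    assert v_eps <= epsilon  # the safety property discharged by verification
    return answers

# Example: which of these query answers (noisily) exceed 10, at eps = 0.5?
print(above_threshold([3.0, 8.0, 12.0, 5.0], threshold=10.0, epsilon=0.5))
```

Note that the total cost is eps/2 + eps/2 = eps regardless of how many "below" branches are taken, which is exactly the kind of accounting that the composition theorem alone cannot deliver for this algorithm.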