Monday, January 28th, 2019
In this tutorial, we will start with some basic mechanisms in differential privacy and build up to training complex differentially private classifiers. We will also cover some results in unsupervised learning, and some recent results on hyperparameter selection. Finally, we will discuss the problem of learning in the local model of differential privacy.
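For readers who want a concrete picture of the "basic mechanisms" this tutorial starts from, here is a minimal Python sketch of the Laplace mechanism applied to a counting query; the function names and example data are ours, not the tutorial's.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity / epsilon.

    This satisfies epsilon-differential privacy for a query whose value
    changes by at most `sensitivity` when one record is added or removed.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release the number of records satisfying a predicate.
# A counting query has sensitivity 1.
data = np.array([23, 45, 31, 62, 18, 57])
true_count = int(np.sum(data > 30))
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

The noise scale grows as epsilon shrinks, which is the basic privacy/accuracy trade-off that more sophisticated mechanisms try to manage.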
Tuesday, January 29th, 2019
Across a range of subfields, computer scientists have risen to society’s call to safeguard privacy through technology, with impressive results. As the fields of privacy science and engineering mature, it’s worth taking a moment to ask how well the ideas of privacy that explicitly and implicitly motivate this work map onto the ideas of privacy that stirred these calls in the first place, that is, onto notions of privacy that are ethically and socially meaningful. In my lecture, I will argue that the theory of Contextual Integrity (CI) offers an account of privacy that is meaningful in these senses. Further, as an account that is accessible to formal representation, CI may serve to bridge scientific efforts, on the one hand, with outcomes that serve ethical and societal values, on the other. Time permitting, I will describe some past and ongoing applications of contextual integrity, as well as hopes for future work.
We aim to present a statistician’s and a computer scientist’s perspectives on statistical inference in the context of privacy. We will consider questions of (1) how to perform valid statistical inference using differentially private data or summary statistics, and (2) how to design optimal formal privacy mechanisms and inference procedures. We will discuss what we believe are key theoretical and practical issues and tools. Our examples will include point estimation and hypothesis testing problems and solutions, and synthetic data.
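As a concrete instance of question (1), the sketch below (our own illustration, not the speakers' method) releases a differentially private mean for data assumed to lie in a known bounded range and widens the usual confidence interval to account for the injected Laplace noise.

```python
import numpy as np

def dp_mean_with_ci(x, epsilon, lower=0.0, upper=1.0, z=1.96, rng=None):
    """epsilon-DP mean of data clipped to [lower, upper], plus an approximate
    confidence interval that widens to account for the privacy noise."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), lower, upper)
    n = len(x)
    # For bounded data, changing one record moves the mean by at most
    # (upper - lower) / n, so Laplace noise with scale / epsilon suffices.
    scale = (upper - lower) / (n * epsilon)
    dp_mean = float(np.mean(x) + rng.laplace(0.0, scale))
    # Total variance = sampling variance + Laplace noise variance (2 * scale^2).
    # Note: the non-private sample variance is used here only for illustration;
    # a fully private procedure would estimate it under the privacy budget too.
    sampling_var = np.var(x, ddof=1) / n
    half_width = z * np.sqrt(sampling_var + 2.0 * scale**2)
    return dp_mean, (dp_mean - half_width, dp_mean + half_width)

# Example usage on synthetic data in [0, 1].
est, ci = dp_mean_with_ci(np.random.default_rng(0).uniform(size=500), epsilon=1.0)
```

The point of the example is that valid inference has to model the noise the mechanism adds, not just the sampling variability.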
Wednesday, January 30th, 2019
This talk will provide a social survey statistician's perspective on (differential) privacy. It is structured in four parts. The first part covers typical social science research questions and the data types used to answer such questions. The second part provides a survey of typical data analysis techniques and typical work streams social scientists engage in when analyzing data. The third part explains in more detail a typical data collection effort and how design decisions will be affected if the field moves forward with differential privacy. I will close with some thoughts about unanticipated side effects and some general questions related to the implementation of differential privacy.
A counting query on a database returns the number of rows of the database that satisfy a given predicate. Despite being one of the simplest classes of aggregate database queries, they are a workhorse of statistical analysis. They capture many natural queries of practical interest, like marginal and range queries, and are a very useful primitive in building more complicated analyses. Counting queries can be further generalized to linear queries, which allow for weighted counting.
In this tutorial, we will cover algorithmic techniques for answering linear queries under the constraints of differential privacy. We will start with basic noise-adding mechanisms. Then we will explain ways to adapt the noise to a desired query workload, both in instance-independent ways, using tools from optimization and geometry, and in instance-dependent ways, using algorithms that learn the database as they answer the queries. The tutorial will cover theoretical tools used for answering linear queries and understanding their error properties, as well as empirical results arising in the context of practical use cases.
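As a toy version of the basic noise-adding approach described above, the following sketch answers a small workload of linear queries over a histogram with Laplace noise calibrated to the workload's L1 sensitivity; the workload and names are illustrative assumptions on our part, not material from the tutorial.

```python
import numpy as np

def answer_workload_laplace(hist, workload, epsilon, rng=None):
    """Answer a workload of linear queries W @ hist under epsilon-DP.

    hist:     length-d histogram of counts over the data domain.
    workload: q x d matrix of query weights (a counting query is a 0/1 row).
    Adding or removing one record changes the histogram by 1 in one coordinate,
    so the L1 sensitivity of the whole workload is the largest column L1 norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    W = np.asarray(workload, dtype=float)
    l1_sensitivity = np.abs(W).sum(axis=0).max()
    true_answers = W @ np.asarray(hist, dtype=float)
    noise = rng.laplace(0.0, l1_sensitivity / epsilon, size=true_answers.shape)
    return true_answers + noise

# Example: range queries over a domain of size 8.
hist = np.array([5, 3, 0, 7, 2, 9, 4, 1])
workload = np.array([[1, 1, 1, 1, 0, 0, 0, 0],   # count of first half
                     [0, 0, 0, 0, 1, 1, 1, 1],   # count of second half
                     [1, 1, 1, 1, 1, 1, 1, 1]])  # total count
noisy_answers = answer_workload_laplace(hist, workload, epsilon=1.0)
```

The workload-adaptive mechanisms covered later in the tutorial aim to beat this baseline by shaping the noise to the specific queries being asked.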
A brief tutorial and demo on using TF Privacy, TensorFlow's open-source library for differentially private stochastic gradient descent (DP-SGD). We will show how to easily write code that implements DP-SGD, talk about practical considerations in training, and show how to analyze the privacy parameters of the resulting model.
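To sketch what DP-SGD does at each step, here is a plain NumPy illustration for logistic regression (a sketch of the idea, not TF Privacy's actual API): per-example gradients are clipped to a norm bound, summed, and perturbed with Gaussian noise before the update.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr, l2_norm_clip, noise_multiplier, rng):
    """One DP-SGD step for logistic regression on a minibatch (X, y).

    Per-example gradients are clipped to l2_norm_clip and summed, Gaussian
    noise with std l2_norm_clip * noise_multiplier is added, and the noisy
    sum is averaged over the batch before the usual gradient update.
    """
    preds = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X  # one log-loss gradient per row
    norms = np.linalg.norm(per_example_grads, axis=1)
    clip_factors = np.minimum(1.0, l2_norm_clip / np.maximum(norms, 1e-12))
    clipped_sum = (per_example_grads * clip_factors[:, None]).sum(axis=0)
    noise = rng.normal(0.0, l2_norm_clip * noise_multiplier, size=w.shape)
    noisy_mean_grad = (clipped_sum + noise) / len(X)
    return w - lr * noisy_mean_grad

# Toy training loop on synthetic data (batching and accounting omitted).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
y = rng.integers(0, 2, size=64).astype(float)
w = np.zeros(10)
for _ in range(100):
    w = dp_sgd_step(w, X, y, lr=0.1, l2_norm_clip=1.0,
                    noise_multiplier=1.1, rng=rng)
```

The formal guarantee also depends on how minibatches are sampled and on accounting for the noise across all steps, which is what the library's analysis tools handle; this sketch omits that part.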
Thursday, January 31st, 2019
Formal methods provide tools that help program developers design their programs and guarantee that they are correct. Different applications place different requirements on the support such tools have to provide to developers and on the notions of correctness they have to guarantee. In this mini-course I will introduce participants to some of the requirements of differential privacy applications and how formal methods address them.
The mini-course will be divided into three parts. In the first part, I will give an overview of some of the systems that have been proposed to support program developers in designing differential privacy applications, and the problems they address. In the second part, I will motivate through examples a formal method that we used to verify differentially private applications. Finally, in the third and last part of the mini-course, I will present some recent extensions and alternatives to the formal method presented in the second part and discuss where the community working on these topics is heading.
American census data have myriad applications beyond the apportionment of representatives. The Census Bureau publishes 11 billion (yes, *B*) tables and determines the distribution of billions of dollars in federal funds to states and local communities. The data help to determine where to find skilled workers, where to open law firms, and where to place schools, day care centers, and assisted living facilities.
Please join us for a discussion of the role of US census data, the role our Simons semester can play in this bold deployment of DP, and the cost of failure – or even the perception of failure – of this endeavor.
In the first hour of this talk, we will discuss principles and challenges toward changing landscapes for data rights and usage. In the second and third hours of the talk, we will explore work in the privacy literature and in economics that asks and attempts to answer questions such as, "What is the value of information? What is the value of privacy?" Many open directions will be discussed.
Friday, February 1st, 2019
In the first hour of this talk, we will discuss principles and challenges toward changing landscapes for data rights and usage. In the second and third hours of the talk, we will explore work in the privacy literature and in economics that asks and attempts to answer questions such as, "What is the value of information? What is the value of privacy?" Many open directions will be discussed.