Monday, July 8th, 2019

9:25 am – 9:30 am

No abstract available.

9:30 am – 10:15 am
Speaker: James Zou (Stanford University)

As data becomes the fuel driving technological and economic growth, a fundamental challenge is how to fairly quantify the value of data in algorithmic predictions and decisions. For example, Gov. Newsom recently proposed a "data dividend" whereby consumers are compensated by companies for the data that they generate. In this work, we develop a principled framework to address data valuation in the context of supervised machine learning. Given a learning algorithm trained on n data points to produce a predictor, we study data Shapley as an equitable metric to quantify the value of each training datum to the predictor's performance. Data Shapley uniquely satisfies several natural properties of equitable data valuation. We develop Monte Carlo and gradient-based methods to efficiently estimate data Shapley values in practical settings where complex learning algorithms, including neural networks, are trained on large datasets. In addition to being equitable, our experiments across biomedical, image, and synthetic data demonstrate that data Shapley has several other benefits: 1) it gives actionable insights on what types of data benefit or harm the prediction model; 2) weighting training data by Shapley value improves domain adaptation. This is joint work with Amirata Ghorbani.
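
To make the estimation procedure concrete, here is a minimal sketch of truncated Monte Carlo estimation of data Shapley values, using a generic scikit-learn logistic regression as the learning algorithm and validation accuracy as the performance metric. It is an illustrative reimplementation under simplified assumptions, not the authors' released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tmc_data_shapley(X_tr, y_tr, X_val, y_val, n_perms=100, tol=1e-3):
    """Estimate data Shapley values by averaging marginal contributions
    over random permutations, truncating once performance saturates."""
    n = len(X_tr)
    values = np.zeros(n)
    # Performance of the model trained on all n points, used for truncation.
    full_score = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_val, y_val)
    for _ in range(n_perms):
        perm = np.random.permutation(n)
        prev_score = 0.5  # no-data baseline: random guessing on a balanced binary task
        for k in range(1, n + 1):
            if abs(full_score - prev_score) < tol:
                break  # truncate: remaining points contribute roughly nothing
            subset = perm[:k]
            if len(np.unique(y_tr[subset])) < 2:
                score = prev_score  # cannot fit a classifier on a single class
            else:
                score = LogisticRegression(max_iter=1000).fit(
                    X_tr[subset], y_tr[subset]).score(X_val, y_val)
            values[perm[k - 1]] += score - prev_score  # marginal contribution
            prev_score = score
    return values / n_perms  # average marginal contribution per point
```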

10:15 am – 11:00 am
Speaker: Kai-Wei Chang (UCLA)

Natural language processing techniques play important roles in our daily life. Although these methods are successful in various applications, they run the risk of exploiting and reinforcing the societal biases (e.g., gender bias) that are present in the underlying data. In this talk, I will describe a collection of results that quantify and control implicit societal biases in a wide spectrum of language processing tasks, including word embeddings, coreference resolution, and visual semantic role labeling. These results enable greater control over NLP systems, helping make them socially responsible and accountable.
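
As a concrete illustration of how implicit bias in word embeddings is often quantified, the sketch below projects word vectors onto a gender direction (the "he" minus "she" axis). The toy vectors are stand-ins for pretrained embeddings, and this is one common measurement technique rather than necessarily the exact method used in the talk.

```python
import numpy as np

def gender_bias_scores(embeddings, words, pair=("he", "she")):
    """Signed projection of each word vector onto the normalized gender direction;
    scores > 0 lean toward the first word of the pair, < 0 toward the second."""
    direction = embeddings[pair[0]] - embeddings[pair[1]]
    direction = direction / np.linalg.norm(direction)
    return {w: float((embeddings[w] / np.linalg.norm(embeddings[w])) @ direction)
            for w in words}

# Toy 3-d vectors standing in for pretrained embeddings such as GloVe or word2vec:
toy = {"he": np.array([1.0, 0.1, 0.0]), "she": np.array([-1.0, 0.1, 0.0]),
       "nurse": np.array([-0.6, 0.5, 0.2]), "engineer": np.array([0.7, 0.4, 0.1])}
print(gender_bias_scores(toy, ["nurse", "engineer"]))
```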

11:30 am – 12:00 pm
Speaker: Rich Zemel (University of Toronto)

We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes. Taking inspiration from the disentangled representation learning literature, we propose an algorithm for learning compact representations of datasets that are useful for reconstruction and prediction, but are also flexibly fair, meaning they can be easily modified at test time to achieve subgroup demographic parity with respect to multiple sensitive attributes and their conjunctions. We show empirically that the resulting encoder, which does not require the sensitive attributes for inference, enables the adaptation of a single representation to a variety of fair classification tasks with new target labels and subgroup definitions.
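
For concreteness, the following sketch audits subgroup demographic parity over conjunctions of multiple binary sensitive attributes; the attribute names and data are hypothetical, and this checks the parity criterion only, rather than implementing the proposed representation learner.

```python
import itertools
import numpy as np

def subgroup_positive_rates(y_pred, sensitive):
    """Positive-prediction rate for every conjunction of binary sensitive attributes.
    sensitive: dict mapping attribute name -> binary numpy array (one value per example)."""
    names = list(sensitive)
    rates = {}
    for combo in itertools.product([0, 1], repeat=len(names)):
        mask = np.ones(len(y_pred), dtype=bool)
        for name, val in zip(names, combo):
            mask &= (sensitive[name] == val)
        if mask.any():
            rates[tuple(zip(names, combo))] = float(y_pred[mask].mean())
    # Subgroup demographic parity holds when these rates are (nearly) equal.
    return rates

# Hypothetical predictions and attributes:
y_pred = np.array([1, 0, 1, 1, 0, 1])
sens = {"attr_a": np.array([0, 0, 1, 1, 0, 1]), "attr_b": np.array([0, 1, 0, 1, 1, 0])}
print(subgroup_positive_rates(y_pred, sens))
```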

1:30 pm – 2:15 pm
Speaker: Haiyi Zhu (University of Minnesota)

The increasing accuracy and falling costs of AI have stimulated the increased use of AI technologies in mainstream user-facing applications and services. However, there is a disconnect between mathematically rigorous AI approaches and the human stakeholders’ needs, motivations, and values, as well as organizational and institutional realities, contexts, and constraints; this disconnect is likely to undermine practical initiatives and may sometimes lead to negative societal impacts. In this presentation, I will discuss my research on incorporating human stakeholders’ tacit values and explicit feedback into the creation process of AI technologies. I will describe a specific project, designing and evaluating intelligent recruitment tools for WikiProjects on Wikipedia, to illustrate my approach. I hope this presentation will contribute to the rich ongoing conversation concerning bridging HCI and AI and using HCI methods to address AI challenges.

2:15 pm – 3:00 pm
Speaker: Min Kyung Lee (Carnegie Mellon University) 

No abstract available.

3:45 pm – 4:15 pm
Speaker: Alan Mislove (Northeastern University)

The enormous financial success of online advertising platforms is partially due to the precise targeting features they offer. Although researchers and journalists have found many ways in which advertisers can target their ads toward---or away from---particular groups of users, comparatively little attention has been paid to the implications of the platform's ad delivery process, which comprises the platform's choices about who should see an ad. It has been hypothesized that this process can "skew" ad delivery in ways that the advertisers do not intend, making some users less likely than others to see particular ads based on their demographic characteristics. In this paper, we demonstrate that such skewed delivery occurs on Facebook, due to market and financial optimization effects as well as the platform's own predictions about the "relevance" of ads to different groups of users. We find that the advertiser's budget and the content of the ad each significantly contribute to the skew of Facebook's ad delivery. Critically, we observe significant skew in delivery along gender and racial lines for "real" ads for employment and housing opportunities despite neutral targeting parameters. Our results demonstrate previously unknown mechanisms that can lead to potentially discriminatory ad delivery, even when advertisers set their targeting parameters to be highly inclusive. This underscores the need for policymakers and platforms to carefully consider the role of the ad delivery optimization run by ad platforms themselves---and not just the targeting choices of advertisers---in preventing discrimination in digital advertising.

4:15 pm – 5:00 pm
Speaker: Elisa Celis (Yale University)

Online advertising platforms thrive due to the customizable audiences they offer advertisers. However, recent studies show that ads can be shown in a discriminatory manner with respect to socially salient characteristics such as gender and race, crossing ethical and legal boundaries. This can occur either directly or inadvertently via spillover effects. Towards mitigating this, we propose a constrained optimization framework for revenue-optimal single-item ad auctions that allows one to control the distribution of each ad's audience across relevant characteristics. Building on Myerson's classic work, we first characterize what such optimal auction mechanisms look like for a large class of fairness constraints. Finding the parameters of this optimal auction, however, turns out to be a non-convex problem. We show how this non-convex problem can be reformulated as a more structured problem with no saddle points or local maxima, allowing us to develop a gradient-descent-based algorithm to solve it. Our empirical results on the A1 Yahoo! dataset demonstrate that our algorithm can obtain uniform coverage across different user attributes for each advertiser at a minor loss to the revenue of the platform and a small change in the total number of advertisements each advertiser shows on the platform.

Tuesday, July 9th, 2019

9:00 am – 9:45 am
Speaker: Guy Rothblum (Weizmann Institute)

No abstract available.

9:45 am – 10:30 am
Speaker: Michael Kim (Stanford University)

Many selection procedures involve ordering candidates according to their qualifications. For example, a university might order applicants according to a perceived probability of graduation within four years, and then select the top 1000 applicants. In this work, we address the problem of ranking members of a population according to their "probability" of success, based on a training set of *binary* outcome data (e.g., graduated in four years or not). We show how to obtain rankings that satisfy a number of desirable accuracy and fairness criteria, despite the coarseness of binary outcome data. As the task of ranking is global -- that is, the rank of every individual depends not only on their own qualifications, but also on every other individual's qualifications -- ranking is more subtle (and vulnerable to manipulation) than standard prediction tasks.

We develop two hierarchies of definitions of "evidence-based" rankings.  The first hierarchy relies on a semantic notion of *domination-compatibility*:  if the training data suggest that members of a set A are on-average more qualified than the members of B, then a ranking that favors B over A (i.e., where B *dominates* A) is blatantly inconsistent with the data, and likely to be discriminatory.  The definition asks for domination-compatibility, not just for a pair of sets, but rather for every pair of sets from a rich collection C of subpopulations.  The second hierarchy aims at precluding even more general forms of discrimination; this notion of *evidence-consistency* requires that the ranking must be justified on the basis of consistency with the expectations for every set in the collection C.  Somewhat surprisingly, while evidence-consistency is a strictly stronger notion than domination-compatibility when the collection C is predefined, the two notions are equivalent when the collection C may depend on the ranking in question and are closely related to the notion of *multi-calibration* for predictors.
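
The sketch below illustrates a simplified reading of the domination-compatibility condition: for every pair of sets A, B in the collection C, if the training outcomes suggest A is on average more qualified than B, the ranking should not favor B over A on average. The averaging and the margin parameter are illustrative assumptions, not the paper's formal definition.

```python
import numpy as np

def violates_domination(rank, y, A, B, margin=0.0):
    """rank: array of ranks (lower = ranked higher); y: binary training outcomes;
    A, B: index arrays for two subpopulations in the collection C."""
    a_better_on_data = y[A].mean() > y[B].mean() + margin           # data says A > B
    b_favored_by_rank = rank[B].mean() < rank[A].mean() - margin    # ranking says B > A
    return a_better_on_data and b_favored_by_rank

def domination_compatible(rank, y, collection, margin=0.0):
    """True if no pair of sets in the collection exhibits a blatant reversal."""
    return not any(violates_domination(rank, y, A, B, margin)
                   for A in collection for B in collection if A is not B)

# Hypothetical example: ranks 0..5 (0 is best), binary outcomes, two sets in C.
rank = np.array([0, 1, 2, 3, 4, 5])
y = np.array([1, 1, 0, 1, 0, 0])
collection = [np.array([0, 1, 2]), np.array([3, 4, 5])]
print(domination_compatible(rank, y, collection))
```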

11:00 am – 11:45 am
Speaker: Sharad Goel (Stanford University)

The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes -- like race, gender, and their proxies -- are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. In this talk, I'll show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, I'll argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area.
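
For reference, the following sketch computes the quantities behind two of these definitions for a binary classifier: group-wise false positive and false negative rates (classification parity) and per-bin outcome rates given risk estimates (calibration). The data, risk scores, and group labels are hypothetical.

```python
import numpy as np

def group_error_rates(y_true, y_pred, group):
    """False positive and false negative rates per group; classification parity
    asks that these be equal across groups."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        neg, pos = (y_true[m] == 0).sum(), (y_true[m] == 1).sum()
        fpr = ((y_pred[m] == 1) & (y_true[m] == 0)).sum() / max(neg, 1)
        fnr = ((y_pred[m] == 0) & (y_true[m] == 1)).sum() / max(pos, 1)
        rates[g] = {"FPR": float(fpr), "FNR": float(fnr)}
    return rates

def calibration_by_group(y_true, risk, group, bins=10):
    """Mean observed outcome per risk bin, per group; calibration asks that
    these per-bin rates agree across groups (and track the bin's risk level)."""
    edges = np.linspace(0, 1, bins + 1)
    out = {}
    for g in np.unique(group):
        m = group == g
        idx = np.clip(np.digitize(risk[m], edges) - 1, 0, bins - 1)
        out[g] = [float(y_true[m][idx == b].mean()) if (idx == b).any() else None
                  for b in range(bins)]
    return out
```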

11:45 am – 12:30 pm
Speaker: Charlie Marx (Haverford College)

Motivated by the need to audit complex and black-box models, there has been extensive research on quantifying how data features influence model predictions. Feature influence can be direct (a direct influence on model outcomes) or indirect (model outcomes are influenced via proxy features). Feature influence can also be expressed in aggregate over the training or test data, or locally with respect to a single point. Current research has typically focused on one setting along each of these dimensions. In this paper, we develop disentangled influence audits, a procedure to audit the indirect influence of features. Specifically, we show that disentangled representations provide a mechanism to identify proxy features in the dataset, while allowing an explicit computation of feature influence on either individual outcomes or aggregate-level outcomes. We show through both theory and experiments that disentangled influence audits can both detect proxy features and show, for each individual or in aggregate, which of these proxy features affects the classifier being audited the most. In this respect, our method is more powerful than existing methods for ascertaining feature influence.

2:00 pm – 2:45 pm
Speaker: Frances Ding (Harvard University) 

As machine learning enters wider use in real-world applications, there is increasing worry about the validity of predictors trained at one institution, e.g., a hospital or a state prison system, and applied to many others. Empirically, scientists have found that predictors’ accuracies plummet in new domains. This raises fairness concerns for communities that don’t have the resources or the data available to create their own predictors from scratch. In this talk I will discuss this phenomenon and a model for one source of inter-institution variability, which separates invariant features from institution-specific features. Our approach uses an adversarial neural network to censor institution-specific features from the data, ensuring that prediction is based on invariant features. In contrast to much work in the domain adaptation literature, which assumes access to data from both the training and target domains, we do not require any data from the target domain. When there are invariant features discoverable in the data, our method can improve accuracy on completely unseen populations.

2:45 pm – 3:30 pm
Speaker: John Miller (UC Berkeley)

Consequential decision-making typically incentivizes individuals to behave strategically, tailoring their behavior to the specifics of the decision rule. A long line of work has therefore sought to counteract strategic behavior by designing more conservative decision boundaries in an effort to increase robustness to the effects of strategic covariate shift. We show that these efforts benefit the institutional decision maker at the expense of the individuals being classified. Introducing a notion of social burden, we prove that any increase in institutional utility necessarily leads to a corresponding increase in social burden. Moreover, we show that the negative externalities of strategic classification can disproportionately harm disadvantaged groups in the population. Our results highlight that strategy-robustness must be weighed against considerations of social welfare and fairness. This is joint work with Smitha Milli, Anca Dragan, and Moritz Hardt.
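
A toy simulation can make the trade-off concrete: as the institution raises its decision threshold to guard against gaming, truly qualified individuals must spend more to be accepted, so gains in institutional utility come with higher social burden. The cost model and parameters below are hypothetical, not the paper's formal model.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=10_000)                     # true underlying qualification
labels = (scores + rng.normal(scale=0.5, size=scores.size) > 0).astype(int)
gaming_budget = 0.75                                 # how far anyone can raise their score

for threshold in [0.0, 0.5, 1.0]:
    # Anyone within `gaming_budget` below the threshold games their score up to it.
    effective = np.where((scores < threshold) & (scores >= threshold - gaming_budget),
                         threshold, scores)
    accepted = effective >= threshold
    utility = labels[accepted].mean() if accepted.any() else 0.0   # precision proxy
    # Social burden: average gaming cost paid by truly qualified individuals.
    burden = np.where((labels == 1) & accepted & (scores < threshold),
                      threshold - scores, 0.0).mean()
    print(f"threshold={threshold:.1f}  utility={utility:.3f}  burden={burden:.3f}")
```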

4:00 pm – 4:45 pm

No abstract available.

Wednesday, July 10th, 2019

9:00 am – 9:45 am
Speaker: Bo Cowgill (Columbia Business School)

9:45 am – 10:30 am
Speaker: Joshua Blumenstock (UC Berkeley)

Many decisions that once were made by humans are now made using algorithms. These algorithms are typically optimized for a single, private objective: Loan decisions are optimized for profit, smart phone apps are optimized for engagement, and news feeds are optimized for clicks. However, these decisions have side effects: irresponsible payday loans, addictive apps, and fake news may maximize private objectives, but have unclear impacts on social welfare. This project develops an approach to welfare-sensitive machine learning, which provides a formal framework for weighing the social welfare impact of an algorithm alongside traditional, private optimization criteria. We describe and evaluate different strategies for measuring and predicting these welfare impacts. In partnership with a large financial institution in Kenya, we stress-test this approach with real lending data, and characterize the empirical tradeoff between private (profit) and public (welfare) objectives in a high-stakes environment.
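
As a sketch of the kind of decision rule such a framework suggests, the function below combines a private profit estimate with an estimated welfare impact, traded off by a weight lambda. The profit and welfare models are placeholders, not the project's actual estimates.

```python
def welfare_sensitive_score(p_repay, loan_amount, interest, welfare_impact, lam=1.0):
    """Decision score for a single loan.

    p_repay        -- model's estimated repayment probability
    welfare_impact -- estimated change in the borrower's welfare if the loan is made
    lam            -- weight on social welfare relative to private profit
    """
    expected_profit = p_repay * loan_amount * interest - (1 - p_repay) * loan_amount
    return expected_profit + lam * welfare_impact

# Approve whenever the combined score is positive; a profitable but welfare-harming
# loan can be rejected if lam is large enough.
print(welfare_sensitive_score(p_repay=0.9, loan_amount=100, interest=0.15,
                              welfare_impact=-5.0, lam=2.0) > 0)
```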

11:00 am – 11:45 am
Speaker: Aaron Roth (University of Pennsylvania) 

We revisit the notion of individual fairness first proposed by Dwork et al. [2012], which asks that "similar individuals should be treated similarly". A primary difficulty with this definition is that it assumes a completely specified fairness metric for the task at hand. In contrast, we consider a framework for fairness elicitation, in which fairness is indirectly specified only via a sample of pairs of individuals who should be treated (approximately) equally on the task. We make no assumption that these pairs are consistent with any metric. We provide a provably convergent oracle-efficient algorithm for minimizing error subject to the fairness constraints, and prove generalization theorems for both accuracy and fairness. Since the constrained pairs could be elicited either from a panel of judges, or from particular individuals, our framework provides a means for algorithmically enforcing subjective notions of fairness. Because we perform this as a one-shot task, rather than first trying to learn a metric consistent with the elicited judgements, we avoid having to make any assumptions on the form of the elicited judgements to give generalization bounds. We report on preliminary findings of a behavioral study of subjective fairness using human-subject fairness constraints elicited on the COMPAS criminal recidivism dataset. Joint work with Christopher Jung, Michael Kearns, Seth Neel, Logan Stapleton, and Zhiwei Steven Wu.
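
One simple way to operationalize elicited pairwise constraints is to penalize differences in predicted scores for the elicited pairs while minimizing classification error. The soft-penalty gradient-descent sketch below is an illustrative stand-in, not the oracle-efficient algorithm from the paper.

```python
import numpy as np

def train_with_pair_constraints(X, y, pairs, lam=5.0, lr=0.1, epochs=500):
    """Logistic regression whose scores are penalized for differing on elicited pairs.
    pairs: list of (i, j) index pairs that judges said should be treated similarly."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        scores = X @ w
        probs = 1.0 / (1.0 + np.exp(-scores))
        grad = X.T @ (probs - y) / len(y)           # logistic-loss gradient
        for i, j in pairs:                          # fairness-penalty gradient
            grad += lam * np.sign(scores[i] - scores[j]) * (X[i] - X[j]) / max(len(pairs), 1)
        w -= lr * grad
    return w
```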

11:45 am – 12:30 pm
Speaker: Swati Gupta (Georgia Tech)

Since many critical decisions impacting human lives are increasingly being made by algorithms, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant for online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF, where the treatment of individuals is required to be individually fair relative to the past as well as the future, while in FH, we require a one-sided notion of individual fairness that is defined relative to only the past decisions. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model. Linear regret relative to optimal individually fair decisions is inevitable under FT for non-trivial examples. On the other hand, we design a new algorithm, Cautious Fair Exploration (CaFE), which satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to make good decisions in the long run.

2:00 pm – 2:45 pm
Speaker: Alex Frankel (Chicago Booth)

We first study a model of signaling in which agents are heterogeneous on two dimensions. An agent’s natural action is the action taken in the absence of signaling concerns; in the context of school testing, think of a student's underlying grasp of the material. Her gaming ability parameterizes the cost of increasing the action; think of skill at test prep. Equilibrium behavior muddles information across dimensions. As incentives to take higher actions increase—due to higher stakes or more manipulable signaling technology—more information is revealed about gaming ability, and less about natural actions. We then discuss how information about natural actions can be improved, either through "leveling the playing field" or by committing to "flatten" the allocation rule as a function of the action.

2:45 pm – 3:30 pm
Speaker: Yahav Bechavod (Hebrew University)

We study an online classification problem with partial feedback in which individuals arrive one at a time from a fixed but unknown distribution, and must be classified as positive or negative. Our algorithm only observes the true label of an individual if they are given a positive classification. This setting captures many classification problems for which fairness is a concern: for example, in criminal recidivism prediction, recidivism is only observed if the inmate is released; in lending applications, loan repayment is only observed if the loan is granted. We require that our algorithms satisfy common statistical fairness constraints (such as equalizing false positive or negative rates --- introduced as "equal opportunity" in Hardt et al. (2016)) at every round, with respect to the underlying distribution. We give upper and lower bounds characterizing the cost of this constraint in terms of the regret rate (and show that it is mild), and give an oracle efficient algorithm that achieves the upper bound.

Joint work with Katrina Ligett, Aaron Roth, Bo Waggoner and Steven Wu.
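
The one-sided feedback structure is easy to simulate: true labels are revealed only on rounds where the learner classifies positively. The toy threshold rule below is only meant to illustrate that structure, not the fairness-constrained, oracle-efficient algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rounds, threshold = 5000, 0.5
observed = []  # labels the learner actually gets to see

for t in range(n_rounds):
    x = rng.uniform()                  # a one-dimensional "feature"
    y = int(rng.uniform() < x)         # true label, more likely positive when x is high
    if x >= threshold:                 # positive classification (e.g. loan granted)
        observed.append(y)             # label is revealed only in this case
    # On a negative classification the true label is never observed, which is what
    # makes auditing false negative rates (and learning) hard in this setting.

print(f"feedback received on {len(observed) / n_rounds:.0%} of rounds; "
      f"observed positive rate {np.mean(observed):.2f}")
```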

4:00 pm – 4:45 pm
Speaker: Konstantin Kakaes (Simons Institute)

No abstract available.