Monday, October 12th, 2015

10:00 am – 10:45 am

Many games of social significance are played in a networked context.  In these settings, agents often exhibit simple behaviors, shaped by local preferences and social norms.  The interplay of these behaviors and the underlying network gives rise to emergent structures with global impact.  In this talk, we explore the impact of networked behavior on social capital, segregation, and learning.

First we study the emergence of social capital in dynamic, anonymous social networks, such as online communities.  We find that, despite the lack of punitive strategies, (partial) cooperation is sustainable at an intuitive and simple equilibrium, as cooperation allows an individual to interact with an increasing number of other cooperators, resulting in the formation of valuable social capital.

Next we examine the emergence of segregation in geographical networks.  In 1969, Schelling introduced a model of racial segregation in which individuals move out of neighborhoods where their ethnicity constitutes a minority and suggested that this local behavior can cause global segregation effects. Our rigorous analysis shows that, in contrast to prior interpretations, the outcome exhibits local but not global segregation.
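For intuition, here is a minimal simulation sketch of a one-dimensional (ring) Schelling-style model; the window size, swap rule, and run-length statistic are illustrative assumptions, not the exact process analyzed in the talk.

```python
import random

# Two types of agents live on a ring; an agent is unhappy if fewer than
# half of its 2*w nearest neighbors share its type, and pairs of unhappy
# agents of opposite types swap locations.

random.seed(0)
N, w, steps = 200, 4, 100_000
types = [random.randint(0, 1) for _ in range(N)]

def unhappy(i):
    window = [types[(i + d) % N] for d in range(-w, w + 1) if d != 0]
    return sum(t == types[i] for t in window) < len(window) / 2

for _ in range(steps):
    i, j = random.randrange(N), random.randrange(N)
    if types[i] != types[j] and unhappy(i) and unhappy(j):
        types[i], types[j] = types[j], types[i]

# Mean length of monochromatic runs: large relative to w suggests local
# segregation, while runs far below N indicate no global segregation.
runs, cur = [], 1
for a, b in zip(types, types[1:]):
    if a == b:
        cur += 1
    else:
        runs.append(cur)
        cur = 1
runs.append(cur)
print("mean run length:", round(sum(runs) / len(runs), 1))
```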

Finally, we study learning outcomes in social networks. Individuals with independent opinions asynchronously update their declared opinion to match the majority report of their neighbors.  We show that the population will converge to the majority opinion with high probability if the underlying network is large, sparse, and expansive, properties reflected by realistic social networks.
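The majority-dynamics statement is easy to simulate. In the sketch below, the graph density, initial bias, and update budget are assumed parameters: opinions start as iid coin flips with a slight bias, and random nodes repeatedly adopt their neighborhood majority in a sparse random graph.

```python
import random

random.seed(1)
n, deg, bias = 2_000, 5, 0.52
nbrs = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < deg / n:   # sparse Erdos-Renyi graph
            nbrs[i].append(j)
            nbrs[j].append(i)

opinion = [1 if random.random() < bias else -1 for _ in range(n)]
majority = 1 if sum(opinion) > 0 else -1

for _ in range(50 * n):                 # asynchronous updates
    i = random.randrange(n)
    s = sum(opinion[j] for j in nbrs[i])
    if s != 0:                          # ties (or no neighbors) leave i unchanged
        opinion[i] = 1 if s > 0 else -1

share = sum(o == majority for o in opinion) / n
print(f"fraction holding the initial majority opinion: {share:.2f}")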

Based on joint works with Christina Brandt, Michal Feldman, Gautam Kamath, Robert Kleinberg, Brendan Lucier, Brian Rogers and Matt Weinberg.

11:15 am – 11:45 am

A fundamental assumption underlying much of mechanism design is that buyers know their values for the products they are interested in. We consider settings where agents receive signals related to their values from a joint distribution, and their estimated values are functions of their own as well as others' signals. We consider revenue maximization in such settings and show that a variant of the VCG mechanism with admission control gives a constant approximation to the optimal expected revenue. Our results do not require any assumptions on the signal distributions; however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.

11:45 am – 12:15 pm

We introduce a model of dynamic matching in networked markets, where agents arrive and depart stochastically, and the composition of the trade network depends endogenously on the matching algorithm. We show that if the planner can identify agents who are about to depart, then waiting to thicken the market is highly valuable, and if the planner cannot identify such agents, then matching agents greedily is close to optimal. We characterize the optimal timing policy as a function of waiting costs and network sparsity. The planner’s decision problem in our model involves a combinatorially complex state space. However, we show that simple local algorithms that choose the right time to match agents, but do not exploit the global network structure, can perform close to complex optimal algorithms. Finally, we consider a setting where agents have private information about their departure times, and design a continuous-time dynamic mechanism to elicit this information.
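A toy simulation conveys the greedy-versus-patient trade-off. All parameters below (arrival rate, compatibility probability, lifetime) are assumptions for illustration, and "patient" matching stands in for a planner who can identify agents about to depart.

```python
import random

# One agent arrives per period, each pair of present agents is compatible
# independently with probability p, and an agent perishes if still
# unmatched after `life` periods. "greedy" matches on arrival; "patient"
# waits until an agent's final period before matching it.

def simulate(policy, T=20_000, p=0.05, life=20):
    random.seed(0)
    pool, lost = {}, 0                  # agent id -> set of compatible ids
    def match(i):
        j = random.choice(sorted(pool[i]))
        for k in (i, j):                # remove both agents from the pool
            for m in pool.pop(k):
                pool.get(m, set()).discard(k)
    for t in range(T):
        nbrs = {j for j in pool if random.random() < p}
        pool[t] = set(nbrs)
        for j in nbrs:
            pool[j].add(t)
        if policy == "greedy" and pool[t]:
            match(t)
        elif policy == "patient":
            for i in [i for i in pool if t - i >= life - 1]:
                if i in pool and pool[i]:
                    match(i)
        for i in [i for i in pool if t - i >= life]:   # unmatched departures
            for m in pool.pop(i):
                pool.get(m, set()).discard(i)
            lost += 1
    return lost / T

for policy in ("greedy", "patient"):
    print(policy, "loss rate ~", round(simulate(policy), 3))
```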

2:00 pm – 2:45 pm

We discuss a novel approach for reducing a k-item n-bidder auction with additive valuations to k-item 1-bidder auctions, leading to applications such as constant-factor approximation algorithms and Bayesian versus dominant-strategy ratio bounds.  We also discuss auctions with non-additive valuations.

3:15 pm – 3:45 pm

We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and her value for a set of items is additive. The seller aims to maximize her revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization and menus of infinite size. Hart and Nisan have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing yields a 6-approximation to the optimal revenue. We further extend this result to settings where the buyer has a valuation function that is just subadditive, but "independent across items," showing that the better of item and bundle pricing still yields a constant-factor approximation.

Based on joint works with Moshe Babaioff, Nicole Immorlica and Brendan Lucier, and Aviad Rubinstein.
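The better-of-two scheme is straightforward to evaluate by Monte Carlo. The sketch below uses illustrative independent exponential item values and grid-searched posted prices; the 6 in the theorem is a worst-case guarantee, and on easy instances like this the gap is far smaller.

```python
import random

# Compare selling each item separately at its best fixed price against
# selling everything as one grand bundle (distribution, price grids, and
# sample size are all assumptions for illustration).

random.seed(0)
n_items, samples = 5, 10_000
vals = [[random.expovariate(1.0) for _ in range(n_items)] for _ in range(samples)]

def best_posted_price(xs, grid):
    # revenue of posting price p for quantity x: p * Pr(x >= p), from samples
    return max(p * sum(x >= p for x in xs) / len(xs) for p in grid)

grid = [i / 10 for i in range(1, 101)]
item_rev = sum(best_posted_price([v[i] for v in vals], grid) for i in range(n_items))
bundle_rev = best_posted_price([sum(v) for v in vals], [i / 10 for i in range(1, 201)])
print(f"selling separately ~ {item_rev:.2f}, grand bundle ~ {bundle_rev:.2f}")
```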

3:45 pm – 4:15 pm

In his seminal paper, Myerson [1981] provides a revenue-optimal auction for a seller who is looking to sell a single item to multiple bidders. Extending this auction to simultaneously selling multiple heterogeneous items has been one of the central problems in Mathematical Economics. We provide such an extension that is also computationally efficient. Our solution proposes a novel framework for mechanism design by reducing mechanism design problems (where one optimizes an objective function on "rational inputs") to algorithm design problems (where one optimizes an objective function on "honest inputs"). Our reduction is generic and provides a framework for many other mechanism design problems, e.g. makespan minimization with strategic machines and fair resource allocation.
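For intuition, here is a sketch of Myerson's auction in the textbook case of iid U[0,1] values, where the virtual value is phi(v) = 2v - 1 and the auction reduces to a second-price auction with reserve phi^{-1}(0) = 1/2. The distribution and trial count are illustrative choices.

```python
import random

# Allocate to the highest bidder with nonnegative virtual value and
# charge the threshold payment (the lowest value at which the winner
# would still win).

def myerson_uniform(values, reserve=0.5):
    eligible = sorted((v, i) for i, v in enumerate(values) if v >= reserve)
    if not eligible:
        return None, 0.0
    winner = eligible[-1][1]
    price = max(reserve, eligible[-2][0]) if len(eligible) > 1 else reserve
    return winner, price

random.seed(4)
trials = 100_000
revenue = 0.0
for _ in range(trials):
    _, price = myerson_uniform([random.random() for _ in range(3)])
    revenue += price
print("average revenue ~", round(revenue / trials, 4))   # theory: 17/32 = 0.53125
```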

Tuesday, October 13th, 2015

10:00 am – 10:45 am

When goods are substitutes, auctions in which price clocks move monotonically can achieve efficient outcomes with computations that are (typically) pseudo-polynomial. However, in real economic problems such as the FCC incentive auction, goods seem intuitively to be approximate substitutes, yet many steps in the clock auction computations are NP-complete. We introduce a metric on problems to describe nearness to substitutes and show that a properly devised clock auction can achieve near-efficiency when the distance is small.

11:15 am – 11:45 am

We study deferred-acceptance (DA) clock auctions for single-minded bidders, defined as auctions that make price offers to bidders that grow less attractive over time and let bidders quit at any time, accepting all bidders that are still active when the auction stops. Such auctions are characterized as essentially the only dominant-strategy mechanisms that preserve winners’ unconditional privacy (i.e., reveal minimal information about winners’ values needed to prove that they should be winning). These auctions are also obviously strategy-proof and (weakly) group strategy-proof. If restricted to implement value-maximizing allocations, and/or to use myopic bid rejection rules, DA auctions could only implement allocation rules in which bidders are substitutes. However, DA auctions with heuristic bid reduction rules can also be useful for some environments with complementarities, particularly computationally challenging ones such as the US auction to repurchase television broadcast rights. DA auctions can balance a number of goals, including efficiency, revenue, and a budget constraint.
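A minimal sketch of a descending-clock DA auction, for an assumed toy feasibility constraint (the buyer procures from at most k sellers) and one simple myopic price-reduction rule:

```python
# Offers start high and tick down; a seller quits once the offer drops
# below its private cost, and the clock stops as soon as the active set
# is feasible. Quitting at one's cost is a dominant strategy, and the
# winners' exact costs are never revealed, matching the privacy property
# described above.

def da_clock(costs, k, start=10.0, tick=0.01):
    offer = {i: start for i in range(len(costs))}
    active = set(offer)
    while len(active) > k:
        i = max(active, key=lambda j: offer[j])   # lower the highest offer
        offer[i] -= tick
        if offer[i] < costs[i]:
            active.remove(i)                      # seller i quits
    return {i: round(offer[i], 2) for i in active}

# four sellers with private costs; the buyer wants at most two of them
print(da_clock(costs=[1.0, 3.0, 2.5, 7.0], k=2))
# the two lowest-cost sellers win, each paid roughly the third-lowest cost
```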

11:45 am – 12:15 pm

In mechanism design, strategy-proofness is often said to be desirable because it makes it easy for agents to decide what to do. However, some strategy-proof mechanisms are easier to understand than others.  To address this problem, I propose a new solution concept: Obviously Strategy-Proof (OSP).  Using a formal cognitive model, I show that a mechanism is OSP if each agent's dominant strategy can be verified without contingent reasoning. I show that a choice rule is OSP-implementable if it can be carried out by a social planner under a particular regime of partial commitment. Finally, I characterize the set of OSP mechanisms in a canonical setting that encompasses private-value auctions with unit demand and binary public good problems.

2:00 pm – 2:45 pm

The revelation principle suggests that a theoretician searching for an optimal mechanism need only consider ones with truth-telling as an equilibrium.  This optimal mechanism may not be practical, and corresponding non-revelation mechanisms are generally complex and highly dependent on details.  In this talk, I will give a theory for the design of simple non-revelation mechanisms (e.g., with "first-price" payments).  These mechanisms (a) have unique and easy-to-find Bayes-Nash equilibria, (b) are amenable to inference, (c) are easy to optimize over, and (d) approximate the optimal mechanism in general single-dimensional environments.  These results come, respectively, from Chawla and Hartline (2013); Chawla, Hartline, and Nekipelov (2014, 2015); and Hartline and Taggart (2015), which will be surveyed.

2:45 pm – 3:15 pm

The traditional econometrics approach for inferring properties of strategic interactions that are not fully observable in the data relies heavily on the assumption that the observed strategic behavior has settled at an equilibrium. This assumption is not robust in complex economic environments such as online markets, where players are typically unaware of all the parameters of the game in which they are participating and instead only learn their utility after taking an action. Behavioral models from online learning theory have recently emerged as an attractive alternative to the equilibrium assumption and have been extensively analyzed from a theoretical standpoint in the algorithmic game theory literature over the past decade. In this talk I will present recent work in which we take a learning-agent approach to econometrics, i.e. infer properties of the game, such as private valuations or efficiency of allocation, assuming only that the observed repeated behavior is the outcome of a no-regret learning algorithm rather than a static equilibrium. I will also present some empirical results from applying our methods to datasets from Microsoft’s sponsored search auction system.
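To make the learning-agent approach concrete, the sketch below (with hypothetical logs, not the Microsoft data) recovers the set of values that rationalize a bid sequence in a repeated first-price auction under a no-regret assumption.

```python
import random

# We observe the bidder's bids b_t and the highest competing bids o_t,
# and recover the set of values v under which the observed play has at
# most eps average regret against every fixed bid, i.e. is consistent
# with no-regret learning. No equilibrium assumption is used.

def rationalizable_values(bids, others, eps=0.05):
    T = len(bids)
    def util_fixed(b, v):      # average utility of bidding b in every round
        return sum(v - b for o in others if b > o) / T
    def util_played(v):        # average utility of the observed bid sequence
        return sum(v - b for b, o in zip(bids, others) if b > o) / T
    vs, bs = [i / 100 for i in range(101)], [i / 100 for i in range(101)]
    return [v for v in vs
            if max(util_fixed(b, v) for b in bs) - util_played(v) <= eps]

random.seed(1)
others = [0.5 * random.random() for _ in range(500)]       # competing bids
bids = [0.3 + random.gauss(0, 0.02) for _ in range(500)]   # observed bids
vals = rationalizable_values(bids, others)
print(f"rationalizable values: [{min(vals):.2f}, {max(vals):.2f}]")
```

The regret, as a function of the candidate value, is convex (a maximum of linear functions minus a linear function), so the rationalizable set is an interval.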

Wednesday, October 14th, 2015

10:00 am – 10:45 am

Efficient market equilibria exist in many settings, assuming that individuals behave as price-takers.  If the participants instead behave strategically, it is often still possible to obtain welfare bounds using the smoothness framework.  In this talk I will draw connections between worst-case smoothness bounds and price-taking behavior.  I will present new bounds and mechanisms that exploit this connection.

I will first describe an extension of the smoothness framework that provides welfare bounds that improve as markets grow large, sometimes to full efficiency.  The analysis considers agents who (perhaps delusionally) imagine that they have no market power and must behave as price-takers.  Our welfare bounds depend on how well this delusion approximates the true game.

In the opposite direction, I will show how to use insights from smoothness to design posted-price mechanisms for welfare.  Typically, a smooth mechanism is prior-independent and comes with a welfare guarantee at Bayes-Nash equilibrium.  Our results demonstrate that by sacrificing prior-independence, one can obtain similar welfare with an ex post incentive compatible mechanism, for a variety of auction problems. 
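The posted-price direction can be illustrated for a single item via the classic prophet-inequality threshold; the Beta value distribution and sample sizes below are arbitrary choices, and the guarantee in the comment is the standard threshold result rather than anything specific to this talk.

```python
import random

# Posting p = E[max value]/2 and serving the first buyer with v >= p is
# ex post incentive compatible and guarantees at least half the expected
# optimal welfare (the classic prophet-inequality bound).

random.seed(5)
n, trials = 5, 50_000
draw = lambda: [random.betavariate(2, 5) for _ in range(n)]

e_max = sum(max(draw()) for _ in range(trials)) / trials
price = e_max / 2

welfare = opt = 0.0
for _ in range(trials):
    vals = draw()
    opt += max(vals)
    taker = next((v for v in vals if v >= price), None)
    if taker is not None:
        welfare += taker
print(f"posted-price welfare / optimal ~ {welfare / opt:.2f} (bound: 0.5)")
```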

11:45 am – 12:15 pm

I consider a setting where multiple sellers of an identical, indivisible good compete for buyers by simultaneously announcing auctions. I partially characterize the equilibria among sellers in this setting. I show that the oligopoly equilibrium may be in simple (second-price) mechanisms, even where the monopolist's optimal auction would have been complex (requiring ironing).

2:00 pm – 2:45 pm

Players have uncertainty over both an external random variable -- such as a security price -- and over each other's beliefs. We study agents’ subjective expectations of the weighted average of others' subjective expectations ... of the weighted average of others' subjective expectations of the external random variable. The weights involved can be viewed as a network. By relating these iterated average expectations to a Markov chain, we characterize their limit properties, generalizing prior results on games with common priors and on complete-information network games. We then apply the conclusions to study coordination games, over-the-counter financial markets, the possibility of rationalizable trade, and the robustness of equilibrium.

This is joint work with Stephen Morris (Princeton).
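The Markov-chain connection can be seen in a few lines: iterated weighted-average expectations are powers of the row-stochastic weight matrix applied to the vector of first-order expectations. The matrix and expectations below are made-up numbers.

```python
# When W is irreducible and aperiodic, iterated expectations converge to
# a consensus value: the stationary-distribution-weighted average of the
# first-order expectations.

W = [[0.0, 0.5, 0.5],        # row i: how agent i weights others' expectations
     [0.3, 0.0, 0.7],
     [0.6, 0.4, 0.0]]
x = [1.0, 2.0, 4.0]          # first-order expectations of the random variable

for _ in range(30):          # x <- W x, one more round of iteration
    x = [sum(w * xi for w, xi in zip(row, x)) for row in W]
print("iterated expectations after 30 rounds:", [round(v, 4) for v in x])
```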

3:15 pm – 3:45 pm

Exclusive social groups are ones in which the group members decide whether or not to admit a candidate to the group. Examples of exclusive social groups include academic departments and fraternal organizations. We introduce an analytic framework for studying the dynamics of exclusive social groups. In our model, every group member is characterized by his opinion, which is represented as a point on the real line. The group evolves in discrete time steps through a voting process carried out by the group's members. Due to homophily, each member votes for the candidate who is more similar to him (i.e., closer to him on the line). An admission rule is then applied to determine which candidate, if any, is admitted. We consider several natural admission rules including majority and consensus.

We ask: how do different admission rules affect the composition of the group in the long term?

We study both growing groups (where new members join old ones) and fixed-size groups (where new members replace those who quit). Our analysis reveals intriguing phenomena and phase transitions, some of which are quite counter-intuitive.

This is a joint work with Noga Alon, Michal Feldman, Yishay Mansour and Moshe Tennenholtz.
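A toy simulation of the model (assuming, for concreteness, that two standard-normal candidates are drawn per step) already shows how strongly the admission rule shapes the group's long-run composition:

```python
import random

# Each member votes for the closer of two random candidates, and the
# admission rule decides who, if anyone, joins.

random.seed(2)

def grow(rule, steps=2000):
    group = [random.gauss(0, 1) for _ in range(3)]
    for _ in range(steps):
        a, b = random.gauss(0, 1), random.gauss(0, 1)
        votes_a = sum(abs(m - a) < abs(m - b) for m in group)
        if rule == "majority":
            group.append(a if 2 * votes_a > len(group) else b)
        elif rule == "consensus":
            if votes_a == len(group):
                group.append(a)
            elif votes_a == 0:
                group.append(b)      # otherwise nobody is admitted
    return len(group), round(min(group), 2), round(max(group), 2)

for rule in ("majority", "consensus"):
    print(rule, "(size, min opinion, max opinion):", grow(rule))
```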

3:45 pm – 4:15 pm

YouTube competes with Hollywood as an entertainment channel, and also supplements Hollywood by acting as a distribution mechanism. Twitter has a similar relationship to news media, and Coursera to universities. But there are no online alternatives for making democratic decisions at large scale as a society. In this talk, we will discuss some of the challenges in and approaches for solving this broad problem. We will first describe an opinion formation process that leads to polarization. We will then describe two algorithmic approaches towards large scale decision making that we are exploring.

a) Triadic consensus: Here, we divide individuals into small groups (say groups of three) and ask them to come to consensus; the results of the triadic deliberations in each round form the input to the next round. We show that this method is efficient and strategy-proof in fairly general settings, whereas no pair-wise deliberation process can have the same properties.

b) Knapsack voting and participatory budgeting: All budget problems are knapsack problems at their heart, since the goal is to pack the largest amount of societal value into a budget. This naturally leads to "knapsack voting" where each voter solves a knapsack problem, or comparison-based voting where each voter compares pairs of projects in terms of benefit-per-dollar. We analyze natural aggregation algorithms for these mechanisms, and show that knapsack voting is weakly strategy-proof. We will also describe our experience with helping implement participatory budgeting in close to a dozen cities and municipalities.

This is joint work with Tanja Aitamurto, Pranav Dandekar, Anilesh Krishnaswamy, David Lee, and Sukolsak Sakshuwong.
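As a concrete illustration of the knapsack-voting aggregation described above (the projects and ballots are hypothetical, and greedy ordering by approvals per dollar is one natural rule, not necessarily the paper's):

```python
# Each voter submits a feasible bundle of projects (their own knapsack
# solution); projects are funded greedily by approval votes per dollar
# until the budget runs out.

budget = 500
costs = {"park": 250, "library": 300, "roads": 200, "wifi": 100}
ballots = [
    {"park", "roads"},
    {"library", "wifi"},
    {"park", "wifi"},
    {"roads", "library"},
    {"park", "roads"},
]

votes = {p: sum(p in b for b in ballots) for p in costs}
funded, remaining = [], budget
for p in sorted(costs, key=lambda p: votes[p] / costs[p], reverse=True):
    if costs[p] <= remaining:
        funded.append(p)
        remaining -= costs[p]
print("funded:", funded)
print("votes per dollar:", {p: round(votes[p] / costs[p], 4) for p in costs})
```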

Thursday, October 15th, 2015

10:00 am – 10:45 am

We study bidding behavior in sealed-bid first-price auctions, and obtain tight lower bounds on winning-bid distributions for arbitrary symmetric distributions of values, allowing for any information structure. The information structure and strategy profile attaining this bound have bidders uncertain about their value, uncertain about whether they will win, indifferent to all higher bids in the support of the bid distribution, and without binding constraints to bid lower. The bidding bounds imply tight lower bounds on revenue and upper bounds on bidder surplus.

The only upper bound on winning bids is the distribution of the highest values. Our results extend if we restrict attention to information structures where bidders know their own values. In this case, we give a non-trivial upper bound on bidder surplus and lower bound on revenue.

11:15 am – 11:45 am

We consider the complexity of finding a _correlated equilibrium_ in an n-player game, in a model that allows the algorithm to make queries for players’ payoffs at pure strategy profiles. Randomized regret-based dynamics are known to yield an approximate correlated equilibrium quickly, namely, in time that is polynomial in the number of players n. Here we show that _both_ _randomization_ and _approximation_ are necessary: no efficient deterministic algorithm can reach even an approximate correlated equilibrium, and no efficient randomized algorithm can reach an exact correlated equilibrium. Joint work with Noam Nisan. 
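One of the randomized regret-based dynamics alluded to above is Hart and Mas-Colell's regret matching; the sketch below runs it on the game of Chicken (parameters are illustrative), where the time-averaged joint play has vanishing conditional regret and therefore approaches the set of approximate correlated equilibria.

```python
import random

U = {("D", "D"): (0, 0), ("D", "C"): (7, 2),    # (row payoff, col payoff)
     ("C", "D"): (2, 7), ("C", "C"): (6, 6)}
ACTS, MU, T = ("D", "C"), 10.0, 50_000

random.seed(6)
last = [random.choice(ACTS), random.choice(ACTS)]
regret = [{(a, b): 0.0 for a in ACTS for b in ACTS if a != b} for _ in range(2)]
counts = {}

for t in range(1, T + 1):
    counts[tuple(last)] = counts.get(tuple(last), 0) + 1
    nxt = []
    for i in (0, 1):
        a, opp = last[i], last[1 - i]
        def u(x):
            return U[(x, opp)][0] if i == 0 else U[(opp, x)][1]
        for alt in ACTS:                # update cumulative conditional regrets
            if alt != a:
                regret[i][(a, alt)] += u(alt) - u(a)
        # switch away from a with probability proportional to positive regret
        choice, r = a, random.random()
        for alt in ACTS:
            if alt != a:
                p = max(regret[i][(a, alt)], 0.0) / (MU * t)
                if r < p:
                    choice = alt
                    break
                r -= p
        nxt.append(choice)
    last = nxt

# empirical joint distribution of play, an approximate correlated equilibrium
print({k: round(v / T, 3) for k, v in sorted(counts.items())})
```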

11:45 am – 12:15 pm

Border’s theorem characterizes the possible (interim) allocation probabilities in a single-item auction.  It has received much interest lately in Algorithmic Mechanism Design, as it allows optimization in mechanism design using polynomial-size linear programs rather than the natural exponential-size ones.  Known generalizations of Border’s theorem beyond the simple case of single-item auctions are either very limited or only approximate. This talk will explain why significant extensions of Border’s theorem are impossible, assuming standard computational complexity assumptions.

Joint work with Parikshit Gopalan and Tim Roughgarden.

2:00 pm – 2:30 pm

We provide a duality-based framework for revenue maximization in a multiple-good monopoly. Our framework shows that every optimal mechanism has a certificate of optimality, taking the form of an optimal transportation map between measures. Using our framework, we prove that grand-bundling mechanisms are optimal if and only if two stochastic dominance conditions hold between specific measures induced by the buyer’s type distribution. This result strengthens several results in the literature, where only sufficient conditions for grand-bundling optimality have been provided. As a corollary of our tight characterization of grand-bundling optimality, we show that the optimal mechanism for n independent uniform items each supported on [c, c + 1] is a grand-bundling mechanism, as long as c is sufficiently large, extending Pavlov’s result for 2 items [Pavlov 2011]. Surprisingly, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c, c + 1] is not a grand bundling mechanism. The necessary and sufficient condition for grand bundling optimality is a special case of our more general characterization result that provides necessary and sufficient conditions for the optimality of an arbitrary mechanism (with a finite menu size) for an arbitrary type distribution.

2:30 pm – 3:00 pm

We consider an agent who has multiple dimensions of private information, and preferences that are additively separable across the various dimensions.  (A canonical example is selling multiple goods to an agent with unknown values.)  The standard revenue-maximization problem in this setting is difficult.  Instead we consider a robust version of the problem: the principal knows the marginal distribution of each dimension of the agent's type, but does not know the joint distribution.  Any mechanism is evaluated by its worst-case expected profit, over all joint distributions consistent with the known marginals.  We show that the optimal mechanism is simply to separate across the dimensions, and use the optimum within each dimension.  This requires no assumptions on the structure of preferences within each dimension.

Friday, October 16th, 2015

10:00 am – 10:45 am

A monopolist sells informative experiments to heterogeneous buyers who face a decision problem. Buyers differ in their prior information, and hence in their willingness to pay for additional signals. The monopolist can profitably offer a menu of experiments. Even under costless acquisition and degradation of information, the optimal menu is quite coarse. The seller offers at most two experiments, and we derive conditions under which flat pricing (one experiment) or discriminatory pricing (two experiments) is optimal.

11:15 am – 11:45 am

We explain how to use concepts from learning theory to make optimal auction theory more operational, replacing the “common prior” assumption with a data-driven approach.  For example, we prove that in arbitrary single-parameter settings, one can learn an auction with expected revenue arbitrarily close to optimal from a polynomial number of samples from an unknown valuation distribution.
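In the simplest case (one bidder, one item) the data-driven recipe is just empirical revenue maximization over reserve prices. The lognormal distribution and sample sizes below are illustrative assumptions.

```python
import random

# Pick the reserve maximizing empirical revenue on samples, then evaluate
# it on fresh draws from the same (unknown) distribution.

random.seed(3)
draw = lambda: random.lognormvariate(0, 0.5)

train = [draw() for _ in range(2_000)]
# the empirically optimal reserve is always one of the sampled values
best_r = max(train, key=lambda r: r * sum(v >= r for v in train))

test = [draw() for _ in range(100_000)]
test_rev = best_r * sum(v >= best_r for v in test) / len(test)
print(f"learned reserve {best_r:.3f}, revenue on fresh samples {test_rev:.3f}")
```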

11:45 am – 12:15 pm

In discrete allocation problems, Walrasian equilibrium prices have a remarkable property: they allow each agent in the market to purchase a bundle of goods that he independently judges to be the most desirable, while guaranteeing that the jointly induced allocation will globally be social welfare maximizing. However, this otherwise clean story has two caveats:

 -- First, the prices themselves may induce a large number of indifferences among agents that need to be resolved in a coordinated manner. Hence, the prices alone are not sufficient to coordinate the market. In fact, it is not hard to see that the minimal Walrasian equilibrium prices always induce indifferences, for any set of bidder valuations, so we cannot simply appeal to "genericity" to eliminate indifferences.

 -- Second, although we know natural procedures which converge to Walrasian equilibrium prices when used on a fixed population, in practice, we observe and interact with prices that were arrived at independently of our own participation in a tâtonnement process. We expect that these prices therefore are not perfect Walrasian equilibrium prices tailored exactly to the individuals in the market, but rather, that they result from some kind of "distributional" information about the market. This exacerbates the issue of coordination.

We give results of two sorts. First, we show a genericity condition under which the (exact) minimal Walrasian equilibrium prices in a commodity market induce allocations which result in vanishing overdemand for any tie breaking rule that buyers may use to resolve their indifferences. Second, we use techniques from learning theory to argue that the overdemand and welfare induced by a price vector converges to its expectation uniformly over the class of all price vectors, with sample complexity only linear in the number of goods in the market (without placing any assumption on the form of the valuation functions of the buyers).

Combining these two results implies that the exact Walrasian equilibrium prices computed in a commodity market (under a mild genericity condition) are guaranteed to induce both low overdemand and high welfare when used in a new market, in which agents are sampled independently from the same distribution, whenever the number of agents is larger than the number of commodities in the market.

Based on joint work and discussion with Justin Hsu, Jamie Morgenstern, Ryan Rogers, and Rakesh Vohra.
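The flavor of the sampling side of the argument can be seen for a fixed price vector (assumed market: unit-demand buyers over two goods with iid uniform values): empirical per-buyer demand concentrates around its expectation as the market grows, and the talk's learning-theory machinery upgrades this pointwise statement to uniform convergence over all price vectors.

```python
import random

# Each sampled buyer purchases a most-preferred affordable good (or
# nothing); the per-buyer demand shares settle as n grows.

random.seed(7)
prices = {"a": 0.6, "b": 0.4}

def chosen_good(values):
    best = max(prices, key=lambda g: values[g] - prices[g])
    return best if values[best] - prices[best] > 0 else None

for n in (100, 1_000, 10_000, 100_000):
    demand = {"a": 0, "b": 0}
    for _ in range(n):
        g = chosen_good({"a": random.random(), "b": random.random()})
        if g is not None:
            demand[g] += 1
    print(n, {g: round(demand[g] / n, 3) for g in prices})
```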


2:00 pm – 2:30 pm

We consider a multi-dimensional screening problem of selling a product with multiple quality levels, and design virtual value functions to derive conditions that imply the optimality of selling only the highest quality. A challenge in designing virtual values for multi-dimensional agents is that a mechanism that pointwise optimizes virtual values resulting from a general application of integration by parts is not incentive compatible, and no general methodology is known for selecting the right paths for integration by parts.  We resolve this issue by first uniquely solving for paths that satisfy certain necessary conditions that the pointwise optimality of the mechanism imposes on virtual values, and then identifying distributions that ensure that the resulting virtual surplus is indeed pointwise optimized by the mechanism. Our method of solving for virtual values is general, and as a second application we use it to derive conditions for the optimality of selling only the grand bundle of items to an agent with additive preferences.

2:30 pm – 3:00 pm

We give an explanation for the relative lack of power of computationally efficient truthful mechanisms: only "simple" social choice functions can be implemented by a truthful mechanism with low communication complexity. This is done by showing that the well-known taxation principle entails a severe communication bottleneck.

Specifically, the taxation principle asserts that any truthful mechanism can be interpreted as follows: every player is presented with a menu that consists of a price for each bundle (the prices depend only on the valuations of the other players). The player is allocated a bundle that maximizes his profit according to this menu. Define the menu complexity of a truthful mechanism M to be the maximal number of menus that may be presented to some player. We show that under some mild conditions low communication complexity implies low menu complexity.

Our approach yields several promising implications, including strengthening the solution concept with low communication overhead, fast computation of prices, and hardness of approximation by computationally efficient truthful mechanisms. We will discuss some of them if time permits.
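To make the taxation principle concrete, here is a sketch of the menu induced by VCG prices in a tiny two-item example; the numbers are made up, and "the other players" are collapsed into a single welfare function for brevity.

```python
from itertools import chain, combinations

# VCG price of bundle S for player 1 = others' best welfare using all
# items, minus their best welfare using the items left over when player 1
# takes S. Player 1 then takes a profit-maximizing menu entry; the menu
# complexity of a mechanism counts how many distinct menus can arise as
# the others' reports vary.

items = ("x", "y")
bundles = list(chain.from_iterable(combinations(items, r) for r in range(3)))

def menu_for_player1(others_value):
    def best(avail):
        return max(others_value.get(frozenset(b), 0)
                   for b in bundles if set(b) <= set(avail))
    return {b: best(items) - best(tuple(set(items) - set(b))) for b in bundles}

others_value = {frozenset({"x"}): 3, frozenset({"y"}): 1, frozenset({"x", "y"}): 4}
menu = menu_for_player1(others_value)
print("menu:", {"".join(b) or "none": p for b, p in menu.items()})

v1 = {(): 0, ("x",): 2, ("y",): 2, ("x", "y"): 5}      # player 1's values
best_bundle = max(bundles, key=lambda b: v1[b] - menu[b])
print("player 1 takes:", "".join(best_bundle) or "none",
      "at price", menu[best_bundle])
```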