Abstracts

Monday, March 26th, 2018

9:30 am – 10:15 am

We consider a distribution grid used to charge electric vehicles subject to voltage drop and various other constraints. We model this as a class of resource-sharing networks known as bandwidth-sharing networks in the communication network literature. Such networks have proved to be an effective flow-level model of data traffic in wired and wireless networks. We focus on resource-sharing networks driven by a class of greedy control rules that can be implemented in a decentralized fashion. For a large number of such control rules, we can characterize the performance of the system, subject to voltage stability constraints, by a fluid approximation. This leads to a set of dynamic equations that take into account the stochastic behavior of cars. We show that the invariant point of these equations is unique and can be computed by solving a specific ACOPF problem, which admits an exact convex relaxation. For the class of weighted proportional fairness controls, we show additional appealing properties under the linearized Distflow model, such as fairness and a product-form property of the stochastic model. In addition, we show how the task of optimizing the weights can itself be formulated as a resource allocation problem. A preprint is available at https://arxiv.org/abs/1711.05561
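As a small illustration of the weighted proportional fairness rule named above, here is a minimal sketch in which a generic linear constraint A x <= c stands in for the (linearized Distflow) voltage constraints; the matrix, weights, and limits are toy values rather than the paper's feeder model.

```python
# A minimal sketch of weighted proportionally fair allocation, assuming a
# generic linear constraint A x <= c as a stand-in for the linearized
# Distflow voltage constraints; all numbers below are toy values.
import cvxpy as cp
import numpy as np

def proportional_fair_allocation(A, c, w):
    """Maximize sum_i w_i * log(x_i) subject to A x <= c (and x > 0)."""
    x = cp.Variable(A.shape[1])
    problem = cp.Problem(cp.Maximize(w @ cp.log(x)), [A @ x <= c])
    problem.solve()
    return x.value

# Three charging EVs sharing two capacity-like constraints (hypothetical).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
c = np.array([10.0, 8.0])
w = np.array([1.0, 2.0, 1.0])   # fairness weights
print(proportional_fair_allocation(A, c, w))
```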

10:45 am – 11:15 am

How to conduct near real-time analytics of streaming data in the smart grid? This talk offers a dynamic systems approach to utilizing emerging data for improved monitoring of the grid. The first example of the talk presents how to leverage the underlying spatio-temporal correlations of synchrophasors for early anomaly (e.g., subsynchronous oscillations) detection, localization, and data quality outlier detection. The second example presents a dynamic systems approach to modeling price responsive demand in real-time energy markets. The underlying theme of the work suggests the importance of integrating data with dynamic physics-based analytics in the context of electric energy systems.

11:15 am – 11:45 am

As the penetration of renewables increases and conventional generators retire, reliability concerns about balancing real-time demand and supply motivate the utilization of demand-side resources. However, residential demand, which constitutes the most significant share of electricity demand, is still underutilized in grid operation. Although industry is conducting pilot studies and academia is developing theories, the two efforts have rarely been connected. This work performs a case study of an actual pilot program and its dataset. Specifically, we train learning models of users’ behavior to enhance demand response (DR) performance for peak load reduction, and we develop a combinatorial multi-armed bandit (MAB) algorithm to exploit demands while exploring their stochastic models. The MAB problem differs from the classic setting in that the objective function is non-monotone: the goal is to maximize reliability, that is, to minimize the difference between the total load reduction and a target value. We propose a learning algorithm and prove that it achieves O(log T) regret given a static target, and o(T) regret when the target is time-varying. Our preliminary results show that the application of modern learning techniques can help improve the performance of DR pilots in practice. We hope that this work can bridge the gap between DR pilots and theoretical analysis, and eventually unlock the potential of residential demands in grid operation.

This work is joint with Yingying Li (Harvard University), Qinran Hu (Harvard University), Jun Shimada (ThinkEco Inc), and Alison Su (ThinkEco Inc).
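To make the exploration-exploitation trade-off concrete, here is a hedged sketch of a combinatorial UCB-style heuristic for the target-matching objective described above; the greedy subset rule and the confidence bonus are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative combinatorial-bandit heuristic: pick a subset of homes whose
# optimistic total load reduction is close to a target, while UCB bonuses
# drive exploration. This is a sketch, not the algorithm from the talk.
import numpy as np

rng = np.random.default_rng(0)
n, T, target = 10, 2000, 6.0
true_mean = rng.uniform(0.5, 2.0, n)     # unknown per-home kW reductions
counts = np.zeros(n)
sums = np.zeros(n)

for t in range(1, T + 1):
    mean = np.divide(sums, counts, out=np.ones(n), where=counts > 0)
    ucb = mean + np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
    # Greedily add homes while the optimistic total moves toward the target;
    # unexplored homes are always included so every home gets sampled.
    chosen, total = [], 0.0
    for i in np.argsort(-ucb):
        if counts[i] == 0 or abs(total + ucb[i] - target) < abs(total - target):
            chosen.append(i)
            total += ucb[i]
    obs = true_mean[chosen] + 0.1 * rng.standard_normal(len(chosen))
    counts[chosen] += 1
    sums[chosen] += obs

print("final-round expected reduction %.2f vs target %.1f"
      % (true_mean[chosen].sum(), target))
```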

11:45 am – 12:15 pm

Cyber-physical systems (CPS) integrate computation with physical processes. Examples of CPS include modern automobiles, avionics, medical devices, robots, power systems, sensor networks, and many more. Many of these systems operate in uncertain, evolving, or partially unknown environments, and even so must provide strong assurances of safety and performance. In this talk, I will discuss algorithmic problems arising in the real-time verification and control of cyber-physical systems. In particular, I will talk about the use of temporal logic for specifying desired and undesired behavior of CPS, and the use of online algorithms and optimization for verification and control of temporal logic properties of CPS.
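As a toy illustration of temporal-logic specifications, the sketch below computes the quantitative robustness of one Signal Temporal Logic formula over a sampled trace; the trace and the formula are invented for illustration.

```python
# A toy Signal Temporal Logic (STL) robustness monitor; the trace and the
# specification below are invented for illustration.
import numpy as np

def rob_pred(x, c):       # robustness of the predicate x(t) > c, per sample
    return x - c

def rob_always(rho):      # "globally" operator: worst case over the horizon
    return np.min(rho)

def rob_eventually(rho):  # "eventually" operator: best case over the horizon
    return np.max(rho)

t = np.linspace(0.0, 10.0, 101)
headway = 3.0 + np.sin(t)                    # toy vehicle-headway signal
rho = rob_always(rob_pred(headway, 1.5))     # spec: G (headway > 1.5)
print("robustness:", rho)                    # positive means satisfied
```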

1:45 pm – 2:15 pm

In this talk, I will present the latest results from my group’s decade-long study of social learning and opinion dynamics. We study the behavioral foundations of non-Bayesian models of learning over social networks and present a taxonomy of conditions for information aggregation in a very general framework. As our main behavioral assumption, we postulate that agents follow social learning rules that satisfy “imperfect recall,” according to which they treat the current beliefs of their neighbors as sufficient statistics for the entire history of their observations. We augment this assumption with various restrictions on how agents process the information provided by their neighbors and obtain representation theorems for the corresponding learning rules (including the popular canonical consensus model of DeGroot). We then obtain general long-run learning results that are not tied to the learning rules’ specific functional forms, thus identifying the fundamental forces that lead to learning, non-learning, and mislearning in social networks. We will also present formal results on the complexity of fully Bayesian information aggregation and learning. If time permits, I will also present our new PhD program in Social and Engineering Systems, an interdisciplinary PhD program that combines information and decision theory with the social sciences to address complex societal problems.

Joint work with Alireza Tahbaz-Salehi (Kellogg School, Northwestern U), Pooya Molavi (MIT Economics), Amin Rahimian (MIT IDSS).
Paper: http://economics.mit.edu/files/14542 (forthcoming, Econometrica, 2018)
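For readers unfamiliar with the DeGroot model mentioned above, a minimal simulation looks as follows; the trust matrix and initial beliefs are toy values.

```python
# Minimal DeGroot consensus model: each agent repeatedly replaces its belief
# with a weighted average of its neighbors' current beliefs, the simplest
# instance of the "imperfect recall" learning rules discussed in the talk.
import numpy as np

W = np.array([[0.50, 0.50, 0.00],    # row-stochastic trust weights (toy)
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
beliefs = np.array([0.0, 0.5, 1.0])  # initial beliefs (hypothetical)

for _ in range(50):
    beliefs = W @ beliefs            # one round of neighbor averaging
print(beliefs)  # converges to a consensus weighted by network centrality
```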

2:15 pm – 2:45 pm

We consider the problem of separating error messages generated in large distributed data center networks into error events. In such networks, each error event leads to a stream of messages generated by all network components affected by the event. These messages are stored in a giant message log, with no information about the associated events. We study the unsupervised learning problem of identifying the signatures of the events that generated these messages; here, the signature of an error event refers to the probability distribution of messages generated by the event. We design a low-complexity algorithm for this purpose, and demonstrate its scalability on a real dataset consisting of 97 million messages collected over a period of 15 days, from a distributed data center network which supports the operations of a large wireless service provider.
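As one simple baseline for this signature-extraction task (not the authors' low-complexity algorithm), one could bin messages over time and factor the resulting count matrix with non-negative matrix factorization; the synthetic log below is illustrative.

```python
# A simple baseline (not the talk's algorithm) for recovering event
# "signatures" from a message log: bin messages over time, then factor the
# time-bin x message-type count matrix with NMF. Each factor row, after
# normalization, is a distribution-like signature over message types.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Synthetic log: 200 time bins, 50 message types, 3 latent event types.
true_sig = rng.dirichlet(np.ones(50), size=3)       # true event signatures
activity = rng.poisson(2.0, size=(200, 3))          # events active per bin
counts = rng.poisson(activity @ (30 * true_sig))    # observed message counts

model = NMF(n_components=3, init="nndsvda", max_iter=500)
A = model.fit_transform(counts)                     # per-bin event activity
S = model.components_
S = S / S.sum(axis=1, keepdims=True)                # normalize to signatures
print(S.shape)  # (3, 50): one recovered signature per event type
```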

2:45 pm – 3:15 pm

We consider the problem of designing `Learning to Control' algorithms for stochastic systems when the model parameters are unknown. Two models are considered: Markov decision processes and linear stochastic systems. A Thompson-sampling-based regret-minimization learning framework is developed for trading off exploration vs. exploitation. Sampling from a posterior distribution on the unknown parameters at regular intervals provides the necessary exploration for learning. This obviates the need for expensive exploration computations and makes the algorithms suitable for real-time decision-making and learning. The key is designing a suitable exploration schedule. The proposed algorithms achieve O(sqrt{T}) expected regret, which is order-optimal. The approach differs from classical adaptive control in its focus on non-asymptotic regret optimality as opposed to asymptotic stability. Numerical evaluation suggests robust performance of the algorithms.
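The posterior-sampling principle behind the framework is easiest to see in the simplest possible setting. The sketch below runs Thompson sampling on a Bernoulli bandit; the talk's algorithms apply the same idea, on a carefully designed schedule, to MDPs and linear stochastic systems.

```python
# Thompson sampling on a Bernoulli bandit: sample parameters from the
# posterior, then act greedily with respect to the sample. A minimal
# illustration of posterior-sampling exploration, not the talk's algorithms.
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.7])      # unknown success probabilities
alpha = np.ones(3)                      # Beta(1,1) priors per arm
beta = np.ones(3)

for t in range(5000):
    theta = rng.beta(alpha, beta)       # one posterior sample per arm
    a = int(np.argmax(theta))           # greedy w.r.t. the sample
    r = rng.random() < true_p[a]        # Bernoulli reward
    alpha[a] += r                       # conjugate posterior update
    beta[a] += 1 - r

print("posterior means:", alpha / (alpha + beta))
```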

3:45 pm – 4:15 pm

The success of cancer treatment depends heavily on early diagnosis; unfortunately, modern detection still relies on inaccurate or invasive procedures, and so early detection remains an open problem. We propose a simple, accurate proteomic blood test (a ‘liquid biopsy’) for cancer detection. We conduct experiments on cryogenically preserved plasma from healthy patients in a longitudinal study who were later diagnosed with cancer. These experiments demonstrate that our test achieves diagnostically significant sensitivities and specificities for many types of cancer in their earliest stages, using only plasma.

Our biopsy relies on multiplexing observations across thousands of blood proteins and multiple reagents. This yields sparse matrix-valued observations for each patient. We show that ‘de-noising’ this sparse raw data is critical to achieving acceptable diagnostic accuracy, and further that traditional approaches to de-noising (algorithms such as PCA) fail. Instead, we rely on a new approach to noisy tensor recovery, which we dub ‘slice learning’. Slice learning admits near-optimal recovery guarantees that, in an important regime, represent an order improvement over the best available results for tensor recovery. Applied to the design of our biopsy, it yields sensitivity and specificity levels that are potentially sufficient for a clinical diagnostic tool.
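To illustrate the de-noising step on a matrix stand-in (the paper's slice-learning estimator exploits the tensor's slice structure more carefully), here is a minimal singular-value-thresholding sketch on synthetic low-rank data with the rough shape of the biopsy measurements.

```python
# Singular value thresholding (SVT) as a generic de-noising baseline on a
# synthetic low-rank matrix; an illustration of the low-rank recovery idea,
# not the paper's slice-learning estimator.
import numpy as np

def svt(M, tau):
    """Soft-threshold the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# 100 patients x (20 proteins * 5 reagents), rank-3 signal plus noise.
signal = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 100))
noisy = signal + 0.5 * rng.normal(size=signal.shape)
denoised = svt(noisy, tau=2.0 * np.sqrt(100))
print(np.linalg.norm(denoised - signal) / np.linalg.norm(signal))
```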

Tuesday, March 27th, 2018

9:30 am – 10:15 am

For the first time in history, more than half of the world’s population lives in urban areas; in just a few more decades, the world's population will exceed 9 billion, 70 percent of whom will live in cities. Enabling those cities to deliver services effectively, efficiently, and sustainably while keeping their citizens safe, healthy, prosperous, and well-informed will be among the most important undertakings of this century. I will review how we have established a center for urban science with a focus on bringing informatics to the study and operation of urban systems. I will touch on the rationale, the structure, and the substance of the Center’s work and the ways in which it will enrich NYC and contribute to global issues. Taxis, lights, phones, and buildings will all enter into the discussion in novel ways.

10:45 am – 11:15 am

Intersections are hazardous places. Threats arise from interactions among pedestrians, bicycles, and vehicles; more complicated vehicle trajectories in the absence of lane markings; signal phases that leave unclear who has the right of way; invisible vehicle approaches; vehicle obstructions; and illegal movements. These challenges are not fully addressed by the “road diet” and road redesign prescribed in Vision Zero plans. Nor will they be completely overcome by autonomous vehicles, with their many on-board sensors and tireless attention to sensor readings. Accidents can also occur because drivers, bicyclists, and pedestrians do not have the information they need to avoid wrong decisions. In these cases, the missing information can be calculated and communicated by an intelligent intersection. The information gives the current full signal phase, an estimate of the time when the phase will change, the occupancy of the blind spots of the driver or autonomous vehicle, and detection of red-light violators. The talk develops a design for the intelligent intersection, motivated by the analysis of an accident at an intersection in Tempe, AZ, between an automated Uber Volvo and a manually driven Honda CRV. The intelligent intersection functions as a ‘protected intersection’ using I2V communication.

11:15 am – 11:45 am

We propose and discuss several decentralized traffic signal control policies for urban road networks. We discuss issues that arise in transferring control policies traditionally used in communication systems to vehicular traffic, and present counterexamples previously unknown in both literatures. Further, simulations are conducted to compare distributed control policies in various traffic and network scenarios.
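One well-known example of such a policy transferred from communication networks is max-pressure (BackPressure) signal control; the toy junction below is an illustrative assumption, not necessarily one of the talk's policies.

```python
# Max-pressure signal control at a single toy junction: activate the phase
# whose served movements have the largest total queue pressure (own queue
# minus the downstream queue it feeds). Illustrative values throughout.
queues = {"NS": 12, "SN": 7, "EW": 4, "WE": 9}      # vehicles per movement
downstream = {"NS": 2, "SN": 0, "EW": 1, "WE": 5}   # queues vehicles enter next
phases = {"phase_NS": ["NS", "SN"], "phase_EW": ["EW", "WE"]}

def pressure(phase):
    # Pressure of a phase = sum over its movements of (queue - downstream queue).
    return sum(queues[m] - downstream[m] for m in phases[phase])

best = max(phases, key=pressure)
print(best, {p: pressure(p) for p in phases})
```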

11:45 am – 12:15 pm

There is a data type, which we may call "hierarchical log data", that arises across a wide range of societal networks. It consists of event records generated during a complex, structured activity by an agent. Examples: a commuter makes trips, and for each trip tap-in and tap-out events are recorded; a user sends requests to a web service, and this triggers a cascade of internal requests in a data center, each of which is logged; a person engages in a sequence of purchases to complete a project, and the credit card purchases are all logged.

Hierarchical log data is central to the understanding of societal networks. Yet it has received very little attention in the database community, in data science, or in machine learning. I will describe the challenges, share a dataset, and suggest some ways forward.
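To fix ideas, a hierarchical log record for the transit example might look like the sketch below; the field names are illustrative, not a proposed schema.

```python
# Illustrative shape of a hierarchical log record: a top-level activity
# (a commuter's trip) containing nested event records. Hypothetical fields.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    kind: str          # e.g. "tap_in", "tap_out"
    timestamp: float   # seconds since epoch
    station: str

@dataclass
class Trip:
    agent_id: str
    events: List[Event] = field(default_factory=list)

trip = Trip("commuter_42", [
    Event("tap_in", 1522072800.0, "Downtown"),
    Event("tap_out", 1522074300.0, "Airport"),
])
print(len(trip.events), "events in", trip.agent_id, "trip")
```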

1:45 pm – 2:15 pm

Very large amounts of renewable electricity generation, combined with a large penetration of plug-in electric vehicles, may cause considerable stress to electrical grids and to the system of spinning reserves. This problem can be solved by controlling the huge number of electrical resources that are located in distribution grids, such as thermal loads, stationary batteries, charging stations and curtailable power generators. However, this poses a number of new challenges in terms of online computation, scalability and reliability. In this talk we discuss how these challenges are solved by COMMELEC, a system of real-time software agents developed at EPFL and deployed in several grids. We discuss how real-time load-flows can be computed with guaranteed convergence, how uncertainty about power injections impacts controllability of the grid state, and how active replication of real-time controllers can be supported.

2:15 pm – 2:45 pm

The problem is to optimize the appointments of customers with a single server to minimize a combination of idle time, waiting time, and overtime.  We consider static appointments (come at 10:30am) and dynamic appointments (I will warn you one hour before you should come).  The processing times of the customers are random and have unknown distributions. We design learning algorithms for situations where similar processing tasks repeat over time.
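The cost structure is easy to make concrete. The sketch below evaluates a static schedule against one random draw of service times using the standard idle/wait/overtime recursion; the cost weights and service distribution are hypothetical.

```python
# Evaluate a static appointment schedule under random service times: compute
# server idle time, customer waiting time, and overtime past the end of day.
# Cost weights and the service distribution are hypothetical choices.
import numpy as np

def schedule_cost(appts, services, day_end, c_idle=1.0, c_wait=1.0, c_over=2.0):
    finish, idle, wait = 0.0, 0.0, 0.0
    for a, s in zip(appts, services):
        idle += max(0.0, a - finish)   # server sits idle until the customer
        start = max(a, finish)         # customer may wait for the server
        wait += start - a
        finish = start + s
    overtime = max(0.0, finish - day_end)
    return c_idle * idle + c_wait * wait + c_over * overtime

rng = np.random.default_rng(0)
appts = np.arange(0, 8, 0.5)                       # every 30 min over 8 hours
services = rng.gamma(2.0, 0.25, size=len(appts))   # one random scenario
print(schedule_cost(appts, services, day_end=8.0))
```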

2:45 pm – 3:15 pm

Recently the FCC released 3.85 GHz of licensed spectrum and 7 GHz of unlicensed spectrum in the frequency band above 24 GHz. This massive spectrum can potentially unleash enormous information flows. Transmissions in this millimeter wave band are directional and not subject to typical omni-directional interference, but are subject to absorption.

So motivated, we consider unreliable multi-hop wireless networks carrying many flows, where each flow has an end-to-end deadline. Define the "timely-throughput" vector as the throughput vector of packets that meet their respective end-to-end deadlines.

Several fundamental questions are of interest. What are the maximal timely-throughput vectors that an unreliable multi-hop wireless network can support? What are the optimal scheduling policies? Since optimal scheduling can require knowledge of the instantaneous network state, which in turn requires communication with zero latency, giving rise to a chicken-and-egg conundrum, is there any hope that fully distributed policies not requiring complete network state can be optimal?

We introduce a price-based scheduling policy for scheduling multiple flows with hard end-to-end deadlines, over a network of unreliable links, with average power constraints at nodes. This policy is easily computed and fully distributed, and it optimizes the throughput vector subject to the hard end-to-end deadlines. We also characterize the set of all maximal timely-throughput vectors.
[Joint work with Rahul Singh]

3:45 pm – 4:15 pm

The use of simulation as a computational tool arises in many real-time decision-making settings. Such problems rarely lend themselves to steady-state formulations. As a consequence, the computations typically involve “transient” calculations and the answers therefore depend heavily on how one chooses to initialize the simulation. Typically, the initialization will need to be aligned with the current “state” of the real-world system under consideration. However, in most settings, the state variables underlying the simulation will differ significantly from the real-world state that is observable. In this talk, we will examine this data assimilation issue, discuss some of the statistical and computational challenges, and introduce several relevant algorithms that arise in this context.

Wednesday, March 28th, 2018

9:30 am – 10:15 am

Selfish behavior can often lead to suboptimal outcomes for all participants. This is especially true in dynamically changing environments where the game or the set of participants can change at any time without the players even realizing it. Over the last decade we have developed a good understanding of how to quantify the impact of strategic user behavior on overall performance by studying stable Nash equilibria of the games. In this talk we will consider the quality of outcomes in games when the population of players is dynamically changing and participants have to adapt to the dynamic environment. We show that in large classes of games (including traffic routing), if players use a form of learning that helps them adapt to the changing environment, high social welfare is guaranteed, even under very frequent changes. Joint work with Thodoris Lykouris and Vasilis Syrgkanis.

10:45 am – 11:15 am

Machine Learning and Data Science (ML) is starting to take the place in industry that "Information Technology" had in the late 1990s: businesses of all sizes and in all sectors are recognizing how necessary it has become to develop predictive capabilities for the continued profitability of their core competencies. To be effective, ML algorithms rely on high-quality training data, and not just any data, but data that is specific to the business problem at hand. Obtaining relevant training data can be very difficult for firms to do themselves, especially those early in their path towards incorporating ML into their operations. The problem is further exacerbated as businesses increasingly need to solve these prediction problems in real time (e.g., a ride-share company setting prices, or retailers and restaurants sending targeted coupons to clear inventory), which means that data gets “stale” quickly. It is therefore imperative that real-time market structures exist for the buying and selling of training data for ML. Further, it is insufficient to view ML performance metrics (e.g., RMSE) in isolation from real-world applications; for example, a 10% increase in prediction accuracy means very different things for a hedge fund maximizing profits, a retailer decreasing inventory costs, or a hospital trying to save lives. Hence the value of a dataset must consider more than simply the prediction accuracy it provides. Domain knowledge will be just as essential, if not more so, if we aim to view data as an asset and create a rigorous method to define its value.

In this work, we aim to create a data marketplace: a robust matching mechanism to efficiently buy and sell data while optimizing social welfare and maximizing revenue. While the monetization of data and pre-trained models is an essential focus of many industries and vendors today, there does not exist a market mechanism that can price data and match suppliers to vendors while addressing the (computational and other) complexity associated with creating a market platform. The challenge in creating such a marketplace stems from the very nature of data as an asset: (i) it can be replicated at zero marginal cost; (ii) its value to a firm is inherently combinatorial (i.e., the value of a particular dataset depends on which other, potentially correlated, datasets are available); (iii) its value to a firm depends on which other firms get access to the same data; (iv) prediction tasks and the value of an increase in prediction accuracy vary widely between firms, so it is not obvious how to set prices for a collection of datasets with correlated signals; (v) finally, the authenticity and truthfulness of data are difficult to verify a priori, before applying it to a prediction task. Our proposed marketplace takes a holistic view of this problem and provides an algorithmic solution combining concepts from statistical machine learning, the economics of data in various application domains, algorithmic market design, and mathematical optimization under uncertainty. We will discuss some examples motivating this work.

This is joint work with Anish Agarwal, Tuhin Sarkar, and Devavrat Shah.

11:15 am – 11:45 am

This talk considers trial-offer markets where customers can try products before buying them and where market makers display social signals such as market shares. Consumer preferences are modelled by a multinomial logit with social influence and position bias, and the social signal for a product is given by its current market share raised to the power r. The talk reviews optimal (and non-optimal) policies for a variety of settings depending on the value of r. The results make interesting connections between discrete choice models, assortment optimization, and Robbins-Monro algorithms in stochastic approximation. A number of open problems will also be presented. The results also show that various policies can effectively control the social signals to benefit both consumers and the market, addressing a critical issue identified in prior seminal work.
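A minimal sketch of the choice model described above, with the social signal entering as the market share raised to the power r; the appeal values, position biases, and shares are toy numbers, and exact functional details vary across settings.

```python
# Choice probabilities for a trial-offer market: multinomial-logit appeal,
# multiplied by a position bias and by market share raised to the power r.
# All numbers are illustrative.
import numpy as np

def choice_probabilities(appeal, shares, pos_bias, r):
    # score_i = position_bias_i * share_i^r * appeal_i, then normalize.
    scores = pos_bias * (shares ** r) * appeal
    return scores / scores.sum()

appeal = np.array([1.0, 2.0, 1.5])     # intrinsic product qualities
shares = np.array([0.5, 0.3, 0.2])     # current market shares (the signal)
pos_bias = np.array([1.0, 0.7, 0.4])   # decaying visibility by list position
for r in (0.0, 1.0, 2.0):              # r = 0 removes social influence
    print(r, choice_probabilities(appeal, shares, pos_bias, r).round(3))
```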

11:45 am – 12:15 pm

Many modern data-intensive computational problems either require, or benefit from, distance or similarity data that adhere to a metric: the algorithms run faster or have better performance guarantees. Unfortunately, in real applications the data are messy and the values are noisy, and the distances between data points are far from satisfying a metric. There are a number of different algorithms for finding the closest set of distances to the given ones that also satisfy a metric (sometimes with the extra condition of being Euclidean). These algorithms can have unintended consequences: they can change a large number of the original data points and alter many other features of the data.

The goal of sparse metric repair is to make as few changes as possible to the original data set or underlying distances so as to ensure that the resulting distances satisfy the properties of a metric. In other words, we seek to minimize the sparsity (the l0 “norm”) of the changes we make to the distances, subject to the new distances satisfying a metric. We give three different combinatorial algorithms to repair a metric sparsely. In one setting the algorithm is guaranteed to return the sparsest solution, and in the other settings the algorithms still repair the metric. Without prior information, the algorithms run in time proportional to the cube of the number of input data points; with prior information, the running time can be reduced considerably.
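For contrast with the sparse-repair goal, the classical decrease-only repair simply replaces every distance by the shortest-path distance, which always produces a metric but may change many entries, which is exactly what sparse metric repair seeks to avoid. A minimal sketch:

```python
# Decrease-only metric repair via the Floyd-Warshall metric closure:
# D[i,j] <- min over k of D[i,k] + D[k,j]. The result always satisfies the
# triangle inequality; it is a baseline, not the talk's sparse algorithms.
import numpy as np

def decrease_only_repair(D):
    D = D.copy()
    for k in range(D.shape[0]):
        # Broadcast column k against row k to relax all pairs through k.
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

D = np.array([[0, 2, 9],
              [2, 0, 3],
              [9, 3, 0]], dtype=float)  # 9 > 2 + 3 violates the triangle inequality
print(decrease_only_repair(D))          # the 9s shrink to 5
```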

1:45 pm – 2:15 pm

Network pricing games provide a framework for modeling real-world settings with two types of strategic agents: owners (operators) of the network and users of the network. Owners of the network post a price for usage of the link they own so as to attract users and maximize profit; users of the network select routes based on price and on the level of use by other users. We point out that an equilibrium in these games may not exist, may not be unique, and may induce arbitrarily inefficient network performance.

Our main result is to observe that a simple regulation of the network owners' market solves all three issues above. Specifically, if an authority could set appropriate caps (upper bounds) on the tolls (prices) operators can charge, then the game among the link operators has a unique and strong Nash equilibrium, and the users' game results in a Wardrop equilibrium that achieves the optimal total delay. We call any price vector with these properties a great set of tolls. As a secondary objective, we want to compute great tolls that minimize total users' payments, and we provide a linear program that does this. We obtain multiplicative approximation results compared to the optimal total users' payments for arbitrary networks with polynomial latencies of bounded degree, while in the single-commodity case we obtain a bound that depends only on the topology of the network. Lastly, we show how the same mechanism of setting appropriate caps on the allowable prices extends to the model of elastic demands.

2:15 pm – 2:45 pm

I will discuss what happens in a transportation network when travel times are uncertain and users who want to route between their respective sources and destinations are risk-averse. I'll propose a measure quantifying how much the degree of risk aversion degrades system performance (measured as the total expected delay of all users), separately from the effect of the selfish routing choices of the users. I will conclude with the effect of user diversity on the quality of the resulting traffic equilibria, and specifically with when diversity of user preferences improves outcomes in selfish routing.

2:45 pm – 3:15 pm

We provide a unified variational inequality framework for the study of fundamental properties of the Nash equilibrium in network games. We identify several conditions on the underlying network (in terms of spectral norm, infinity norm and minimum eigenvalue of its adjacency matrix) that guarantee existence, uniqueness, convergence and continuity of equilibrium in general network games with multidimensional and possibly constrained strategy sets. We delineate the relations between these conditions and characterize classes of networks that satisfy each of these conditions.

This is joint work with Francesca Parise.
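The three matrix quantities the conditions are stated in terms of are easy to compute; the snippet below evaluates them for a toy adjacency matrix (the thresholds they must satisfy depend on the game's payoff structure and are not reproduced here).

```python
# Compute the network quantities named in the abstract for a toy adjacency
# matrix: spectral norm, infinity norm, and minimum eigenvalue. Illustrative
# only; the conditions on each depend on the specific game.
import numpy as np

G = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # toy undirected network

spectral_norm = np.linalg.norm(G, 2)        # largest singular value
infinity_norm = np.linalg.norm(G, np.inf)   # maximum absolute row sum
min_eigenvalue = np.linalg.eigvalsh(G).min()  # smallest eigenvalue (G symmetric)
print(spectral_norm, infinity_norm, min_eigenvalue)
```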

3:45 pm – 4:15 pm

Many platforms are characterized by the fact that future user arrivals are likely to have preferences similar to those of users who were satisfied in the past. In other words, arrivals exhibit positive externalities. We study multi-armed bandit (MAB) problems with positive externalities. Our model has a finite number of arms, and users are distinguished by the arm(s) they prefer. We model positive externalities by assuming that the preferred arms of future arrivals are self-reinforcing based on the experiences of past users. We show that classical algorithms such as UCB, which are optimal in the classical MAB setting, may exhibit linear regret in the presence of positive externalities. We provide an algorithm that achieves optimal regret and show that this optimal regret has a substantially different structure from that observed in the standard MAB setting.

Joint work with Virag Shah and Jose Blanchet.
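A toy simulation of the self-reinforcing arrival process is shown below; the urn-style reinforcement rule and the naive uniform platform policy are illustrative assumptions, not the paper's model or algorithm.

```python
# Self-reinforcing arrivals: each satisfied user of a type makes future
# arrivals of that type more likely (a Polya-urn-style dynamic), so early
# platform decisions can tilt the whole arrival process. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_arms = 2
satisfied = np.ones(n_arms)          # pseudo-counts of happy users per type

for t in range(10000):
    # A user's preferred arm is drawn in proportion to past satisfied users.
    pref = rng.choice(n_arms, p=satisfied / satisfied.sum())
    arm = rng.integers(n_arms)       # a naive (uniform) platform policy
    if arm == pref:                  # user satisfied only when matched
        satisfied[pref] += 1

print("share of arrivals preferring each arm:", satisfied / satisfied.sum())
```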

4:15 pm – 5:00 pm

Distributed Energy Resources are being progressively adopted by consumers and are significantly changing how we monitor and optimize power distribution networks. Major challenges include learning accurate models of distribution networks to incorporate in system management, optimizing resource coordination under realistic data-ownership and communication constraints (or in a model-free manner), and deciding where to place resources under realistic system models. In this talk, we introduce various formulations inspired by practical applications and real data, and review algorithms that address them. We list various open algorithm-design and performance-analysis questions related to these problems.

Thursday, March 29th, 2018

9:30 am – 10:15 am

Ridesharing platforms use dynamic pricing (so-called "surge pricing") to raise prices when demand from riders outstrips drivers' availability.  This is intended to reduce rider demand and increase the number of drivers.   While short-run price increases do clearly influence riders, anecdotal evidence on whether they move drivers is mixed, with experienced drivers counselling against "chasing surge".  In this talk, we study this short-run impact of dynamic pricing on drivers' relocation decisions.  Using a natural experiment created by a surge pricing service outage affecting a portion of Uber's driver-partners, we study how visibility of the surge heatmap affects 1) drivers' decisions to relocate toward areas with higher prices and 2) drivers' earnings.  We demonstrate that surge heatmap visibility has a statistically significant impact on both outcomes.  Across 10 major cities it increases drivers' earnings, attracts drivers toward areas with higher surge prices, and explains 10%-60% of Uber drivers' self-positioning decisions.

10:45 am – 11:15 am

Ridesharing services have had a profound impact on transportation throughout the world and are poised to have an even greater impact in the future with the advent of self-driving cars. Ridesharing represents not only a technological innovation in transportation but also an economic innovation in how transportation is priced. In this talk, we examine the market design innovations in ridesharing and discuss pricing challenges for the industry as it matures.

11:15 am – 11:45 am

This talk presents inference, control, and game-theoretic algorithms developed to improve traffic flow in transportation networks. The talk will investigate various factors that intervene in decisions made by travelers in large-scale urban environments. We will discuss disruptions in demand due to the rapid expansion of the use of “selfish routing” apps, and how they affect urban planning. These disruptions cause congestion and make traditional approaches to traffic management less effective. Game-theoretic approaches to demand modeling will be presented. These models encompass heterogeneous users (some using routing information, some not) who share the same network and compete for the same commodity (capacity). Results will be presented for static loading, based on Nash-Stackelberg games, and in the context of repeated games, to account for the fact that routing algorithms learn the dynamics of the system over time as users change their behavior. The talk will present some potential remedies envisioned by planners, which range from incentivization to regulation.

11:45 am – 12:15 pm

The dispatch problem is a critical primitive of ridesharing systems: upon receiving a ride request, the platform must choose a nearby car to serve the ride while ensuring that it maximizes vehicle availability in the future. The problem has close connections to optimal control of stochastic processing networks, as well as to online stochastic bipartite matching and the k-server problem. I will talk about some recent work in which we developed new state-independent and state-dependent control policies for this problem, with strong approximation guarantees. These results are based on ideas from product-form Markov chains and large-deviations theory; I will briefly describe these new techniques and outline some open questions and potential connections with other online decision problems.

Joint work with Daniel Freund and Thodoris Lykouris, and Yash Kanoria and Pengyu Qian.

1:45 pm – 2:15 pm

We consider the task of interpolating and forecasting a time series in the presence of noise and missing data. As the main contribution of this work, we introduce an algorithm that transforms the observed time series into a matrix, utilizes singular value thresholding to simultaneously recover missing values and de-noise observed entries, and performs linear regression to make predictions. We argue that this method provides meaningful imputation and forecasting for a large class of models: finite sums of harmonics (which approximate stationary processes), non-stationary sub-linear trends, Linear Time-Invariant (LTI) systems, and their additive mixtures. In general, our algorithm recovers the hidden state of the dynamics from its noisy observations, as in a Hidden Markov Model (HMM), provided the dynamics obey the models stated above. As an important application, it provides a robust algorithm for "synthetic control" in causal inference.
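The three-step pipeline above is concrete enough to sketch. Below is a minimal numpy version, assuming a Hankel-style stacking of the series, hard thresholding of the singular values, and a last-row regression; the window length, rank, and synthetic signal are illustrative choices, not the paper's.

```python
# Sketch of the matrix-based pipeline: stack the series into a Hankel-like
# matrix, de-noise it by keeping the dominant singular values, then regress
# each column's last entry on the entries above it to forecast one step.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(0.2 * t) + 0.5 * np.sin(0.05 * t) \
         + 0.3 * rng.standard_normal(400)          # harmonics plus noise

L = 20                                             # window length (rows)
X = np.column_stack([series[i:i + L] for i in range(len(series) - L + 1)])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 6                                              # illustrative rank choice
X_hat = (U[:, :k] * s[:k]) @ Vt[:k]                # de-noised matrix

# Regress the last row on the rows above it, then forecast one step ahead.
beta, *_ = np.linalg.lstsq(X_hat[:-1].T, X_hat[-1], rcond=None)
forecast = series[-(L - 1):] @ beta
print("one-step forecast: %.3f" % forecast)
```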

2:15 pm – 2:45 pm

The theory of matching has a long history in economics, mathematics, and graph theory, with applications in many other fields. However, most of the research considers the static setting. In recent years, with the increased popularity of online advertising, various online matching models have been proposed that consider random arrivals of one population, while the other is static.
In this talk we consider extensions to fully dynamic bipartite matching models, in which both populations are given by stochastic processes. This line of work combines classical matching theory with queueing and inventory theory. The main departure from traditional queueing networks is that there is no notion of servers: instead of "service", activities correspond to instantaneous matching of a particular unit of supply with a unit of demand. One of the most compelling applications is kidney paired donation through the United Network for Organ Sharing.
We will present recent results for the FCFS policy, in which each arriving item is matched to the earliest compatible unmatched item in the system. Then we will consider an infinite-horizon average-cost optimal control problem. A relaxation of the stochastic control problem will be discussed, which turns out to be a special case of an inventory model as treated in the classical theory of Clark and Scarf. The optimal policy for the relaxation admits a closed-form expression. Based on the policy for this relaxation, we propose a new matching policy. For a parameterized family of models in which the network load approaches capacity, this policy is shown to be approximately optimal, with bounded regret, even though the average cost grows without bound.
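For concreteness, the sketch below simulates the FCFS policy on a toy two-class model; the compatibility structure and arrival rates are invented for illustration.

```python
# FCFS dynamic matching: items of two demand classes and two supply classes
# arrive over time; each arrival is matched to the earliest compatible
# unmatched item waiting in the system. Toy compatibility and uniform rates.
import random

random.seed(0)
compatible = {("D1", "S1"), ("D1", "S2"), ("D2", "S2")}  # demand-supply pairs
waiting = []        # unmatched items, oldest first
matches = 0

for t in range(10000):
    cls = random.choice(["D1", "D2", "S1", "S2"])        # next arrival
    side = cls[0]                                        # "D" or "S"
    for i, other in enumerate(waiting):                  # earliest first
        pair = (cls, other) if side == "D" else (other, cls)
        if pair in compatible:
            waiting.pop(i)
            matches += 1
            break
    else:
        waiting.append(cls)                              # no compatible item yet

print("matches:", matches, "still waiting:", len(waiting))
```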

2:45 pm – 3:15 pm

We study the efficacy of match recommendation in reducing congestion in two-sided matching markets with private preferences. We measure congestion by the number of bits of information about their preferences that agents need to learn and communicate with others to form the final matching. Previous results suggest that a high level of congestion is inevitable under arbitrary preferences. We show that when the unobservable component of agent preferences satisfies certain natural assumptions, it is possible to recommend potential matches such that 1) agents have negligible incentives to look beyond their set of recommended partners, 2) the market reaches an equilibrium outcome, and 3) the overall communication overhead is small. The main idea is to recommend only partners whom the agent has a non-negligible chance of both liking and being liked by, as estimated from the observable component of preferences and from prior expressions of interest by agents on the other side, which are based on the unobservable component of their preferences.

Joint work with Yash Kanoria, Mark Braverman, and Peng Shi.

3:45 pm – 4:15 pm

For decades, power systems academics have proclaimed the need for real-time prices to create a more efficient grid. The rationale is economics 101: proper price signals will lead to an efficient outcome.

In this talk we will review a bit of economics 101, in particular the definition of efficiency. We will see that the theory supports the real-time price paradigm, provided we impose a particular model of rationality. It is argued, however, that this standard model of consumer utility does not match reality: the products of interest to the various "agents" are complex functions of time. The product of interest to a typical consumer is only loosely related to electric power, the quantity associated with price signals.

There is good news: an efficient outcome is easy to describe, and we have the control technology to achieve it. We need supporting market designs that respect the dynamics and the fixed costs inherent in power systems engineering, recognizing that we need incentives on many time scales. Most likely the needed economic theory will be based on an emerging theory of efficient and robust contract design.

4:15 pm – 5:00 pm

The rapid penetration of renewable resources into the electricity supply mix poses challenges for the efficient dispatch of resources, due to the inherent uncertainty and variability of such resources. Hence, in order to accommodate large amounts of renewables, it is necessary to account for their output uncertainty and to mobilize the flexibility embedded in conventional generation, demand-side resources, and the transmission grid. In this talk we formulate a stochastic unit commitment optimization in which we expand the traditional recourse actions that are available to mitigate the adverse effects of renewables variability. In particular, we include among these recourse actions topology control through transmission switching and variable line ratings that account for the heating and cooling of transmission lines. We will demonstrate the potential gains from such recourse actions through test cases and discuss heuristic approaches for alleviating the computational burden resulting from such formulations.