We will sketch a recent result that provides an $O(\log^2 k \cdot \log n)$-competitive algorithm for the k-server problem.
Joint work with Sebastien Bubeck, Michael B. Cohen, James Lee, and Yin-Tat Lee.
Monday, December 4th, 2017
Traditional online algorithms typically deal with either (1) inputs arriving in a sequence, where the goal is to minimize the total cost, or (2) inputs arriving over time, where the goal is to minimize some function of time. Following the work of Emek, Kutten and Wattenhofer (STOC'16), we consider problems where the goal is to minimize the cost plus the waiting time (delay) until requests are served. In particular, we deal with online min-cost matching with delays (matching players in game platforms) and online two-sided min-cost matching with delays (matching customers to drivers in transportation platforms such as Uber/Lyft). We provide logarithmic competitive algorithms for these problems.
Based on joint work with Chiplunkar, Kaplan (SODA '17), and Ashlagi, Charikar, Chiplunkar, Geri, Kaplan, Makhijani, Wang, Wattenhofer (APPROX '17).
This talk considers the online machine minimization problem, a basic real-time scheduling problem. The setting consists of n jobs that arrive over time, where each job has a deadline by which it must be completed. The goal is to design an online scheduler that feasibly schedules the jobs on a nearly minimal number of machines. An algorithm is c-machine optimal if it feasibly schedules any collection of jobs on cm machines whenever a feasible schedule on m machines exists. For over two decades the best known result was an O(log P)-machine optimal algorithm, where P is the ratio of the maximum to minimum job size. In a recent breakthrough, an O(log m)-machine optimal algorithm was given. In this paper, we exponentially improve on this recent result by giving an O(log log m)-machine optimal algorithm.
We consider a setting where selfish agents want to schedule jobs on related machines. The agent submitting a job picks a server that minimizes a linear combination of the server price and the resulting response time for that job on the selected server. The manager's task is to maintain server prices to (approximately) optimize the maximum response time, which is a measure of social good. We show that the existence of a pricing scheme with certain competitiveness is equivalent to the existence of a monotone immediate-dispatch algorithm. Our main result is a monotone immediate-dispatch algorithm that is O(1)-competitive with respect to the maximum response time.
We formulate and study an optimization problem that arises in the energy management of data centers and, more generally, multiprocessor environments. Data centers host a large number of heterogeneous servers. Each server has an active state and several standby/sleep states with individual power consumption rates. The demand for computing capacity varies over time. Idle servers may be transitioned to low-power modes so as to rightsize the pool of active servers. The goal is to find a state transition schedule for the servers that minimizes the total energy consumed. On a small scale the same problem arises in multi-core architectures with heterogeneous processors on a chip. One has to determine active and idle periods for the cores so as to guarantee a certain service and minimize the consumed energy.
For this power/capacity management problem, we develop two main results. We use the terminology of the data center setting. First, we investigate the scenario that each server has two states, i.e. an active state and a sleep state. We show that an optimal solution, minimizing energy consumption, can be computed in polynomial time by a combinatorial algorithm. The algorithm resorts to a single-commodity min-cost flow computation. Second, we study the general scenario that each server has an active state and multiple standby/sleep states. We devise a $\tau$-approximation algorithm that relies on a two-commodity min-cost flow computation. Here $\tau$ is the number of different server types. A data center has a large collection of machines but only a relatively small number of different server architectures. Moreover, in the optimization one can assign servers with comparable energy consumption to the same class. Technically, both of our algorithms involve non-trivial flow modification procedures. In particular, given a fractional two-commodity flow, our algorithm executes advanced rounding and flow packing routines.
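As a toy illustration of the min-cost flow machinery the first result relies on (and not of the paper's actual reduction, which spans time and encodes state transitions), the following sketch selects the cheapest set of active servers to cover a single slot of demand. The use of networkx and the single-slot network are assumptions made purely for illustration.

```python
# A minimal sketch of the min-cost flow building block the algorithm relies on.
# This toy network is NOT the paper's actual reduction: it only covers a single
# time slot, where `demand` units of capacity must be served by heterogeneous
# servers with unit capacity and individual active-power costs.
import networkx as nx

def cover_single_slot(active_costs, demand):
    """Pick `demand` servers of minimum total active-power cost via min-cost flow."""
    G = nx.DiGraph()
    G.add_node("src", demand=-demand)   # supplies `demand` units of flow
    G.add_node("sink", demand=demand)   # absorbs them
    for i, cost in enumerate(active_costs):
        server = f"server{i}"
        G.add_edge("src", server, capacity=1, weight=cost)  # activating server i costs `cost`
        G.add_edge(server, "sink", capacity=1, weight=0)
    flow = nx.min_cost_flow(G)
    active = [i for i in range(len(active_costs)) if flow["src"][f"server{i}"] > 0]
    return active, sum(active_costs[i] for i in active)

if __name__ == "__main__":
    servers, total = cover_single_slot(active_costs=[3, 1, 4, 1, 5], demand=3)
    print("activate servers", servers, "total power", total)
```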
We consider the problem of designing a packet-level congestion control and scheduling policy for datacenter networks. Current datacenter networks primarily inherit the principles that went into the design of the Internet, where congestion control and scheduling are distributed. While a distributed architecture provides robustness, it suffers in terms of performance. Unlike the Internet, a data center is fundamentally a "controlled" environment. This raises the possibility of designing a centralized architecture to achieve better performance. Recent solutions such as Fastpass and Flowtune have provided a proof of this concept. This raises the question: what is the theoretically optimal performance achievable in a data center?
We propose a centralized policy that guarantees a per-flow end-to-end delay bound of O(#hops * flow-size / gap-to-capacity). Effectively, such an end-to-end delay would be experienced by flows even if congestion control and scheduling constraints were removed, since the resulting queueing network can be viewed as a classical reversible multi-class queueing network, which has a product-form stationary distribution. In the language of Harrison et al., we establish that the baseline performance for this model class is achievable.
Indeed, as the key contribution of this work, we propose a method to emulate such a reversible queueing network while satisfying congestion control and scheduling constraints. Precisely, our policy is an emulation of the Store-and-Forward (SFA) congestion control in conjunction with the Last-Come-First-Serve Preemptive-Resume (LCFS-PR) scheduling policy.
We study the dynamic matching problem in the incremental setting, where we are given a sequence of edge insertions and aim at maintaining a near-optimal matching of the graph with small update time. We present a deterministic algorithm for bipartite matching and a randomized algorithm for matching in general graphs. For constant ε>0, both algorithms obtain a (1+ε) approximation and run in amortized constant time per edge insertion.
Joint work with Fabrizio Grandoni, Anupam Gupta, Piotr Sankowski and Chris Schwiegelshohn.
In this talk, we will discuss the classic online matching problem, introduced in the seminal work of Karp, Vazirani and Vazirani (STOC 1990), in regular graphs. For such graphs, an optimal deterministic algorithm, as well as efficient algorithms under stochastic input assumptions, were known. In this work, we present a novel randomized algorithm with competitive ratio tending to one on this family of graphs, under adversarial arrival order. Our main contribution is an algorithm which achieves competitive ratio $1-O(\sqrt{\log d}/\sqrt{d})$ in expectation on $d$-regular graphs. In contrast, we show that all previously-studied online algorithms have competitive ratio strictly bounded away from one. Moreover, we show that the convergence rate of our algorithm's competitive ratio to one is nearly tight, as no algorithm achieves a competitive ratio better than $1-O(1/\sqrt{d})$. Finally, we show that our algorithm yields a similar competitive ratio with high probability, as well as guaranteeing each vertex a probability of being matched tending to one.
Joint work with David Wajc (Carnegie Mellon University)
Tuesday, December 5th, 2017
We consider dynamic two-sided markets, where buyers and sellers arrive over time following Poisson processes. Each agent has a private value and patience level for receiving/providing service, and a known location in an underlying metric space. The platform posts prices and wages at nodes in the metric space, and chooses agents for matching, so as to simultaneously ensure three orthogonal objectives: 1) maximizing profit; 2) minimizing the maximum metric distance of any match; and 3) minimizing the abandonment probability of an accepted agent. We consider a natural class of scheduling policies, and develop LP rounding techniques that find approximately optimal policies.
Consider the seller's problem of finding optimal prices for her (divisible) goods when faced with a set of consumers, given that she can only observe their purchased bundles at the posted prices, i.e., she has access only to revealed preferences (a demand oracle). We study both social-welfare and profit maximization in this setting, assuming concave valuation functions for the consumers and a convex production cost function for the seller, which are standard assumptions in economics.
A recent series of works (Roth et al., 2016, 2017) studied this problem for various special cases of valuation and cost functions, while making no assumption on the demand oracle. The main difficulty they observe is that the resulting optimization problem is non-convex in the prices. For social-welfare maximization, we obtain a natural interpretation of the revealed-preference feedback in the dual optimization problem, and thereby obtain a simple gradient-based algorithm. This simplifies and improves on the series of works of Roth et al. (2016, 2017) by at least a quadratic factor in terms of query complexity.
Second, we study social-welfare maximization in the online setting, where consumers arrive one by one in a random order, a.k.a. the secretary model. For the case where consumer valuations can be arbitrary continuous functions, we design a price-posting mechanism that achieves average expected social welfare within an additive factor of $1/\sqrt{m}$ of the maximum social welfare, where m is the number of consumers. Third, for profit maximization, we obtain the first FPTAS when the valuation and cost functions are separable, with a matching lower bound. For the non-separable case, we show that no PTAS exists, even when only the cost function is non-separable.
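To illustrate the flavor of the gradient-based dual approach for social-welfare maximization mentioned above, here is a hedged sketch in the spirit of tâtonnement: the demand oracle's response x(p) and the seller's own profit-maximizing supply y(p) yield a subgradient y(p) − x(p) of the convex dual in the prices. The specific valuation and cost functions, step size, and iteration count below are illustrative assumptions, not the ones analyzed in the paper.

```python
# A hedged sketch of the dual/tatonnement flavor of a gradient-based approach:
# the demand oracle x(p) (revealed preferences) and the seller's supply y(p)
# together give a subgradient y(p) - x(p) of the convex dual in the prices p.
# The valuation v(x) = sum_j a_j*sqrt(x_j) and cost c(y) = sum_j b_j*y_j^2/2
# are illustrative assumptions, not the functions studied in the paper.
import numpy as np

a = np.array([4.0, 2.0, 3.0])   # consumer valuation parameters (hypothetical)
b = np.array([1.0, 0.5, 2.0])   # seller cost parameters (hypothetical)

def demand(p):           # what the demand oracle reveals: argmax_x v(x) - <p, x>
    return (a / (2.0 * p)) ** 2

def supply(p):           # seller's profit-maximizing production: argmax_y <p, y> - c(y)
    return p / b

p = np.ones(3)           # initial prices
eta = 0.05               # step size (illustrative)
for _ in range(2000):
    g = supply(p) - demand(p)          # subgradient of the dual at p
    p = np.maximum(p - eta * g, 1e-3)  # gradient step, keep prices positive
print("prices:", np.round(p, 3))
print("excess supply:", np.round(supply(p) - demand(p), 4))  # ~0 at market clearing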
We consider the problem of makespan minimization when job sizes are stochastic. The goal is to find a fixed assignment of jobs to machines so as to minimize the expected makespan. In the special case of identical machines, a constant-factor approximation algorithm has long been known, but the problem remained open even for the next-harder case of related machines. We obtain the first constant-factor approximation in the setting of general unrelated machines. We also obtain algorithms for two generalizations: (i) a constant-factor approximation for the "budgeted" makespan problem, and (ii) an O(q / log q)-approximation for q-norm minimization.
This is joint work with Anupam Gupta, Amit Kumar and Xiangkun Shen.
The talk addresses a classical problem in the area of scheduling, namely minimizing the total weighted completion time of non-preemptive jobs on a set of unrelated machines. Uncertainty enters the model by assuming that job processing times are stochastic. In order to obtain constant-factor approximation algorithms for this problem, prior work required sophisticated linear or convex programming relaxations for the assignment of jobs to machines. In contrast, we analyze a purely combinatorial, online algorithm. Perhaps surprisingly, we show how to derive performance bounds for this algorithm that are of the same order of magnitude as those of earlier work, while our results are the first for an online setting. The analysis is based on dual fitting techniques. (Joint work with V. Gupta, B. Moseley, Q. Xi)
In this talk, we will cover the basics of the prophet secretary problem, the latest results, and their applications.
The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work on generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (which extend bipartite matching). Kleinberg-Weinberg [STOC 2012] and Feldman et al. [SODA 2015] show that for adversarial arrival order of the random variables, the optimal prophet inequalities give a 1/2-approximation. For many settings, however, it's conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the 1/2-approximation and obtain (1−1/e)-approximation prophet inequalities for both matroids and combinatorial auctions. This also gives improvements to the results of Yan [SODA 2011] and Esfandiari et al. [ESA 2015], who worked in the special cases where we can fully control the arrival order or where there is only a single item.
Our techniques are threshold based. We convert our discrete problem into a continuous setting and then give a generic template on how to dynamically adjust these thresholds to lower bound the expected total welfare.
Joint work with Soheil Ehsani, MohammadTaghi Hajiaghayi, and Sahil Singla. To appear in SODA 2018.
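To make the threshold idea concrete, here is a sketch of the classical single-item prophet inequality, where the fixed threshold T = E[max X_i]/2 already recovers half of the prophet's expected maximum; the dynamically adjusted threshold schedule that yields (1−1/e) under random arrival order is deliberately not reproduced here.

```python
# A minimal illustration of threshold-based prophet inequalities in the single-item
# case: accepting the first value above T = E[max X_i] / 2 recovers at least half of
# the prophet's expected maximum.  The (1 - 1/e) result for random arrival order uses
# a time-varying threshold schedule, which this sketch does not reproduce.
import random

def simulate(dists, trials=200_000):
    # dists: list of zero-argument samplers, one per independent random variable
    threshold = sum(max(d() for d in dists) for _ in range(trials)) / trials / 2
    prophet = gambler = 0.0
    for _ in range(trials):
        xs = [d() for d in dists]
        prophet += max(xs)                                        # prophet sees everything
        gambler += next((x for x in xs if x >= threshold), 0.0)   # take first value above T
    return gambler / prophet

if __name__ == "__main__":
    # three illustrative independent value distributions
    dists = [lambda: random.uniform(0, 1),
             lambda: random.expovariate(1.0),
             lambda: 10.0 if random.random() < 0.05 else 0.0]
    print("gambler / prophet ratio:", round(simulate(dists), 3), "(guaranteed >= 0.5)")
```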
Modern science and engineering are driven by massively large data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these theoretical models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given these promising results, the area has turned its focus to discovering widely applicable algorithmic techniques for solving problems efficiently.
In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing these properties. We showcase our framework on several core dynamic programming applications: Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection.
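For reference, here is the sequential dynamic program for one of the showcased applications, Longest Increasing Subsequence. This is only the classical baseline that a distributed framework would simulate, not the distributed algorithm itself.

```python
# The sequential dynamic program for Longest Increasing Subsequence, one of the
# applications mentioned above: the classical O(n^2) recurrence, shown only to fix
# notation; the parallel decomposition exploited by the framework is not shown.
def lis_length(a):
    n = len(a)
    # L[i] = length of the longest increasing subsequence ending at position i
    L = [1] * n
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                L[i] = max(L[i], L[j] + 1)
    return max(L, default=0)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 4  (e.g. 1, 4, 5, 6)
```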
We present an acceleration framework for packing linear programming problems where the amount of data available is limited, i.e., where the number of constraints is small compared to the variable dimension. The framework can be used as a black box to speed up linear programming solvers dramatically, by two orders of magnitude in our experiments. We present worst-case guarantees on the quality of the solution and the speedup provided by the algorithm, showing that the framework provides an approximately optimal solution while running the original solver on a much smaller problem. The framework can be used to accelerate exact solvers, approximate solvers, and parallel/distributed solvers. Further, it can be used for both linear programs and integer linear programs.
Joint work with Palma London, Adam Wierman and Hanling Ye
Wednesday, December 6th, 2017
Traditional multi-armed bandit models posit that the payoff distribution of each action (or "arm") is stationary over time, and hence that the goal of learning is to identify the arm with the highest expected payoff and choose that one forever after. However, in many applications the efficacy of an action depends on the amount of time that has elapsed since it was last performed. Examples arise in precision agriculture, online education, and music recommendations. In this talk we introduce a generalization of the multi-armed bandit problem that models such applications. In the course of analyzing algorithms for this problem, we will encounter some interesting combinatorial questions about coloring the integers subject to bounds on the sizes of subintervals that exclude a given color.
This talk is based on joint work with Nicole Immorlica.
There are four challenges in topic modeling: (i) scaling up to very large data sets of billions of tokens; (ii) learning models with 100K+ topics; (iii) doing so with a (provably) polynomial-time algorithm with good error bounds; and (iv) performing well empirically on real corpora. We demonstrate all of the above, starting with a new model.
Joint work with Chiranjib Bhattacharyya and Harsha Simhadri
Algorithm configuration methods have achieved much practical success, but to date have not been backed by meaningful performance guarantees. We address this gap with a new algorithm configuration framework, Structured Procrastination. With high probability and nearly as quickly as possible in the worst case, our framework finds an algorithm configuration that provably achieves near optimal performance. Further, its running time requirements asymptotically dominate those of existing methods.
Clustering is a fundamental technique in data analysis: the objective is to identify a set of k points (centroids) that minimize the clustering cost (a function of the distances of points to the nearest centroid). With very large data sets, we seek efficient constructions of small samples that act as surrogates for the full data. Existing methods that provide quality guarantees are either worst-case, unable to benefit from the structure of real data sets, or make explicit assumptions about that structure. We show here how to avoid both of these pitfalls using adaptive algorithms, and demonstrate experimentally the large gains of our adaptive constructions over worst-case ones.
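As an illustration of the general idea of adapting the sample to the data (and not necessarily the construction used in this work), the following sketch shows k-means++-style D² sampling, where each new surrogate point is drawn with probability proportional to its squared distance to the points already chosen.

```python
# A hedged sketch of one well-known adaptive sampling scheme (k-means++-style D^2
# sampling): each new sample point is drawn with probability proportional to its
# squared distance to the points already chosen, so the sample adapts to the data's
# structure.  This illustrates the general idea only; it is not necessarily the
# construction used in this work.
import numpy as np

def d2_sample(points, k, rng=np.random.default_rng(0)):
    points = np.asarray(points, dtype=float)
    centers = [points[rng.integers(len(points))]]          # first point uniformly at random
    for _ in range(k - 1):
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()                               # adapt to current distances
        centers.append(points[rng.choice(len(points), p=probs)])
    return np.array(centers)

# Usage: pick 3 surrogate points from a toy 2-D data set with three clusters.
data = np.concatenate([np.random.randn(100, 2),
                       np.random.randn(100, 2) + 6,
                       np.random.randn(100, 2) + [6, -6]])
print(d2_sample(data, k=3))
```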
Consider a network design application where we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To learn the precise cost of any edge, we have to conduct a study that incurs a price. Our goal is to find a spanning tree while minimizing the disutility, which is the sum of the tree cost and the total price that we spend on the studies. In a different application, each edge gives a stochastic reward value. Our goal is to find a spanning tree while maximizing the utility, which is the tree reward minus the prices that we pay.
Situations such as the above two often arise in practice where we wish to find a good solution to an optimization problem, but we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility/disutility?
A classical example of the above setting is Weitzman’s “Pandora’s box” problem where we are given probability distributions on values of n independent random variables. The goal is to choose a single variable with a large value, but we can find the actual outcomes only after paying a price. Our work is a generalization of this model to other combinatorial optimization problems such as matching, set cover, facility location, and prize-collecting Steiner tree. We give a technique that reduces such problems to their non-price counterparts, and use it to design exact/approximation algorithms to optimize our utility/disutility. Our techniques extend to situations where there are additional constraints on what parameters can be probed or when we can simultaneously probe a subset of the parameters.
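For concreteness, here is a sketch of Weitzman's classical index policy for the single-choice Pandora's box problem that this work generalizes: each box gets a reservation value σ solving E[(X − σ)⁺] = price, boxes are opened in decreasing order of σ, and the process stops once the best value seen exceeds every unopened box's reservation value. The uniform value distributions below are an illustrative assumption, chosen because they give a closed form for σ.

```python
# A sketch of Weitzman's classical index policy for the Pandora's box problem.
# Each box i with opening price pi_i gets a reservation value sigma_i solving
# E[(X_i - sigma_i)^+] = pi_i; boxes are opened in decreasing order of sigma_i,
# stopping once the best value seen so far exceeds the reservation value of every
# unopened box.  Uniform[0, b] distributions are used purely for illustration.
import math, random

def reservation_value_uniform(b, price):
    # Solve E[(X - sigma)^+] = price for X ~ Uniform[0, b]:  (b - sigma)^2 / (2b) = price.
    return b - math.sqrt(2 * b * price)

def pandora(boxes):
    """boxes: list of (b, price) pairs with X_i ~ Uniform[0, b_i] and opening cost price_i."""
    order = sorted(range(len(boxes)), key=lambda i: -reservation_value_uniform(*boxes[i]))
    best, spent = 0.0, 0.0
    for i in order:
        if best >= reservation_value_uniform(*boxes[i]):
            break                                   # stopping rule: no unopened box is worth probing
        spent += boxes[i][1]                        # pay the price of information
        best = max(best, random.uniform(0, boxes[i][0]))
    return best - spent                             # realized utility: value kept minus prices paid

# Estimate the expected utility of the policy on a small illustrative instance.
print(sum(pandora([(10, 1.0), (6, 0.2), (8, 2.5)]) for _ in range(100_000)) / 100_000)
```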
We introduce the stochastic graph exploration problem. The input is an undirected graph G with a source vertex s, stochastic edge costs, and deterministic rewards of vertices. The goal is to find a set F of edges of total cost at most 1, such that the subgraph of G induced by F is connected, contains s, and has maximum total reward. The set F is constructed by adding edges one by one and after each step the subgraph of G induced by F has to be connected and contain s. When an edge is chosen, its actual cost is revealed. The chosen edge has to be added to F, unless the total cost of edges in F would exceed 1, in which case the process finishes.
We show that this problem generalizes the stochastic knapsack problem. However, as opposed to the knapsack problem, the adaptivity gap of the stochastic graph exploration problem is superconstant. On the positive side, we give bi-criteria approximation algorithms both for trees and for general graphs.
The problem of monotonicity testing on the hypercube is as follows. Given a Boolean function f over the d-dimensional hypercube, we wish to design an algorithm that distinguishes between f being monotone and f being far from monotone. Classic results gave testers making O(d) queries; recent advances have shown the existence of o(d)-query testers.
But beyond the algorithms themselves, these results uncover a surprising phenomenon. Classic isoperimetric results over the hypercube by Margulis and Talagrand can be converted (with significant effort) into "directed" isoperimetric analogues, in which the role of the variance of f is played by the distance to monotonicity.
We will give a survey of these results, and talk about some new directed isoperimetric results for hypergrids. These new results attempt to understand the core reason why such isoperimetric results hold. They, in turn, lead to new monotonicity testers for the richer class of hypergrid domains.
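As a point of reference for the classic O(d)-query testers mentioned above, here is a sketch of the standard edge tester: sample random hypercube edges and reject upon finding a violated one. The o(d)-query testers built on directed isoperimetry use more refined random walks, which this sketch does not reproduce.

```python
# A minimal sketch of the classical edge tester for monotonicity on the hypercube:
# sample random edges (x, x + e_i) and reject if a violated edge f(x) > f(x + e_i)
# is found.  O(d / eps) such queries suffice for Boolean functions that are eps-far
# from monotone; the o(d)-query testers use more refined, path-style random walks.
import random

def edge_tester(f, d, eps, c=2):
    """f maps a tuple in {0,1}^d to 0/1; returns True ('monotone') or False ('reject')."""
    for _ in range(int(c * d / eps) + 1):
        x = [random.randint(0, 1) for _ in range(d)]
        i = random.randrange(d)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1                      # the two endpoints of the edge in direction i
        if f(tuple(lo)) > f(tuple(hi)):          # a violated edge witnesses non-monotonicity
            return False
    return True

# Usage: the anti-dictator f(x) = 1 - x_0 on d = 10 dimensions is far from monotone.
print(edge_tester(lambda x: 1 - x[0], d=10, eps=0.1))   # almost surely False
print(edge_tester(lambda x: max(x),   d=10, eps=0.1))   # True: max is monotone
```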
Motivated by the use of non-truthful auctions in practice (e.g. GSP, GFP), where bidders often offload their bidding strategy to a black-box optimizer, we consider the problem of selling a single item to a learning buyer. More specifically, in each round the buyer draws a valuation v from D, chooses a bid b, and the seller then selects an allocation probability x and a price of at most xb. The buyer's choice of bids will satisfy no-regret (in the sense that their final utility will be competitive with that of the best fixed bid mapping b(·) in hindsight). We show the following:
1) There exists a no-regret learning strategy guaranteeing that the seller cannot extract revenue exceeding the monopoly revenue from D (i.e. the best thing the seller can do against this strategy is to just post the same price every round).
2) If the bidder instead uses typical learning strategies like EXP3 or MWU, the seller has a strategy that can extract an unbounded multiplicative increase in revenue beyond the monopoly revenue.
Joint work with Mark Braverman, Jieming Mao, and Jon Schneider.
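As an illustration of the kind of "typical learning strategy" referred to in (2), here is a sketch of a multiplicative-weights (Hedge) bidder over a discretized bid grid with full-information feedback. The mechanism used below (a simple reserve-price rule) and all parameters are placeholders chosen only to make the loop run; they are not the auction format or the revenue-extracting seller strategy from the paper.

```python
# A hedged sketch of a "typical" learning bidder: multiplicative weights (Hedge)
# over a discretized grid of bids, updated with the buyer's counterfactual utility
# for every bid.  The seller-side rule below (allocate iff bid >= 0.3, pay the bid)
# is a placeholder, not the mechanism or seller strategy from the paper.
import math, random

bids = [i / 10 for i in range(11)]          # discretized bid grid in [0, 1]
weights = [1.0] * len(bids)
eta = 0.1                                   # learning rate (illustrative)

def utility(value, bid):
    # Placeholder mechanism: the item is allocated iff bid >= 0.3, at price = bid.
    return (value - bid) if bid >= 0.3 else 0.0

for t in range(10_000):
    v = random.random()                                  # valuation drawn fresh from D = Uniform[0, 1]
    chosen = random.choices(bids, weights=weights)[0]    # the bid actually submitted this round
    # Full-information (Hedge/MWU) update: reweight every bid by its counterfactual utility.
    weights = [w * math.exp(eta * utility(v, b)) for w, b in zip(weights, bids)]
    total = sum(weights)
    weights = [w / total for w in weights]               # normalize to avoid numerical blow-up

print("bid the buyer concentrates on:", max(zip(weights, bids))[1])   # ~0.3, the reserve
```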