Input-sparsity Time Algorithms for Embeddings and Regression Problems
An algorithm runs in "input-sparsity time" if its running time in the RAM model is bounded above by a constant times the size of the input, i.e., the number of nonzero entries of the input matrix, for arbitrary or worst-case input. Somewhat remarkably, recent work in Randomized Numerical Linear Algebra has shown that fundamental problems such as least-squares and low-rank approximation can be solved to epsilon-accuracy in input-sparsity time. I will describe several related results on constructing low-distortion embeddings for Lp spaces in input-sparsity time, as well as the application of these results to solving problems such as least-squares, least absolute deviations, and quantile regression. Of particular importance in making these theoretical ideas practical is understanding the tradeoff between the time needed to construct the embedding and the distortion quality of the embedding.
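As a concrete illustration of the sketch-and-solve approach behind these results, the following minimal sketch applies a CountSketch-style sparse embedding (in the spirit of Clarkson-Woodruff) to an overdetermined least-squares problem; the embedding can be applied in time proportional to the number of nonzeros of the input. The function name, sketch size, and problem dimensions below are illustrative choices, not details taken from the talk.

```python
import numpy as np

def countsketch(A, b, sketch_rows, rng):
    """Apply a CountSketch-style sparse embedding S to [A | b].

    Each input row is hashed to one of `sketch_rows` output rows and
    multiplied by a random sign, so forming S*A costs O(nnz(A)) time.
    """
    n = A.shape[0]
    rows = rng.integers(0, sketch_rows, size=n)   # hash bucket per input row
    signs = rng.choice([-1.0, 1.0], size=n)       # random sign per input row
    SA = np.zeros((sketch_rows, A.shape[1]))
    Sb = np.zeros(sketch_rows)
    for i in range(n):                            # one pass over the input rows
        SA[rows[i]] += signs[i] * A[i]
        Sb[rows[i]] += signs[i] * b[i]
    return SA, Sb

# Illustrative tall least-squares instance (parameters are assumptions).
rng = np.random.default_rng(0)
n, d = 100_000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Sketch size poly(d), independent of n; then solve the small sketched problem.
SA, Sb = countsketch(A, b, sketch_rows=10 * d * d, rng=rng)
x_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

# Ratio of residuals; close to 1 means the sketched solution is nearly optimal.
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))
```

The same sketch-then-solve template, with embeddings tailored to the ℓ1 or other ℓp norms, underlies the least-absolute-deviations and quantile-regression applications mentioned above.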
| Attachment | Size |
|---|---|
| Input-sparsity Time Algorithms for Embeddings and Regression Problems (slides) | 502.57 KB |