
Learning Sparse Data with Near-optimal Speed and Efficiency from a Variety of Measurement Processes

Friday, March 20th, 2015, 1:55 pm – 2:20 pm


Location: 

Calvin Lab Auditorium

Compressive sensing, in which a k-sparse signal in an ambient real space of much larger dimension (n) is to be reconstructed via linear measurements, is a canonical example of a sparse recovery problem. There are others: network tomography (in which the structure of a network is to be probed via measurements that are linear but are constrained by the (unknown) structure of the network), group testing (in which the sparse signal is binary and the measurements are non-linear (disjunctive)), and phase recovery (in which a real sparse signal is to be reconstructed via amplitude measurements). For each of these we provide algorithms that are information-theoretically near-optimal (up to at most poly-log(n) factors) in both computational and sample complexity, and that can be made robust to noise. "Structured sparse measurements" play a key role in this framework, but the framework also needs to be adapted to the idiosyncrasies of each measurement process.
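To make the compressive sensing setup above concrete, here is a minimal sketch (not the speaker's algorithm): it generates a k-sparse signal in dimension n, takes m linear measurements with a dense Gaussian matrix, and recovers the signal with orthogonal matching pursuit as a generic baseline. The dimensions, the Gaussian measurement matrix, and the choice of OMP are all illustrative assumptions; the talk's results instead use structured sparse measurement matrices to reach near-optimal sample and computational complexity.

```python
# Illustrative sketch of the compressive sensing measurement model y = A x
# with a k-sparse x, recovered via orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 5                              # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)      # measurement matrix (rows = linear probes)

# k-sparse ground-truth signal
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

y = A @ x                                         # linear measurements

def omp(A, y, k):
    """Greedily pick the column most correlated with the residual,
    then re-fit by least squares on the chosen support."""
    residual = y.copy()
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in chosen:
            chosen.append(j)
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[chosen] = coef
    return x_hat

x_hat = omp(A, y, k)
print("recovered support:", sorted(np.flatnonzero(x_hat)))
print("true support:     ", sorted(support))
print("relative error:   ", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With a dense Gaussian matrix, each measurement touches all n coordinates; the "structured sparse measurements" mentioned in the abstract instead use sparse measurement rows, which is what enables decoding in time close to k rather than n.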