Ph.D. candidate Qing Qu from Columbia University visited our lab on July 8th. He gave a talk titled "Finding a Sparse Vector in a Subspace: Linear Sparsity Using Alternating Directions" at 10:00 in Room 8-206, Rohm Building.

Fig. 1: Qing Qu giving the talk

Qing Qu received his bachelor's degree from Tsinghua University in 2011 and his master's degree from Johns Hopkins University in 2013, both in Electrical Engineering. He was a research intern with Dr. Nasser Nasrabadi at the U.S. Army Research Laboratory from 2012 to 2013. He is currently a Ph.D. candidate in the Department of Electrical Engineering at Columbia University, under the supervision of Prof. John Wright. His research interests lie in sparse representation, large-scale optimization, and machine learning.

###### Title:

Finding a Sparse Vector in a Subspace: Linear Sparsity Using Alternating Directions

###### Abstract:

We consider the problem of recovering the sparsest vector in an n-dimensional subspace. This problem can be viewed as a homogeneous variant of the sparse recovery problem, and it finds applications in sparse dictionary learning, sparse PCA, and other problems in signal processing and machine learning. Simple convex heuristics for this problem provably break down when the fraction of nonzero entries in the target sparse vector substantially exceeds $1/\sqrt{n}$. In contrast, we exhibit a relatively simple nonconvex approach based on alternating directions, which provably succeeds even when the fraction of nonzero entries is $\Omega(1)$. To the best of our knowledge, this is the first practical algorithm to achieve this linear scaling. This result assumes a planted sparse vector model, in which the target sparse vector is embedded in an otherwise random subspace. Empirically, our proposed algorithm also succeeds on more challenging data models arising, e.g., in sparse dictionary learning.
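The alternating-directions idea described in the abstract can be sketched in a few lines: alternately sparsify the current candidate vector in the subspace (via soft thresholding) and re-project the result back onto the unit sphere of coefficients. The sketch below is an illustration of this general scheme, not the talk's exact algorithm; the function names, the threshold `lam`, and the iteration count are assumptions chosen for the example.

```python
import numpy as np

def soft_threshold(v, lam):
    """Elementwise soft-thresholding S_lam(v) = sign(v) * max(|v| - lam, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def adm_sparse_vector(Y, q0, lam=0.1, iters=200):
    """One run of an alternating-directions heuristic (illustrative sketch).

    Y  : p x n matrix whose columns span the subspace.
    q0 : initial coefficient vector (length n).
    Returns a candidate sparse vector Y @ q in the subspace.
    """
    q = q0 / np.linalg.norm(q0)
    for _ in range(iters):
        x = soft_threshold(Y @ q, lam)   # x-step: promote sparsity in the subspace vector
        z = Y.T @ x
        nz = np.linalg.norm(z)
        if nz == 0:                      # threshold wiped out every entry; stop
            break
        q = z / nz                       # q-step: project back onto the unit sphere
    return Y @ q
```

In practice such heuristics are restarted from several initializations (e.g., rows of Y) and the candidate with the smallest $\ell_1/\ell_2$ ratio is kept, since the planted sparse direction minimizes that ratio within the subspace.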