
Prof. John Wright visited on August 17, 2016

Prof. John Wright from Columbia University visited our lab on August 17, 2016. He gave a talk on "Nonconvex Recovery of Low-Complexity Models" at 10:00 AM in Room 9-206, Rohm Building.


John Wright is an Assistant Professor in the Electrical Engineering Department at Columbia University. He received his PhD in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2009 and was with Microsoft Research from 2009 to 2011. His research is in the area of high-dimensional data analysis. In particular, his recent research has focused on developing algorithms for robustly recovering structured signal representations from incomplete and corrupted observations, and on applying them to practical problems in imaging and vision. His work has received a number of awards and honors, including the 2009 Lemelson-Illinois Prize for Innovation for his work on face recognition, the 2009 UIUC Martin Award for Excellence in Graduate Research, a 2008-2010 Microsoft Research Fellowship, the Best Paper Award from the Conference on Learning Theory (COLT) in 2012, and the 2015 PAMI TC Young Researcher Award.


Title:

Nonconvex Recovery of Low-Complexity Models

Abstract:

General nonconvex optimization problems are NP-hard. In applied disciplines, however, nonconvex problems abound, and heuristic algorithms are often surprisingly effective. The ability of nonconvex heuristics to find high-quality solutions for practical problems remains largely mysterious.

In this talk, I will describe a family of nonconvex problems that can be solved efficiently. This family has the characteristic structure that (1) every local minimizer is also a global minimizer, and (2) the objective function has negative directional curvature around every saddle point ("ridable saddle"). Natural (nonconvex) formulations of a number of important problems in signal processing and machine learning lie in this family, including the eigenvector problem, complete dictionary learning (CDL), generalized phase retrieval (GPR), orthogonal tensor decomposition, and various synchronization problems. This benign geometric structure allows a number of optimization methods to efficiently find a global minimizer without special initializations. To corroborate this, I will describe the second-order trust-region method. This geometric approach to solving nonconvex problems has led to new computational guarantees for a number of practical problems, including CDL and GPR.
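To make the abstract's claim concrete, here is a minimal sketch (not taken from the talk) of one problem in this family: real-valued generalized phase retrieval written as a nonconvex least-squares objective and minimized with an off-the-shelf second-order trust-region solver from a random initialization. The problem sizes, the Gaussian measurement model, and the use of SciPy's "trust-exact" method are illustrative assumptions, not details from the speaker's work.

```python
# Sketch: nonconvex least-squares formulation of real-valued generalized
# phase retrieval, f(z) = (1/2m) * sum_k ((a_k^T z)^2 - y_k)^2, solved with a
# second-order trust-region method from a random starting point.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 20, 200                      # signal dimension, number of measurements (assumed sizes)
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))     # Gaussian sensing vectors a_k as rows (assumed model)
y = (A @ x_true) ** 2               # phaseless measurements y_k = <a_k, x>^2

def f(z):
    r = (A @ z) ** 2 - y
    return 0.5 * np.mean(r ** 2)

def grad(z):
    Az = A @ z
    return 2.0 * A.T @ ((Az ** 2 - y) * Az) / m

def hess(z):
    Az = A @ z
    w = 3.0 * Az ** 2 - y           # per-measurement Hessian weights
    return 2.0 * (A.T * w) @ A / m  # sum_k w_k a_k a_k^T, scaled by 2/m

z0 = rng.standard_normal(n)         # random initialization, no spectral warm start
res = minimize(f, z0, jac=grad, hess=hess, method="trust-exact")

# The objective is invariant to the sign of z, so success means recovering
# x_true up to that global sign ambiguity.
err = min(np.linalg.norm(res.x - x_true), np.linalg.norm(res.x + x_true))
print(f"objective: {res.fun:.2e}, recovery error: {err:.2e}")
```

With enough Gaussian measurements relative to the dimension, the landscape of this objective is benign in the sense described above, which is why a generic second-order method started at random typically reaches a global minimizer here.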

I will describe several open challenges in terms of both theory and algorithms, and give applications of these ideas to computer vision and the analysis of microscopy data.

Joint work with Ju Sun and Qing Qu (Columbia).

