Professor Mengdi Wang gave a talk on August 27th

Professor Mengdi Wang from Princeton University gave a talk titled "Multi-Task Nonconvex Optimization with Total Budget Constraint: A Distributed Algorithm Using Monte Carlo Estimates" in Room 8-206, Rohm Building, at 10:00 on August 27th.

Fig. 1. Professor Mengdi Wang

Dr. Mengdi Wang is an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University. She received her B.S. in Information Science and Control Theory from Tsinghua University in 2007 and her Ph.D. in Electrical Engineering and Computer Science (with a minor in Mathematics) from MIT in 2013. Her research interests include large-scale optimization problems driven by big data or random processes, and related stochastic methods; sequential decision-making problems under uncertainty, and related methodologies such as dynamic programming and reinforcement learning; and applications in inference, machine learning, stochastic games, equilibrium problems, networks, and finance.


Title:

Multi-Task Nonconvex Optimization with Total Budget Constraint: A Distributed Algorithm Using Monte Carlo Estimates

Abstract:

Multi-task optimization is common in machine learning, filtering, communication, and network problems. We focus on the nonconvex separable problem in which the objective is the sum of $N$ individual utility functions subject to a total budget constraint. By leveraging Lagrangian dual decomposition, the dual ascent method applies naturally and can be implemented in a distributed fashion. For stochastic versions of multi-task problems, we propose a simulation-based dual ascent algorithm. By a classical result from convex geometry, the average-per-task duality gap between the primal and dual problems is bounded by $O(1/N)$. This suggests that the nonconvex multi-task problem becomes “convexified” as the number of tasks increases. As a result, the proposed distributed dual algorithm recovers the optimal solution of the nonconvex problem with very small error.
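To illustrate the dual decomposition idea described in the abstract, here is a minimal sketch, not the speaker's actual algorithm, of distributed dual ascent for a separable budget-constrained problem. The per-task nonconvex utilities $f_i(x) = a_i \sin(3x) + b_i x$, the grid-search subproblem solver, and all parameter values are hypothetical choices made for this example.

```python
import numpy as np

# Illustrative sketch (not the talk's algorithm): dual ascent for
#   maximize  sum_i f_i(x_i)   subject to  sum_i x_i <= B,  x_i in [0, 1],
# where the f_i may be nonconvex. Given a dual price lam, each task solves
# its own Lagrangian subproblem max_x f_i(x) - lam * x independently, so
# the inner loop parallelizes across tasks.

rng = np.random.default_rng(0)
N, B = 50, 10.0

# Hypothetical nonconvex per-task utilities: f_i(x) = a_i*sin(3x) + b_i*x.
a = rng.uniform(0.5, 1.5, N)
b = rng.uniform(0.0, 1.0, N)

GRID = np.linspace(0.0, 1.0, 201)  # candidate allocations for each task

def solve_subproblem(i, lam):
    """Task i maximizes f_i(x) - lam*x over its own feasible set [0, 1]."""
    vals = a[i] * np.sin(3 * GRID) + b[i] * GRID - lam * GRID
    return GRID[np.argmax(vals)]

lam, step = 0.0, 0.05
for _ in range(500):
    x = np.array([solve_subproblem(i, lam) for i in range(N)])
    # Dual (sub)gradient ascent on the single coupling constraint:
    # raise the price if the budget is exceeded, lower it otherwise.
    lam = max(0.0, lam + step * (x.sum() - B))

print(f"total allocation = {x.sum():.2f} (budget {B}), lambda = {lam:.3f}")
```

With many tasks, the aggregate allocation responds almost continuously to the price, which is the "convexification" effect the abstract attributes to the $O(1/N)$ average-per-task duality gap: the dual iterate drives the total allocation close to the budget even though each subproblem is nonconvex.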