
On the Null Space Constant for lp Minimization

1 Introduction

The combinatorial optimization problem of $\ell_0$ minimization

\begin{equation}\label{l0min}    \underset{\bf x}{\operatorname{argmin}}\|{\bf x}\|_0\ \ \textrm{subject to}\ \ {\bf Ax}={\bf y}\end{equation}

is NP-hard and therefore cannot be solved efficiently in general. A standard approach is to relax the non-convex, discontinuous $\ell_0$ ``norm'' to the convex $\ell_1$ norm,

\begin{equation}\label{l1min}    \underset{\bf x}{\operatorname{argmin}}\|{\bf x}\|_1\ \ \textrm{subject to}\ \ {\bf Ax}={\bf y}.\end{equation}

It has been theoretically proved that, under certain conditions, the optimal solution of \eqref{l1min} is identical to that of \eqref{l0min}.

Some works bridge the gap between the $\ell_0$ ``norm'' and the $\ell_1$ norm with the non-convex but continuous $\ell_p$ ``norm'' $(0<p<1)$, and consider the $\ell_p$ minimization problem

\begin{equation}\label{lpmin}    \underset{\bf x}{\operatorname{argmin}}\|{\bf x}\|_p^p\ \ \textrm{subject to}\ \ {\bf Ax}={\bf y}\end{equation}


where $\|{\bf x}\|_p^p=\sum_{i=1}^N|x_i|^p$. Although finding the global optimum of $\ell_p$ minimization is still NP-hard, a local minimizer can be computed in polynomial time. The global optimality of \eqref{lpmin} has been studied and various conditions have been derived, for example, those based on the null space property.
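As a concrete illustration of the three objectives involved, the following sketch (in Python with NumPy; the helper name `lp_norm_p` is ours) evaluates $\|{\bf x}\|_p^p$ for $p=0$, $0<p<1$, and $p=1$:

```python
import numpy as np

def lp_norm_p(x, p):
    """Evaluate ||x||_p^p = sum_i |x_i|^p; for p = 0, count the nonzero entries."""
    x = np.asarray(x, dtype=float)
    if p == 0:
        return float(np.count_nonzero(x))
    return float(np.sum(np.abs(x) ** p))

x = [0.0, 3.0, -4.0, 0.0]
print(lp_norm_p(x, 0))    # l0 "norm": 2.0 (two nonzero entries)
print(lp_norm_p(x, 1))    # l1 norm: 7.0
print(lp_norm_p(x, 0.5))  # 3^0.5 + 4^0.5 ≈ 3.732
```

As $p$ decreases from $1$ toward $0$, $\|{\bf x}\|_p^p$ moves continuously from the $\ell_1$ norm toward the count of nonzero entries, which is exactly the bridge described above.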

Definition 1: For any $0\le p\le1$, define the null space constant $\gamma(\ell_p,{\bf A},k)$ as the smallest quantity such that

\begin{equation}    \sum_{i\in S}|z_i|^p\le\gamma(\ell_p,{\bf A},k)\sum_{i\not\in S}|z_i|^p\end{equation}

holds for any set $S\subset\{1,2,\ldots,N\}$ with $\#S\le k$ and any vector ${\bf z}\in \mathcal{N}({\bf A})$, the null space of ${\bf A}$.

It has been shown that for any $p\in[0,1]$, $\gamma(\ell_p,{\bf A},k)<1$ is a necessary and sufficient condition for every $k$-sparse ${\bf x}^*$ to be the unique solution of $\ell_p$ minimization with ${\bf y}={\bf Ax}^*$. Therefore, $\gamma(\ell_p,{\bf A},k)$ is a tight quantity for characterizing the performance of $\ell_p$ minimization $(0\le p\le 1)$ in sparse recovery. However, calculating $\gamma(\ell_p,{\bf A},k)$ is in general NP-hard, which makes it difficult to check whether the condition is satisfied or violated. Despite this, properties of $\gamma(\ell_p,{\bf A},k)$ are of tremendous help in illustrating the performance of $\ell_p$ minimization; e.g., since $\gamma(\ell_p,{\bf A},k)$ is non-decreasing in $p\in[0,1]$, if $\ell_p$ minimization guarantees successful recovery of all $k$-sparse signals and $0\le q\le p$, then $\ell_q$ minimization does as well.
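Although computing $\gamma(\ell_p,{\bf A},k)$ is NP-hard in general, it becomes exact and easy when $\mathcal{N}({\bf A})$ is one-dimensional (e.g., $M=N-1$ with full row rank): the ratio in Definition 1 is invariant to scaling of $\bf z$, and it is maximized by taking $S$ to be the indices of the $k$ largest $|z_i|^p$. A minimal sketch under that assumption (the function name `nsc_1d` and the example matrix are ours):

```python
import numpy as np

def nsc_1d(A, k, p):
    """gamma(l_p, A, k) for a matrix A whose null space is one-dimensional.
    Uses the last right-singular vector of A as a basis of N(A)."""
    A = np.asarray(A, dtype=float)
    z = np.linalg.svd(A)[2][-1]                 # spans the 1-D null space
    if p == 0:
        w = (np.abs(z) > 1e-12).astype(float)   # |z_i|^0 with the convention 0^0 = 0
    else:
        w = np.abs(z) ** p
    w = np.sort(w)[::-1]                        # ratio is maximized by the k largest terms
    return w[:k].sum() / w[k:].sum()

A = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]                           # null space spanned by (-2, -1, 1)
print(nsc_1d(A, 1, 0.0))                        # 0.5
print(nsc_1d(A, 1, 0.5))                        # 2^0.5 / 2 ≈ 0.707
print(nsc_1d(A, 1, 1.0))                        # 1.0: l_1 recovery of 1-sparse signals fails
```

Note that the values grow with $p$, consistent with the non-decrease of $\gamma(\ell_p,{\bf A},k)$ in $p$ mentioned above.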

In this work, we give some new properties of the null space constant $\gamma(\ell_p,{\bf A},k)$. Specifically, we prove that $\gamma(\ell_p,{\bf A},k)$ is strictly increasing in $k$ and is continuous in $p$. For random sensing matrix ${\bf A}$, the non-decrease of $\gamma(\ell_p,{\bf A},k)$ in $p$ can be improved to strict increase with probability 1. Based on them, the performance of $\ell_p$ minimization can be intuitively demonstrated and understood. This work has been published at IEEE Signal Processing Letters in the paper On the Null Space Constant for $\ell_p$ Minimization by Laming Chen and Yuantao Gu.

2 Some Properties of Null Space Constant

We begin with a lemma about $\gamma(\ell_p,{\bf A},k)$ which will play a central role in the theoretical analysis. The spark of a matrix ${\bf A}$, denoted as $\mathrm{Spark}({\bf A})$, is the smallest number of columns from ${\bf A}$ that are linearly dependent.
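Computing the spark is itself combinatorial, but for small matrices a brute-force check against the definition is straightforward. A sketch (the function name `spark` is ours; practical only for small $N$):

```python
import itertools
import numpy as np

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (brute force)."""
    A = np.asarray(A, dtype=float)
    N = A.shape[1]
    for k in range(1, N + 1):
        for cols in itertools.combinations(range(N), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k            # these k columns are linearly dependent
    return N + 1                    # all columns independent (spark often taken as infinity)

A = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]
print(spark(A))  # 3: any two columns are independent, but all three are dependent
```

In the notation of Lemma 1, this example has $L+1=3$, so $\gamma(\ell_p,{\bf A},k)$ is finite exactly for $k\le 2$.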

Lemma 1: Suppose $\mathrm{Spark}({\bf A})=L+1$. For $p\in[0,1]$,

1) $\gamma(\ell_p,{\bf A},k)$ is finite if and only if $k\le L$;

2) For $k\le L$, there exist $S'\subset\{1,2,\ldots,N\}$ with $\#S'\le k$ and ${\bf z}'\in \mathcal{N}({\bf A})\setminus\{\bf 0\}$ such that

    \begin{equation}\label{equ10}         \sum_{i\in S'}|z'_i|^p=\gamma(\ell_p,{\bf A},k)\sum_{i\not\in S'}|z'_i|^p.     \end{equation}

First, we show the strict increase of $\gamma(\ell_p,{\bf A},k)$ in $k$.

Theorem 1: Suppose $\mathrm{Spark}({\bf A})=L+1$. Then for $p\in[0,1]$, $\gamma(\ell_p,{\bf A},k)$ is strictly increasing in $k$ when $k\le L$.

For any $p\in[0,1]$, define $\mathcal{K}_p({\bf A})$ as the set of all positive integers $k$ such that every $k$-sparse ${\bf x}^*$ can be recovered as the unique solution of $\ell_p$ minimization \eqref{lpmin} with ${\bf y}={\bf Ax}^*$. According to Theorem 1, $\mathcal{K}_p({\bf A})$ is either empty or consists of the successive integers from $1$ to some integer $k_p^*({\bf A})$.
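When $\mathcal{N}({\bf A})$ is one-dimensional, $k_p^*({\bf A})$ can be found directly from the null space basis vector: $\gamma(\ell_p,{\bf A},k)<1$ holds exactly when the $k$ largest values $|z_i|^p$ sum to less than the remaining ones. A sketch under that assumption (the function name `k_star` and the example matrix are ours):

```python
import numpy as np

def k_star(A, p, tol=1e-12):
    """k_p^*(A): the largest k with gamma(l_p, A, k) < 1, assuming dim N(A) = 1
    so the null space constant reduces to a ratio of sorted |z_i|^p terms."""
    A = np.asarray(A, dtype=float)
    z = np.linalg.svd(A)[2][-1]                        # spans the 1-D null space
    w = (np.abs(z) > 1e-9).astype(float) if p == 0 else np.abs(z) ** p
    w = np.sort(w)[::-1]
    k = 0
    # gamma < 1 iff the k largest terms sum to strictly less than the remaining ones
    while k + 1 < len(w) and w[:k + 1].sum() < w[k + 1:].sum() - tol:
        k += 1
    return k

A = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]
print(k_star(A, 0.5))  # 1: every 1-sparse signal is recovered by l_0.5 minimization
print(k_star(A, 1.0))  # 0: gamma(l_1, A, 1) = 1, so l_1 recovery is not guaranteed
```

For this matrix $k_{0.5}^*({\bf A})=1$ but $k_1^*({\bf A})=0$, a small instance of the step-function behavior of $k_p^*({\bf A})$ in $p$.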

Now we turn to the properties of $\gamma(\ell_p,{\bf A},k)$ as a function of $p$. The following result reveals the continuity of $\gamma(\ell_p,{\bf A},k)$ in $p$.

Theorem 2: Suppose $\mathrm{Spark}({\bf A})=L+1$. Then for $k\le L$, $\gamma(\ell_p,{\bf A},k)$ is a continuous function in $p\in[0,1]$.


Figure 1: The figure shows $k_p^*({\bf A})$ as a function of $p$, where the argument $\bf A$ is omitted for concision.

For any $\bf A$, we can plot $k_p^*({\bf A})$ as a function of $p$, as shown in Fig. 1 (the argument $\bf A$ is omitted in the figure for concision). Clearly, $k_p^*({\bf A})$ is a non-increasing step function dropping from $k_0^*({\bf A})$ to $k_1^*({\bf A})$. Three facts need to be pointed out. First, $k_p^*({\bf A})$ is right-continuous, which is an easy consequence of Theorem 2. Second, the points $(p_0,k_0)$ corresponding to the hollow circles in Fig. 1 satisfy $\gamma(\ell_{p_0},{\bf A},k_0)=1$. Third, at every abscissa $p_0$ of a point of discontinuity, the one-sided limits satisfy $\lim_{p\rightarrow p_0^-}k_p^*({\bf A})-\lim_{p\rightarrow p_0^+}k_p^*({\bf A})=1$; this follows from Theorem 1, since $\gamma(\ell_{p_0},{\bf A},k_0)=1$ implies $\gamma(\ell_{p_0},{\bf A},k_0-1)<1$.

Finally, we introduce an important property of $\gamma(\ell_p,{\bf A},k)$ as a function of $p$ when ${\bf A}$ is a random matrix.

Theorem 3: Suppose the entries of ${\bf A}\in\mathbb{R}^{M\times N}$ are i.i.d. and satisfy a continuous probability distribution. Then for $k\le M$, $\gamma(\ell_p,{\bf A},k)$ is strictly increasing in $p\in[0,1]$ with probability one.


Figure 2: This figure shows a diagrammatic sketch of $\gamma(\ell_p,{\bf A},k)$ as a function of $p$ for different $k$ when $\bf A$ is a random matrix.

Fig. 2 schematically shows $\gamma(\ell_p,{\bf A},k)$ as a function of $p$ for different $k$. According to Theorem 1, these curves are strictly ordered in $k$ and never intersect. Theorem 2 reveals that $\gamma(\ell_p,{\bf A},k)$ is continuous in $p$. For a random matrix $\bf A$ with i.i.d. entries drawn from a continuous probability distribution, $\gamma(\ell_p,{\bf A},k)$ is strictly increasing in $p$ with probability 1 by Theorem 3. According to the definition of $k_p^*({\bf A})$, the curves crossing the level $\gamma(\ell_p,{\bf A},k)=1$ for some $p\in[0,1]$ are those with $k_1^*({\bf A})+1\le k\le k_0^*({\bf A})$. Denoting by $p_k^*({\bf A})$ the value of $p$ at which the curve for sparsity $k$ crosses this level, the abscissae of these crossings are $p_{k_0^*}^*$, $p_{k_0^*-1}^*$, $\dots$, $p_{k_1^*+1}^*$ from left to right. Therefore, Fig. 1 is easily derived from Fig. 2 when $\bf A$ is a random matrix.

3 Download the paper

L. Chen and Y. Gu, "On the Null Space Constant for $\ell_p$ Minimization," IEEE Signal Processing Letters, 22(10):1600-1603, 2015.

4 Contact the authors 

For any questions or comments about the code and the website, please contact Yuantao Gu and Laming Chen.