**1 Introduction**

Zero-point Attracting Projection (ZAP) is a non-convex algorithm we proposed to solve the sparse recovery problem introduced below. ZAP was first proposed in *A stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework*, **IEEE Journal of Selected Topics in Signal Processing, 4(2), 2010, by Jian Jin, Yuantao Gu, and Shunliang Mei**. The convergence of $\ell_1$-ZAP is analyzed in *Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection*, **IEEE Transactions on Signal Processing, 60(8), 2012, by Xiaohan Wang, Yuantao Gu, and Laming Chen**. A generalization of $\ell_0$-ZAP and its analysis are studied in the manuscript *A Non-convex Approach for Sparse Recovery with Convergence Guarantee*, **submitted for possible publication at IEEE Transactions on Signal Processing, by Laming Chen and Yuantao Gu**.

**2 Motivation**

One of the main tasks of Compressive Sensing (CS) is to design a sparse signal recovery algorithm to find the sparsest solution of an ill-conditioned system,

$$

{\bf y} = {\bf Ax},

$$

where ${\bf x}\in\mathbb{R}^N$, ${\bf A}\in\mathbb{R}^{M\times N}$ with $M\ll N$, and ${\bf y}\in\mathbb{R}^M$ denote the unknown sparse signal, the sensing matrix, and the measurement, respectively.

Based on our earlier work on the adaptive filtering framework for sparse recovery, we noticed that the sparse solution can be searched iteratively within the solution space, which speeds up convergence. That is, the gradient correction term in $\ell_0$-LMS can be omitted. As illustrated in Fig. 1, the initial vector ${\bf x}_0$ is taken as the least squares solution, which belongs to the solution space. In each iteration, only the zero-point attraction term is used to update the vector. As soon as the updated vector departs from the solution space, it is replaced by its projection onto the solution space.

Figure 1: The iteration of $\ell_0$-LMS-based sparse recovery and $\ell_0$-ZAP.

**3 Zero-point Attracting Projection (ZAP) Algorithm**

In the ZAP algorithm, the zero-point attracting term is used to update the iterative solution, which is then projected back onto the solution space. The procedure of ZAP can be summarized as follows.

$$
\begin{array}{l} \hline {\bf Initialization}: {\bf A}^{\dagger}={\bf A}^{\rm T}({\bf AA}^{\rm T})^{-1}, {\bf x}_0 = {\bf A}^{\dagger}{\bf y}, n=0;\\\hline {\bf while~} \text{stop condition is not satisfied} \\ {\bf ~~~}\text{1) Zero-point attraction:} \\ {\bf ~~~~~~~~~} \hat{\bf x}_{n+1}={\bf x}_n-\gamma\cdot \nabla {{\rm F}({\bf x}_n)}; \\ {\bf ~~~}\text{2) Adaptation:} \\ {\bf ~~~~~~~~~}{\bf x}_{n+1}=\hat{\bf x}_{n+1}+{\bf A}^{\dagger}({\bf y}-{\bf A}\hat{\bf x}_{n+1}); \\ {\bf ~~~}\text{3) Update the index: } n = n +1; \\ {\bf end} \\\hline \end{array}
$$

where $\nabla {{\rm F}({\bf x}_n)}$ is the zero-point attracting term, ${\rm F}({\bf x})$ is a function representing the sparse penalty of the vector ${\bf x}$, and the positive parameter $\gamma$ denotes the step size of the zero-point attraction step.
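The two-step iteration above can be sketched in a few lines of NumPy. This is a minimal illustration of the procedure, not the authors' MATLAB implementation; the stopping rule (a fixed iteration count) and the default step size are assumptions made here for concreteness.

```python
import numpy as np

def zap(A, y, grad_F, gamma=0.01, n_iters=500):
    """Zero-point Attracting Projection (sketch).

    A      : (M, N) sensing matrix, M << N
    y      : (M,) measurement vector
    grad_F : callable returning the (sub)gradient of the sparse penalty F
    """
    # Pseudo-inverse A^dagger = A^T (A A^T)^{-1}
    A_dag = A.T @ np.linalg.inv(A @ A.T)
    # Least squares initialization, which lies in the solution space {x : Ax = y}
    x = A_dag @ y
    for _ in range(n_iters):
        # 1) Zero-point attraction
        x_hat = x - gamma * grad_F(x)
        # 2) Adaptation: project back onto the solution space
        x = x_hat + A_dag @ (y - A @ x_hat)
    return x
```

Note that after the adaptation step $ {\bf A}{\bf x}_{n+1} = {\bf y}$ holds exactly, since ${\bf A}{\bf A}^{\dagger} = {\bf I}$; only the attraction step moves the iterate off the solution space.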

ZAP was first proposed with an $\ell_0$-norm constraint, termed $\ell_0$-ZAP, in which an approximate $\ell_0$ norm is utilized as the function ${\rm F}({\bf x})$. $\ell_0$-ZAP belongs to the non-convex optimization methods and clearly outperforms conventional algorithms. The penalty function is $\|{\bf x}\|_0$ and its gradient is approximated as

$$

\nabla {{\rm F}_{\ell_0}({\bf x})}\approx [{\rm f}(x_1),{\rm f}(x_2),\cdots,{\rm f}(x_N)]^{\rm T}

$$

and

$$

{\rm f}(x)=\left\{ \begin{array}{cl} -\alpha^2x-\alpha, & -\frac{1}{\alpha}\le x< 0; \\ -\alpha^2x+\alpha, & 0<x\le\frac{1}{\alpha}; \\ 0, & {\rm elsewhere}. \end{array} \right.

$$
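The piecewise-linear attraction function ${\rm f}(x)$ above translates directly into code. The following is a vectorized NumPy sketch; the default value of $\alpha$ is an illustrative assumption, not a recommended setting.

```python
import numpy as np

def grad_l0(x, alpha=5.0):
    """Approximate gradient of the l0-norm penalty, applied componentwise."""
    g = np.zeros_like(x)
    neg = (x >= -1.0 / alpha) & (x < 0)   # -1/alpha <= x < 0
    pos = (x > 0) & (x <= 1.0 / alpha)    # 0 < x <= 1/alpha
    g[neg] = -alpha**2 * x[neg] - alpha
    g[pos] = -alpha**2 * x[pos] + alpha
    return g                              # zero elsewhere (including x = 0)
```

Components with magnitude larger than $1/\alpha$ receive no attraction, so only entries near zero are pulled toward zero; larger $\alpha$ narrows the attraction region.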

In $\ell_1$-ZAP, the function ${\rm F}({\bf x})$ in the zero-point attracting term is the $\ell_1$ norm of ${\bf x}$. Since it is non-differentiable, the gradient of ${\rm F}({\bf x})$ is replaced by a sub-gradient. The zero-point attraction step then becomes

$$

\hat{\bf x}_{n+1}={\bf x}_n-\gamma\cdot {\rm sgn}({\bf x}_n).

$$
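In code, the $\ell_1$ attraction step is a single signum step, shown here as a minimal NumPy sketch:

```python
import numpy as np

def za_step_l1(x, gamma):
    # A subgradient of ||x||_1 is sign(x) componentwise (0 at x = 0),
    # so each nonzero entry is pulled toward zero by gamma.
    return x - gamma * np.sign(x)
```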

One may notice that $\ell_1$-ZAP is similar to the Projected Subgradient Algorithm; in fact, ZAP can be considered a non-convex generalization of the latter. However, both the theoretical analysis and the numerical simulations show that the non-convex penalty changes the behavior of the algorithm and evidently improves its performance.

**4 Convergence Analysis**

Though $\ell_1$-ZAP is similar to the Projected Subgradient Algorithm, its convergence is proved by a different approach. Please refer to X. Wang et al. (2012) [2] for details. Here we highlight *Lemma 1*, which indicates that the $\ell_1$ solution is not far from the $\ell_2$ solution and lays the groundwork for the proof.

**Lemma 1** *Suppose that ${\bf x}\in\mathbb{R}^N$ satisfies ${\bf y}={\bf Ax}$, with given ${\bf A}\in\mathbb{R}^{M\times N}$ and ${\bf y}\in\mathbb{R}^M$. ${\bf x}^*$ is the unique solution of*

$$

\min_{\bf x}\|{\bf x}\|_1, \quad\textrm{subject to }{\bf y=Ax}.

$$

*If $\|{\bf x}-{\bf x}^*\|_2$ is bounded by a positive constant $M_0$, then there exists a uniform positive constant $t$ depending on ${\bf A}$,${\bf y}$, and $M_0$, such that*

$$
\|{\bf x}\|_1-\|{\bf x}^*\|_1\ge t\|{\bf x}-{\bf x}^*\|_2
$$

*holds for arbitrary ${\bf x}$ satisfying ${\bf y}={\bf Ax}$.*

As for $\ell_0$-ZAP, it was recently analyzed in a general framework of weakly $\rho$-convex sparseness measures, which covers most known non-convex sparse recovery algorithms. Please refer to L. Chen and Y. Gu [6] for details.

**5 Extension**

The ZAP algorithm for recovery of a single sparse vector has been successfully extended to various scenarios, including multiple measurement vectors (MMV), block sparse signals, and online recovery in sequential compressive sensing.

Furthermore, the approximate $\ell_0$-norm penalty is generalized to a *sparseness measure* and to $F$-minimization. We analyzed the relation between the exact recovery condition (ERC) and the robust recovery condition (RRC) for $F$-minimization from a topological viewpoint in a more general way. Meanwhile, an approximate projected generalized gradient (APGG) method is proposed and analyzed, providing high sparse recovery performance from a non-convex approach. Please refer to the related webpages for details.

**6 Numerical Results**

For $N=1000$ and $S=50$, the probability of exact reconstruction for various numbers of measurements is shown in Fig. 2. A trial is regarded as an exact reconstruction if the reconstruction SNR is higher than a threshold of $40$dB. $M$ varies from $140$ to $320$, and each point in the experiment is averaged over 200 trials. It can be seen that for any fixed $M$ from $180$ to $260$, ZAP has a higher probability of reconstruction than the other algorithms, which means the ZAP algorithms demand fewer measurements for signal reconstruction. The experiment also indicates that $\ell_0$-ZAP performs better than $\ell_1$-ZAP, as we discussed.
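The success criterion used throughout these experiments can be made precise with a short sketch. The function names below are assumptions made here for illustration; only the $40$dB threshold comes from the experiment description.

```python
import numpy as np

def reconstruction_snr_db(x_true, x_hat):
    """Reconstruction SNR: signal power over reconstruction error power, in dB."""
    err = x_true - x_hat
    return 10.0 * np.log10(np.sum(x_true**2) / np.sum(err**2))

def is_exact(x_true, x_hat, threshold_db=40.0):
    """A trial counts as exact reconstruction when the SNR exceeds the threshold."""
    return reconstruction_snr_db(x_true, x_hat) > threshold_db
```

The probability of exact reconstruction at each $M$ is then simply the fraction of the 200 trials for which `is_exact` returns true.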

Figure 2: Probability of exact reconstruction for various number of measurements, where $N=1000, S=50$.

For $N=1000$ and $M=200$, Fig. 3 illustrates the probability of exact reconstruction for sparsity $S$ varying from $25$ to $70$. All algorithms are repeated 200 times for each value, with the same parameters as in the previous experiment. $\ell_0$-ZAP has the highest probability for each fixed sparsity $S$, and $\ell_1$-ZAP is second, ahead of the other conventional algorithms. The experiment indicates that the ZAP algorithms can recover less sparse signals than the other algorithms.

Figure 3: Probability of exact reconstruction for various sparsity, where $N=1000, M=200$.

The SNR performance is illustrated in Fig. 4, with the measurement SNR varying from $5$dB to $30$dB and 200 trials for each value. The noise is zero-mean white Gaussian, added to the observed vector ${\bf y}$. The parameters are $N=1000$, $M=200$, and $S=30$; the algorithm parameters are the same as in the previous experiments. The reconstruction SNR and the measurement SNR are the signal-to-noise ratios of the reconstructed signal $\hat{\bf x}$ and the measurement ${\bf y}$, respectively. $\ell_0$-ZAP outperforms the other algorithms, while $\ell_1$-ZAP performs almost the same as the others. The experiment indicates that $\ell_0$-ZAP is more robust against noise, and that $\ell_1$-ZAP has no visible disadvantage compared with the other algorithms.

Figure 4: Reconstruction SNR versus measurement SNR, where $N=1000, M=200, S=30$.

**7 Download the code**

Download ZAP v1.0, which contains the algorithms mentioned above implemented in MATLAB. The code was written primarily by Jian Jin. *Download*

**8 Download the paper**

[1] J. Jin, Y. Gu, and S. Mei, A stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework, *IEEE Journal of Selected Topics in Signal Processing*, 4(2):409-420, 2010.

[2] X. Wang, Y. Gu, and L. Chen, Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection, *IEEE Transactions on Signal Processing*, 60(8): 4081-4093, 2012.

[3] Y. You, L. Chen, Y. Gu, W. Feng, and H. Dai, Retrieval of Sparse Solutions of Multiple-Measurement Vectors via Zero-point Attracting Projection, *Signal Processing*, 92(12): 3075-3079, 2012.

[4] J. Liu, J. Jin, and Y. Gu, Efficient Recovery of Block Sparse Signals via Zero-point Attracting Projection, *IEEE International Conference on Acoustics, Speech and Signal Processing* (ICASSP), 3333-3336, Mar. 25-30, 2012, Kyoto, Japan.

[5] Y. You, J. Jin, W. Duan, N. Liu, Y. Gu, and J. Yang, Zero-point attracting projection algorithm for sequential compressive sensing, *IEICE Electronics Express*, 9(4):314-319, 2012.

[6] L. Chen and Y. Gu, A Non-convex Approach for Sparse Recovery with Convergence Guarantee, submitted for possible publication at *IEEE Transactions on Signal Processing*.

**9 Contact the authors**

For any questions or comments about the code and the website, please contact Yuantao Gu.