
Perturbation Analysis of Sparse Signal Recovery Algorithms

1 Introduction

In the field of compressive sensing (CS), an $s$-sparse signal ${\bf x}\in\mathbb{R}^{n}$ is linearly observed through a known sensing matrix ${\bf A}\in\mathbb{R}^{m\times n}$

$$
{\bf y}={\bf Ax},
$$

and the goal of sparse signal recovery is to estimate $\bf x$ from the observation vector ${\bf y}\in\mathbb{R}^{m}$ with $m\ll n$. To apply the theory of CS in practice, the effects of noise and perturbation must be taken into consideration. These non-ideal factors can be classified into three main categories, as shown in Figure 1.

  • Signal noise: background noise and signals that are not exactly sparse;
  • System perturbation: non-ideal effects in the sensing devices;
  • Observation noise: quantization effects.


Figure 1: Non-ideal factors in Compressive Sensing.

Two practical scenarios are usually considered in CS applications. In the first, $\bf A$ represents a system model and $\bf E$ denotes the precision error introduced when the system is physically implemented. The whole sensing process is then
\begin{equation}\label{former}
\tilde{\bf y}=({\bf A+E})({\bf x+e})+{\bf n},
\end{equation}
and only the nominal sensing matrix and the contaminated measurement vector are available for recovery, i.e., $\hat{\bf x}=\textrm{R}({\bf A},\tilde{\bf y})$, where $\textrm{R}(\cdot)$ denotes a certain recovery algorithm.

In the second scenario, common in many other problems, $\bf E$ arises from mis-modeling of the actual system $\bf A$. The available matrix is therefore $\tilde{\bf A}={\bf A+E}$, and the sensing process is
\begin{equation}\label{latter}
\tilde{\bf y}={\bf A}({\bf x+e})+{\bf n}.
\end{equation}
Both the sensing matrix and measurement vector are contaminated, and the recovered solution is $\hat{\bf x}=\textrm{R}(\tilde{\bf A},\tilde{\bf y})$.
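To make the two models concrete, the following Python sketch builds both observation vectors. All dimensions and noise levels here are illustrative assumptions, not values from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 64, 256, 5                                      # illustrative sizes

A = rng.normal(scale=1 / np.sqrt(m), size=(m, n))         # nominal sensing matrix
E = 0.01 * rng.normal(scale=1 / np.sqrt(m), size=(m, n))  # system perturbation

x = np.zeros(n)                                           # s-sparse signal
x[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)
e = 1e-3 * rng.normal(size=n)                             # signal noise
noise = 1e-3 * rng.normal(size=m)                         # observation noise

# Former model: the physical system is A + E but only A is known,
# so recovery is R(A, y_former).
y_former = (A + E) @ (x + e) + noise

# Latter model: the actual system is A while the available matrix is
# A + E, so recovery is R(A + E, y_latter).
y_latter = A @ (x + e) + noise
```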

Based on these two models, we derive a series of theoretical results on the performance of sparse signal recovery. The performance of Orthogonal Matching Pursuit (OMP) is fully analyzed and discussed in [1], and the performance of greedy pursuits with replacement is analyzed and compared with oracle recovery in [2].

2 Theoretical Results

2.1 Analysis of Orthogonal Matching Pursuit

The essence of the OMP algorithm is support recovery, i.e., detecting the locations of the nonzero entries of $\bf x$. Based on the Restricted Isometry Property (RIP), we derive sufficient and necessary conditions that guarantee exact support recovery by OMP in different scenarios; a minimal sketch of the algorithm is given below for reference.
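This is a minimal OMP sketch (not the authors' code): each iteration detects one support index by correlation with the residual and then re-fits the signal by least squares on the detected support.

```python
import numpy as np

def omp(A, y, s):
    """Recover an s-sparse x from y ~ A x in s greedy iterations."""
    support = []
    residual = y.copy()
    for _ in range(s):
        # The residual is orthogonal to the chosen columns, so a new
        # index is detected in each iteration.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # Re-fit by least squares on the current support estimate.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat
```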

For signal noise, define
$$
\beta=\frac{\|{\bf e}\|_2}{\|{\bf x}\|_2},\quad\gamma=\frac{\|{\bf e}\|_1}{\sqrt{s}\|{\bf x}\|_2}
$$
to quantify the background noise-to-signal ratio. For system perturbation and observation noise, define
$$
\varepsilon_{\bf A}^{(s)}=\frac{\|{\bf E}\|_2^{(s)}}{\|{\bf A}\|_2^{(s)}},\quad\varepsilon_{\bf y}=\frac{\|{\bf n}\|_2}{\|\tilde{\bf y}-{\bf n}\|_2}
$$
to quantify the sensing noise-to-signal ratio and the observation noise-to-signal ratio, respectively, where $\|{\bf A}\|_2^{(s)}$ denotes the largest spectral norm over all $s$-column submatrices of $\bf A$. Assume $\varepsilon_{\bf A}^{(s)}<1$ and that the signal noise $\bf e$ does not change the support of $\bf x$, i.e., the locations of the $s$ largest-magnitude entries of ${\bf x+e}$ coincide with the support of $\bf x$. Define $t_0$ as the smallest magnitude among the $s$ largest-magnitude entries of ${\bf x+e}$. A sketch of these ratios is given below; the theorem after it summarizes the main result for the latter model (\ref{latter}).
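The ratios above can be computed directly; note that $\|{\bf A}\|_2^{(s)}$ requires enumerating all $s$-column submatrices, so the exhaustive sketch below is only feasible for small $n$. The helper names are assumptions for illustration.

```python
from itertools import combinations
import numpy as np

def snorm(M, s):
    """Largest spectral norm over all s-column submatrices of M
    (exhaustive search; combinatorial in s, small problems only)."""
    return max(np.linalg.norm(M[:, list(c)], 2)
               for c in combinations(range(M.shape[1]), s))

def perturbation_ratios(A, E, x, e, noise, s):
    """Compute (beta, gamma, eps_A, eps_y) for the latter model."""
    beta = np.linalg.norm(e) / np.linalg.norm(x)
    gamma = np.linalg.norm(e, 1) / (np.sqrt(s) * np.linalg.norm(x))
    eps_A = snorm(E, s) / snorm(A, s)
    y_clean = A @ (x + e)          # equals y~ - n in the latter model
    eps_y = np.linalg.norm(noise) / np.linalg.norm(y_clean)
    return beta, gamma, eps_A, eps_y
```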

Theorem 1 Define the equivalent noise
$$
\varepsilon_{\rm{equ}}=\frac{\sqrt{2}}{1-\varepsilon_{\bf A}^{(s)}}\left(\varepsilon_{\bf A}^{(s)}+(1+\varepsilon_{\bf y})(1+\beta+\gamma)-1\right)\|{\bf x}\|_2.
$$
If the available sensing matrix $\tilde{\bf A}$ satisfies the RIP of order $s+1$ with isometry constant
$$
\tilde{\delta}_{s+1}\le\frac{1}{\sqrt{s}+1}-\frac{3}{\sqrt{s}+1}\frac{\varepsilon_{\rm{equ}}}{t_0},
$$
then OMP will recover the support set of $\bf x$ exactly from $\tilde{\bf A}$ and $\tilde{\bf y}$ in $s$ iterations, and the recovery error can be bounded as
$$
\|\hat{\bf x}-{\bf x}\|_2\le\frac{\varepsilon_{\rm{equ}}}{\sqrt{1-\tilde{\delta}_s}}
$$
where $\hat{\bf x}$ is the $s$-sparse signal recovered by OMP.
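The quantities in Theorem 1 are straightforward to evaluate numerically. The helper below, a sketch assuming the ratios have been computed as above, returns the equivalent noise and the admissible isometry constant.

```python
import numpy as np

def omp_rip_threshold(beta, gamma, eps_A, eps_y, x_norm, t0, s):
    """Return (eps_equ, bound): Theorem 1 guarantees exact support
    recovery when the RIP constant of order s+1 is at most `bound`."""
    eps_equ = (np.sqrt(2) / (1 - eps_A)) * (
        eps_A + (1 + eps_y) * (1 + beta + gamma) - 1) * x_norm
    bound = (1 - 3 * eps_equ / t0) / (np.sqrt(s) + 1)
    return eps_equ, bound
```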

According to our analysis, though exact recovery of an almost sparse signal $\bf x$ is no longer feasible, the support set of the best $s$-term approximation of $\bf x$ can still be recovered under reasonable conditions. A constructed example demonstrates that these sufficient conditions for support recovery are rather tight. When $\bf x$ is strongly decaying, it is proved that the sufficient conditions can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. All these results carry over analogously to the former model (\ref{former}). Please refer to [1] for more details.

2.2 Analysis of Greedy Pursuits with Replacement

The greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), in which the support estimate is re-evaluated in each iteration. Based on the RIP, a unified form of the error bounds of these recovery algorithms is derived as follows. The definitions are the same as those in the previous subsection, and the result is again based on the latter model (\ref{latter}); a minimal sketch of IHT follows as one representative of the family.
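This IHT sketch is not the authors' code, only an illustration of the "replacement" mechanism: each iteration takes a gradient step and then re-evaluates the entire support by hard thresholding.

```python
import numpy as np

def iht(A, y, s, iters=100, mu=1.0):
    """Iterative hard thresholding for y ~ A x with s-sparse x;
    convergence roughly requires mu <= 1 / ||A||_2^2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + mu * A.T @ (y - A @ x)      # gradient step on ||y - A x||^2 / 2
        keep = np.argsort(np.abs(g))[-s:]   # locations of the s largest entries
        x = np.zeros_like(x)
        x[keep] = g[keep]                   # hard thresholding: keep s largest
    return x
```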

Theorem 2 If the sensing matrix $\tilde{\bf A}$ satisfies
$$
\tilde{\delta}_{bs}\le c
$$
where $b$ and $c$ are two constants that depend only on the specific algorithm, then after finitely many iterations these algorithms estimate $\bf x$ with accuracy
$$
\frac{\|{\bf x}-\hat{\bf x}\|_2}{\|{\bf x}\|_2}\le \frac{\tilde{C}}{1-\varepsilon_{\bf A}^{(s)}}\left(\varepsilon_{\bf A}^{(s)}+(1+\varepsilon_{\bf y})(1+\beta+\gamma)-1\right),
$$

where $\tilde{C}$ is a quantity that depends only on $\tilde{\bf A}$.

The detailed forms of the quantities $b$, $c$, and $\tilde{C}$ in Theorem 2 can be found in [2]. For the former model (\ref{former}), the result is similar. The bound is almost linear in these non-ideal factors, which establishes the stability of these algorithms against general perturbations. Comparing these results with the upper and lower bounds of oracle recovery, i.e., the least squares method with the locations of the largest-magnitude entries known a priori, shows that the performance of these greedy algorithms and that of oracle recovery are of the same order. In other words, oracle-order recovery performance of greedy pursuits with replacement is guaranteed. A sketch of the oracle LS baseline is given below.
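The oracle LS baseline admits a short sketch: least squares restricted to a support that is assumed known a priori.

```python
import numpy as np

def oracle_ls(A, y, support):
    """Least squares on the known support; zero elsewhere."""
    support = list(support)
    x_hat = np.zeros(A.shape[1])
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x_hat[support] = coef
    return x_hat
```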

3 Numerical Results

A numerical simulation is performed to compare the relative errors of greedy pursuits with replacement and the oracle LS method for different sparsity levels. In each trial, a matrix $\bf A$ of size $512\times 2048$ is randomly generated with independent Gaussian entries of variance $\sigma^2=1/512$. For each pair of relative perturbations, $\tilde{\bf y}$ and $\tilde{\bf A}$ are generated according to the latter model (\ref{latter}). For fixed sparsity levels $s=5$, $10$, and $15$, the relative perturbations $\varepsilon_{\bf y}$ and $\varepsilon_{\bf A}$ both vary from $0$ to $0.1$ with step size $0.005$. The simulation is repeated for $1000$ trials to obtain the average relative error. The results are shown in Figure 2, where results for the same $s$ appear in the same row and those for the same recovery algorithm in the same column, with labels on the left and on the top, respectively. As can be seen, for the same $s$, the surface of the oracle LS method lies below the others, while the surface of CoSaMP lies on top. This verifies that the relative errors of CoSaMP, SP, and IHT are almost linear in the measurement perturbation as well as the system perturbation, and that these greedy pursuits with replacement provide oracle-order recovery performance. To display the results more intuitively, the average relative error is plotted versus $\varepsilon_{\bf A}$ for different $s$ when $\varepsilon_{\bf y}=0$, as in Figure 3. As can be seen, the relative error scales almost linearly with $\varepsilon_{\bf A}$, and the slope of each curve increases with $s$ for the same recovery algorithm. A sketch of a single cell of this experiment grid is given after the figures.


Figure 2: Average relative error ($1000$ trials) of greedy pursuits with replacement and oracle LS recovery versus the relative perturbations $\varepsilon_{\bf y}$ and $\varepsilon_{\bf A}$. Results for the same $s$ appear in the same row and those for the same recovery algorithm in the same column, with labels on the left and on the top, respectively.


Figure 3: Average relative error ($1000$ trials) of greedy pursuits with replacement and oracle LS recovery versus $\varepsilon_{\bf A}$ for different $s$ when $\varepsilon_{\bf y}=0$.
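The following sketch reproduces a single $(\varepsilon_{\bf A}, \varepsilon_{\bf y})$ cell of the experiment, reusing the iht sketch from Section 2.2. As simplifications, the trial count is reduced, the signal noise $\bf e$ is omitted, and the full spectral norm stands in for the $s$-restricted norm when scaling $\bf E$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 512, 2048, 10
eps_A, eps_y, trials = 0.05, 0.05, 20      # one grid point; reduced trials

errors = []
for _ in range(trials):
    A = rng.normal(scale=np.sqrt(1 / m), size=(m, n))   # variance 1/512
    x = np.zeros(n)
    x[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)
    y = A @ x

    # Scale the perturbations to the requested relative levels.
    E = rng.normal(size=(m, n))
    E *= eps_A * np.linalg.norm(A, 2) / np.linalg.norm(E, 2)
    noise = rng.normal(size=m)
    noise *= eps_y * np.linalg.norm(y) / np.linalg.norm(noise)

    # Latter model: recover from the perturbed pair (A + E, y + noise).
    A_tilde = A + E
    x_hat = iht(A_tilde, y + noise, s,
                mu=1 / np.linalg.norm(A_tilde, 2) ** 2)
    errors.append(np.linalg.norm(x - x_hat) / np.linalg.norm(x))

print("average relative error:", np.mean(errors))
```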

4 Download the papers

[1] J. Ding, L. Chen, and Y. Gu, Perturbation Analysis of Orthogonal Matching Pursuit, IEEE Transactions on Signal Processing, 61(2):398-410, 2013.
[2] L. Chen and Y. Gu, Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations, accepted for publication in IEEE Transactions on Signal Processing.

5 Contact the authors

For any questions or comments about the code and the website, please contact Yuantao Gu and Laming Chen.