
\(l_0\)-LMS: an efficient and robust adaptive filtering algorithm for sparse system identification

1  Introduction

 

    \(l_0\)-LMS is an efficient and robust adaptive filtering algorithm for sparse system identification. It was first proposed by Yuantao Gu, Jian Jin, and Shunliang Mei in \(l_0\) Norm Constraint LMS for Sparse System Identification, IEEE Signal Processing Letters, 16(9), 2009 [1], and was later comprehensively analyzed by Guolong Su, Jian Jin, Yuantao Gu, and Jian Wang in Performance Analysis of \(l_0\) Norm Constraint Least Mean Square Algorithm, IEEE Transactions on Signal Processing, 60(5), 2012 [2].

 

  

Figure 1: The setup of sparse system identification. 

    \(l_0\)-LMS incorporates an \(l_0\) norm penalty on the filter tap-weights into the cost function of traditional LMS,

$$\xi(n) = |e(n)|^2 + \gamma\|{\bf w}(n)\|_0.$$

Since the \(l_0\) norm is not differentiable, it is approximated by a continuous function (in [1], \(\|{\bf w}\|_0 \approx \sum_i (1 - e^{-\alpha|w_i|})\)); after a further first-order approximation, the tap-weight recursion of \(l_0\)-LMS is

$$w_i(n+1) = w_i(n) + \mu e(n) x(n-i) + \kappa g(w_i(n))$$

where

$$g(w_i(n)) = \left\{\begin{array}{ll}2\alpha^2w_i(n)-2\alpha{\rm sgn}(w_i(n)) & |w_i(n)|<1/\alpha;\\ 0 &\textrm{elsewhere}.\end{array}\right.$$
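    To make the recursion concrete, below is a minimal Python sketch of the \(l_0\)-LMS update. The released code (Section 6) is in MATLAB; this translation, including the helper names `l0_lms` and `zero_attractor`, is illustrative only.

```python
import numpy as np

def zero_attractor(w, alpha):
    """Element-wise zero-point attraction g(w):
    2*alpha^2*w - 2*alpha*sgn(w) for |w| < 1/alpha, 0 elsewhere."""
    g = 2 * alpha**2 * w - 2 * alpha * np.sign(w)
    g[np.abs(w) >= 1.0 / alpha] = 0.0
    return g

def l0_lms(x, d, L, mu, kappa, alpha):
    """Identify an L-tap system from input x and desired signal d using
    w_i(n+1) = w_i(n) + mu*e(n)*x(n-i) + kappa*g(w_i(n)).
    Setting kappa = 0 recovers standard LMS."""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        x_n = x[n - L + 1 : n + 1][::-1]   # regressor [x(n), ..., x(n-L+1)]
        e = d[n] - w @ x_n                 # a priori output error e(n)
        w = w + mu * e * x_n + kappa * zero_attractor(w, alpha)
    return w
```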

    When the unknown system response is sparse or nearly sparse, \(l_0\)-LMS converges much faster than traditional LMS and several variants tailored to sparse system identification. Furthermore, it inherits the robustness, easy implementation, and low computational cost of the traditional LMS algorithm.

    In addition, the approximate \(l_0\) norm constraint can be readily incorporated into most LMS variants to improve them.

 

2  Behavior and Analysis

 

    The term \(\kappa g(w_i(n))\) in the recursion of \(l_0\)-LMS is called the zero-point attraction; it reduces the distance between small tap-weights and the origin. Specifically, within the region \((-1/\alpha, 1/\alpha)\), the smaller a tap-weight is, the stronger the attraction it experiences.

 

Figure 2: The zero-point attraction of \(l_0\)-LMS.  

    Consequently, the adaptation of small tap-weights in \(l_0\)-LMS can be described as

$$w(n+1) = w(n) + \textrm{gradient correction} + \textrm{zero-point attraction}.$$

    The zero-point attraction accelerates the convergence of the tap-weights corresponding to small coefficients, with the byproduct of a bias in steady state. However, theoretical analysis shows that the resulting increase in steady-state mean square deviation can be readily controlled by parameter selection. Furthermore, the mean square convergence behavior of \(l_0\)-LMS can be exactly predicted.
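    The linear decay of the attraction strength inside \((-1/\alpha, 1/\alpha)\) is easy to verify numerically; the value of \(\alpha\) in this sketch is arbitrary:

```python
import numpy as np

alpha = 10.0                                     # attraction region is (-0.1, 0.1)
w = np.array([0.001, 0.025, 0.05, 0.075, 0.12])  # sample tap-weight values
g = np.where(np.abs(w) < 1 / alpha,
             2 * alpha**2 * w - 2 * alpha * np.sign(w), 0.0)
print(np.abs(g))  # [19.8 15. 10. 5. 0.]: the smaller |w|, the stronger the pull
```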

 

3  Parameters and Steady State Performance

 

    Besides the step-size \(\mu\) of standard LMS, \(l_0\)-LMS introduces two parameters, \(\alpha\) and \(\kappa\). \(\alpha\) should be selected so that most small coefficients fall within the interval \((-1/\alpha, 1/\alpha)\). The best choice of \(\kappa\) is

$$\kappa_{\rm{opt}}= \frac{\sqrt{\beta_3}}{2}\left(\sqrt[4]{\frac{\beta_1+\beta_2}{\beta_1-\beta_2}}-\sqrt[4]{\frac{\beta_1-\beta_2}{\beta_1+\beta_2}}\right),$$

where \(\beta_i\ (i=1,2,3)\) are constants that depend on the filter length, the measurement noise power, the step-size, and the number of non-zero coefficients; please refer to [2] for details. When \(\kappa_{\rm opt}\) is adopted, the steady-state mean square deviation is reduced to

$$D^{\rm min}_\infty = \frac{\mu P_v L}{2-(L+2)\mu P_x}+\frac{\beta_3}{2}\left(\sqrt{\beta_1^2-\beta_2^2}-\beta_1\right),$$

where \(P_x\) and \(P_v\) denote the powers of the input signal and the measurement noise, respectively.
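    For a rough sense of scale, the first (LMS-like) term of \(D^{\rm min}_\infty\) is easy to evaluate; the parameter values in this sketch are hypothetical:

```python
L, mu, P_x, P_v = 64, 0.005, 1.0, 1e-4   # hypothetical length, step size, powers
lms_term = mu * P_v * L / (2 - (L + 2) * mu * P_x)
print(lms_term)  # ~1.9e-5; the beta-dependent term subtracts further from this
```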

    In practical implementations, we suggest first choosing the step-size \(\mu\) for \(l_0\)-LMS in the same way as for standard LMS. Then choose \(\alpha\) as large as possible, based on prior information about the unknown system. Finally, choose \(\kappa\) experimentally by gradually increasing it from zero, as sketched below.
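    A minimal sketch of that tuning procedure on a synthetic sparse system follows; all parameter values and the helper `run_msd` are hypothetical:

```python
import numpy as np

def run_msd(h, mu, kappa, alpha, n_iter=5000, sigma_v=0.01, seed=0):
    """One l0-LMS run against the known response h; returns a steady-state
    MSD estimate averaged over the last 500 iterations."""
    rng = np.random.default_rng(seed)
    L = len(h)
    w = np.zeros(L)
    x = rng.standard_normal(n_iter + L)
    msd = []
    for n in range(L - 1, n_iter + L - 1):
        x_n = x[n - L + 1 : n + 1][::-1]
        d = h @ x_n + sigma_v * rng.standard_normal()  # noisy system output
        e = d - w @ x_n
        g = 2 * alpha**2 * w - 2 * alpha * np.sign(w)
        g[np.abs(w) >= 1.0 / alpha] = 0.0
        w = w + mu * e * x_n + kappa * g
        msd.append(np.sum((w - h) ** 2))
    return np.mean(msd[-500:])

h = np.zeros(64)
h[[5, 20, 41]] = [1.0, -0.5, 0.8]       # a sparse system with 3 non-zero taps
for kappa in [0.0, 1e-6, 1e-5, 1e-4]:   # increase kappa gradually from zero
    print(kappa, run_msd(h, mu=0.005, kappa=kappa, alpha=10.0))
```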

 

4  Numerical Results

 

    In the first experiment, the steady-state performance with respect to \(\kappa\) is considered. Referring to Fig. 3, the theoretical steady-state MSD of \(l_0\)-LMS is in good agreement with the experimental results. As \(\kappa\) grows from \(10^{-9}\), the steady-state MSD decreases at first, which means that a moderate amount of zero-point attraction helps to sufficiently reduce the amplitude of the tap-weights corresponding to zero coefficients. On the other hand, a larger \(\kappa\) intensifies the zero-point attraction term and increases the bias of the small coefficients. An overly large \(\kappa\) causes too much bias and thus deteriorates the overall performance. \(\kappa_{\rm opt}=3.75\times 10^{-7}\) produces the minimum steady-state MSD, marked with a square in Fig. 3. Again, the simulation result agrees well with the analytical value.

 

Figure 3: Steady-state MSD of LMS and \(l_0\)-LMS (with respect to different \(\kappa\)), where the SNR is 40 dB and the solid square denotes \(\kappa_{\rm opt}\).

  

    The second experiment demonstrates the convergence process for various step sizes, comparing LMS and \(l_0\)-LMS; please refer to Fig. 4. As with traditional LMS, a smaller step size yields a slower convergence rate but a lower steady-state MSD, so the choice of step size should balance convergence rate against steady-state performance. Furthermore, \(l_0\)-LMS converges faster than LMS when their step sizes are identical; a simulation sketch in this spirit is given below.
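    The following self-contained sketch prints MSD checkpoints for LMS (\(\kappa=0\)) and \(l_0\)-LMS at identical step sizes; the system, step sizes, and \(\kappa\) values are hypothetical:

```python
import numpy as np

def msd_curve(h, mu, kappa, alpha=10.0, n_iter=3000, sigma_v=0.01, seed=1):
    """MSD trajectory of l0-LMS (kappa > 0) or standard LMS (kappa = 0)."""
    rng = np.random.default_rng(seed)
    L, w = len(h), np.zeros(len(h))
    x = rng.standard_normal(n_iter + L)
    out = np.empty(n_iter)
    for k, n in enumerate(range(L - 1, n_iter + L - 1)):
        x_n = x[n - L + 1 : n + 1][::-1]
        e = h @ x_n + sigma_v * rng.standard_normal() - w @ x_n
        g = 2 * alpha**2 * w - 2 * alpha * np.sign(w)
        g[np.abs(w) >= 1.0 / alpha] = 0.0
        w = w + mu * e * x_n + kappa * g
        out[k] = np.sum((w - h) ** 2)
    return out

h = np.zeros(64)
h[[5, 20, 41]] = [1.0, -0.5, 0.8]
for mu in [0.002, 0.005]:                 # two step sizes
    for kappa in [0.0, 1e-5]:             # LMS vs l0-LMS at the same mu
        c = msd_curve(h, mu, kappa)
        print(mu, kappa, 10 * np.log10(c[[500, 1500, 2999]]))  # MSD in dB
```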

 

 

 Figure 4: MSD convergence of LMS and \(l_0\)-LMS with respect to different step sizes \(\mu\), where \(\kappa\) is chosen as optimal for \(l_0\)-LMS. 

 

5  Extensions

 

    As noted earlier on this page, \(l_0\)-LMS can be readily modified to adopt other techniques developed for traditional LMS. Another extension of \(l_0\)-LMS is its application to sparse signal recovery in compressive sensing. Furthermore, merging the idea of zero-point attraction into the least squares solution yields a new and efficient algorithm called Zero-point Attracting Projection (ZAP), which is proposed in the paper A Stochastic Gradient Approach on Compressive Sensing Signal Reconstruction Based on Adaptive Filtering Framework, IEEE Journal of Selected Topics in Signal Processing, 4(2), 2010, by Jian Jin, Yuantao Gu, and Shunliang Mei. Please refer to the webpage of ZAP.

 

6  Download the code

 

    Download l0LMS v1.0, which contains \(l_0\)-LMS implemented in MATLAB. The code was written primarily by Yuantao Gu.

 

7  Download the paper

 

[1] Y. Gu, J. Jin, and S. Mei, \(l_0\) Norm Constraint LMS for Sparse System Identification, IEEE Signal Processing Letters, 16(9):774-777, 2009.

[2] G. Su, J. Jin, Y. Gu, and J. Wang, Performance Analysis of \(l_0\) Norm Constraint Least Mean Square Algorithm, IEEE Transactions on Signal Processing, 60(5): 2223-2235, 2012.

 

8  Contact the authors

 

    For any questions or comments about the code or this website, please contact Yuantao Gu.