Publications

Preprint

  • Theory of Spectral Method for Union of Subspaces-Based Random Geometry Graph
    G. Li and Y. Gu
    [Abstract]

    The spectral method is a commonly used scheme, known as Subspace Clustering, for clustering data points lying close to a Union of Subspaces; it proceeds by first constructing a Random Geometry Graph. This paper establishes a theory to analyze this method. Based on this theory, we demonstrate the efficiency of Subspace Clustering under fairly broad conditions. The insights and analysis techniques developed in this paper might also have implications for other random graph problems. Numerical experiments demonstrate the effectiveness of our theoretical study.

    [arXiv] [pdf]
  • Compressed Subspace Learning Based on Canonical Angle Preserving Property
    Y. Jiao, G. Li, and Y. Gu
    [Abstract]

    Union of Subspaces (UoS) is a popular model to describe the underlying low-dimensional structure of data. The fine details of UoS structure can be described in terms of canonical angles (also known as principal angles) between subspaces, which are a well-known characterization of relative subspace positions. In this paper, we prove that random projection with the so-called Johnson-Lindenstrauss (JL) property approximately preserves canonical angles between subspaces with overwhelming probability. This result indicates that random projection approximately preserves the UoS structure. Inspired by this result, we propose a framework of Compressed Subspace Learning (CSL), which enables the extraction of useful information from the UoS structure of data in a greatly reduced dimension. We demonstrate the effectiveness of CSL in various subspace-related tasks such as subspace visualization, active subspace detection, and subspace clustering.

    [arXiv] [pdf]
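
A minimal numerical sketch (not taken from the paper) of the canonical-angle preservation described above: canonical angles are computed from the singular values of the product of orthonormal bases, before and after a Gaussian random projection, which is used here as one convenient example of a JL-type map. All dimensions and the random data are illustrative assumptions.

```python
import numpy as np

def canonical_angles(U, V):
    """Cosines of the canonical (principal) angles between span(U) and span(V)."""
    Qu, _ = np.linalg.qr(U)
    Qv, _ = np.linalg.qr(V)
    s = np.linalg.svd(Qu.T @ Qv, compute_uv=False)
    return np.clip(s, 0.0, 1.0)              # cos(theta_1) >= ... >= cos(theta_d)

rng = np.random.default_rng(0)
N, n, d = 2000, 200, 5                        # ambient dim, projected dim, subspace dim (illustrative)
U = rng.standard_normal((N, d))               # basis of the first subspace
V = rng.standard_normal((N, d))               # basis of the second subspace
A = rng.standard_normal((n, N)) / np.sqrt(n)  # Gaussian random projection (a JL-type map)

before = np.degrees(np.arccos(canonical_angles(U, V)))
after = np.degrees(np.arccos(canonical_angles(A @ U, A @ V)))
print("canonical angles before projection:", np.round(before, 2))
print("canonical angles after  projection:", np.round(after, 2))
```
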
  • Unraveling the Veil of Subspace RIP Through Near-Isometry on Subspaces
    X. Xu, G. Li, and Y. Gu
    [Abstract]

    Dimensionality reduction is a popular approach to tackle high-dimensional data with low-dimensional nature. Subspace Restricted Isometry Property, a newly proposed concept, has proved to be a useful tool in analyzing the effect of dimensionality reduction algorithms on subspaces. In this paper, we provide a characterization of the subspace Restricted Isometry Property, asserting that matrices which act as a near-isometry on low-dimensional subspaces possess the subspace Restricted Isometry Property. This points to a unified approach for establishing the subspace Restricted Isometry Property. Its power is further demonstrated by the fact that this result allows us to prove the subspace RIP for a large variety of random matrices encountered in theory and practice, including subgaussian matrices, partial Fourier matrices, partial Hadamard matrices, partial circulant/Toeplitz matrices, matrices with independent strongly regular rows (for instance, matrices with independent entries having uniformly bounded $4+\epsilon$ moments), and log-concave ensembles. Thus our result could extend the applicability of random projections in subspace-based machine learning algorithms including subspace clustering and allow for the application of some useful random matrices which are easier to implement on hardware or are more efficient to compute.

    [arXiv] [pdf]
  • Lower Bound for RIP Constants and Concentration of Sum of Top Order Statistics
    G. Li, X. Xu, and Y. Gu
    [Abstract]

    Restricted Isometry Property (RIP) is of fundamental importance in the theory of compressed sensing and forms the base of many exact and robust recovery guarantees in this field. Quantitative description of RIP involves bounding the so-called RIP constants of measurement matrices. In this respect, it is noteworthy that most results in the literature concerning RIP are upper bounds of RIP constants, which can be interpreted as theoretical guarantees of successful sparse recovery. On the contrary, the land of lower bounds for RIP constants remains uncultivated. Lower bounds of RIP constants, if they exist, can be interpreted as the fundamental limit of successful sparse recovery. In this paper, lower bounds of RIP constants for Gaussian random matrices are derived, along with a guide for generalization to sub-Gaussian random matrices. This provides a new proof of the fundamental limit that the minimal number of measurements needed to enforce the RIP of order $s$ is $\Omega(s\log({\rm e}N/s))$, which is more straightforward than the classical Gelfand width argument. Furthermore, in the proof we propose a useful technical tool featuring the concentration phenomenon for the top-$k$ sum of a sequence of i.i.d. random variables, which is closely related to mainstream problems in statistics and is of independent interest.

    [arXiv] [pdf]
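
A small simulation sketch of the concentration phenomenon for the sum of the top-$k$ order statistics mentioned above. The chi-squared distribution (squared Gaussian entries), the sequence length, and the number of trials are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, trials = 10000, 200, 500                 # sequence length, top-k, Monte Carlo trials (illustrative)

# Sum of the k largest entries of an i.i.d. sample, repeated over many independent trials.
samples = rng.chisquare(df=1, size=(trials, N))        # e.g., squared standard Gaussian entries
topk_sums = np.sort(samples, axis=1)[:, -k:].sum(axis=1)

mean, std = topk_sums.mean(), topk_sums.std()
print(f"top-{k} sum: mean = {mean:.1f}, std = {std:.2f}, relative spread = {std / mean:.3%}")
```
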
  • Walkman: A Communication-Efficient Random-Walk Algorithm for Decentralized Optimization
    X. Mao, K. Yuan, Y. Hu, Y. Gu, A. H. Sayed, and W. Yin
    [Abstract]

    This paper addresses consensus optimization problems in a multi-agent network, where all agents collaboratively find a minimizer for the sum of their private functions. We develop a new decentralized algorithm in which each agent communicates only with its neighbors. State-of-the-art decentralized algorithms use communications between either all pairs of adjacent agents or a random subset of them at each iteration. Another class of algorithms uses a random walk incremental strategy, which sequentially activates a succession of nodes; these incremental algorithms require diminishing step sizes to converge to the solution, so their convergence is relatively slow. In this work, we propose a random walk algorithm that uses a fixed step size and converges faster than the existing random walk incremental algorithms. Our algorithm is also communication efficient. Each iteration uses only one link to communicate the latest information from one agent to another. Since this communication rule mimics a man walking around the network, we call our new algorithm Walkman. We establish convergence for convex and nonconvex objectives. For decentralized least squares, we derive a linear rate of convergence and obtain a better communication complexity than those of other decentralized algorithms. Numerical experiments verify our analysis results.

    [arXiv]
  • Learning Proximal Operator Methods for Nonconvex Sparse Recovery with Theoretical Guarantee
    C. Yang, Y. Gu, B. Chen, H. Ma, and H.C. So
    [Abstract]

    Sparse recovery has attracted considerable attention in the signal processing community in recent years because of its widespread usage in many applications. Although many convex and nonconvex methods have been proposed for this topic, most of them are computationally expensive. Recently, the learned iterative shrinkage-thresholding algorithm (LISTA) and its variants have been applied to sparse recovery; they are based on specific deep neural networks designed by unfolding ISTA. However, the problem they are able to solve is regularized by the $\ell_1$ norm, which is less effective at promoting sparseness than some nonconvex sparsity-inducing penalties (SIPs). Motivated by the fact that the problems regularized by these SIPs can be solved by proximal operator methods, we devise a network named learning proximal operator method (LePOM) for sparse recovery. For LePOM on a general class of SIPs, a necessary condition for its convergence is established. Based on this condition, a simplified network is proposed with fewer parameters. Theoretical analysis of this network inspires us to further reduce the number of parameters and arrive at proposing Analytical LePOM (ALePOM). ALePOM determines most parameters by solving an optimization problem and significantly reduces the number of parameters. Theoretical analysis shows that if the signal is sufficiently sparse, ALePOM converges linearly. Simulations confirm our analyses and demonstrate that the proposed methods outperform state-of-the-art sparse recovery algorithms and neural network-based methods.
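
For context, the baseline iteration that LISTA-type networks unfold is the classical ISTA step for $\ell_1$-regularized recovery; below is a minimal sketch of that iteration only. It is not the proposed LePOM/ALePOM architecture, and the step size, problem sizes, and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)       # illustrative sensing matrix
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```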

  • Linearized ADMM-Based Non-Orthogonal Pilot Design for Multi-Cell Massive MIMO Systems
    Y. Wu, S. Ma, and Y. Gu
    [Abstract]

    In this work, we propose a novel non-orthogonal pilot design approach to tackle the pilot contamination problem in multi-cell massive multiple input multiple output (MIMO) systems. Assuming the minimum mean square error (MMSE) estimators are adopted at the base stations (BSs), we design pilot signals under power constraints by minimizing the total mean square errors (MSEs) of the MMSE estimators of all BSs. The pilot design problem is difficult to solve due to the non-convex objective function. A linearized alternating direction method of multipliers (ADMM) algorithm is introduced to solve the above non-convex optimization problem. The linearized ADMM algorithm approximates the objective function in a linear form, which makes the original problem solvable. In addition, a lower bound on the uplink capacity is derived with maximum ratio detection that gives insight into how the channel estimation accuracy affects the capacity. Finally, numerical simulations validate that the proposed pilot design approach can achieve higher channel estimation accuracy and uplink achievable sum rate than the state-of-the-art approaches, with lower computational complexity.

  • A Novel Classification Framework with Hypothesis-based Dictionary Learning Model
    L. Meng, D. Lu, and Y. Gu
    [Abstract]

    Dictionary learning methods seek to represent samples on an overcomplete basis. The representations can then be used for classification tasks. When applied to large datasets, traditional dictionary learning models lack the ability to extract features. Recently developed deep dictionary learning models stack dictionary units in multiple layers for better feature extraction capacity. However, the computational cost of these models is very high. In this paper, we propose a new hypothesis-based dictionary learning framework, in which samples are classified according to their representations on a structured dictionary. By adopting our hypothesis-based framework, label information is reinforced during both the training and testing processes. This reinforcement not only enlarges the divergence among sub-dictionaries, but also enforces samples of different categories to exhibit different coding patterns. Furthermore, the proposed framework is much more computationally efficient than available deep dictionary learning algorithms. Experiments on several standard datasets are presented demonstrating that our proposed framework leads to state-of-the-art performance in terms of speed and efficiency.

Peer-reviewed Publications

2020


  • Rigorous Restricted Isometry Property for Low-Dimensional Subspaces
    G. Li, Q. Liu, and Y. Gu
    Applied and Computational Harmonic Analysis, November 11, 2019
    [ELSEVIER] [Abstract]

    Dimensionality reduction is in demand to reduce the complexity of solving large-scale problems with data lying in latent low-dimensional structures in machine learning and computer vision. Motivated by such need, in this work we study the Restricted Isometry Property (RIP) of Gaussian random projections for low-dimensional subspaces in $\mathbb{R}^N$, and rigorously prove that the projection Frobenius norm distance between any two subspaces spanned by the projected data in $\mathbb{R}^n$ ($n < N$) remains almost the same as the distance between the original subspaces with probability no less than $1 - {\rm e}^{-\mathcal{O}(n)}$. Previously the well-known Johnson-Lindenstrauss (JL) Lemma and RIP for sparse vectors have been the foundation of sparse signal processing including Compressed Sensing. As an analogy to the JL Lemma and RIP for sparse vectors, this work allows the use of random projections to reduce the ambient dimension with the theoretical guarantee that the distance between subspaces after compression is well preserved.

    [arXiv] [pdf]
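
A brief numerical sketch (not from the paper) of the projection Frobenius norm distance between two subspaces and of how it behaves under a Gaussian random projection. The $1/\sqrt{2}$ normalization, the dimensions, and the random data are illustrative conventions and assumptions.

```python
import numpy as np

def proj_fnorm_distance(U, V):
    """Projection Frobenius norm distance between span(U) and span(V),
    normalized by sqrt(2) (an illustrative convention)."""
    Pu = np.linalg.qr(U)[0]
    Pv = np.linalg.qr(V)[0]
    return np.linalg.norm(Pu @ Pu.T - Pv @ Pv.T, "fro") / np.sqrt(2)

rng = np.random.default_rng(0)
N, n, d = 1000, 100, 4                           # ambient / projected / subspace dims (illustrative)
U, V = rng.standard_normal((N, d)), rng.standard_normal((N, d))
A = rng.standard_normal((n, N)) / np.sqrt(n)     # Gaussian random projection

print("distance before projection:", proj_fnorm_distance(U, V))
print("distance after  projection:", proj_fnorm_distance(A @ U, A @ V))
```
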
  • Minimum Error Entropy Kalman Filter
    B. Chen, L. Dang, Y. Gu, N. Zheng, and J. C. Principe
    IEEE Transactions on Systems, Man, and Cybernetics: Systems, December 20, 2019
    [IEEE] [Abstract]

    To date, most linear and nonlinear Kalman filters (KFs) have been developed under the Gaussian assumption and the well-known minimum mean square error (MMSE) criterion. In order to improve the robustness with respect to impulsive (or heavy-tailed) non-Gaussian noises, the maximum correntropy criterion (MCC) has recently been used to replace the MMSE criterion in developing several robust Kalman-type filters. To deal with more complicated non-Gaussian noises such as noises from multimodal distributions, in this article, we develop a new Kalman-type filter, called minimum error entropy KF (MEE-KF), by using the minimum error entropy (MEE) criterion instead of the MMSE or MCC. Similar to the MCC-based KFs, the proposed filter is also an online algorithm with the recursive process, in which the propagation equations are used to give prior estimates of the state and covariance matrix, and a fixed-point algorithm is used to update the posterior estimates. In addition, the MEE extended KF (MEE-EKF) is also developed for performance improvement in the nonlinear situations. The high accuracy and strong robustness of MEE-KF and MEE-EKF are confirmed by experimental results.

  • Correntropy Based Multiview Subspace Clustering
    L. Xing, B. Chen, S. Du, Y. Gu, and N. Zheng
    IEEE Transactions on Cybernetics, November 30, 2019
    [IEEE] [Abstract]

    Multiview subspace clustering, which aims to cluster the given data points with information from multiple sources or features into their underlying subspaces, has a wide range of applications in the communities of data mining and pattern recognition. Compared with the single-view subspace clustering, it is challenging to efficiently learn the structure of the representation matrix from each view and make use of the extra information embedded in multiple views. To address the two problems, a novel correntropy-based multiview subspace clustering (CMVSC) method is proposed in this article. The objective function of our model mainly includes two parts. The first part utilizes the Frobenius norm to efficiently estimate the dense connections between the points lying in the same subspace instead of following the standard compressive sensing approach. In the second part, the correntropy-induced metric (CIM) is introduced to characterize the noise in each view and utilize the information embedded in different views from an information-theoretic perspective. Furthermore, an efficient iterative algorithm based on the half-quadratic technique (HQ) and the alternating direction method of multipliers (ADMM) is developed to optimize the proposed joint learning problem, and extensive experimental results on six real-world multiview benchmarks demonstrate that the proposed methods can outperform several state-of-the-art multiview subspace clustering methods.

  • Probability Density Rank Based Quantization for Convex Universal Learning Machines
    Z. Qin, B. Chen, Y. Gu, N. Zheng, J. C. Principe
    IEEE Transactions on Neural Networks and Learning Systems, September 14, 2019
    [IEEE] [Abstract]

    The distributions of input data are very important for learning machines, such as the convex universal learning machines (CULMs). The CULMs are a family of universal learning machines with convex optimization. However, the computational complexity is a crucial problem in CULMs, because the dimension of the nonlinear mapping layer (the hidden layer) of the CULMs is usually rather large in complex system modeling. In this article, we propose an efficient quantization method called Probability density Rank-based Quantization (PRQ) to decrease the computational complexity of CULMs. The PRQ ranks the data according to the estimated probability densities and then selects a subset whose elements are equally spaced in the ranked data sequence. We apply the PRQ to kernel ridge regression (KRR) and random Fourier feature recursive least squares (RFF-RLS), which are two typical algorithms of CULMs. The proposed method not only keeps the similarity of data distribution between the code book and data set but also reduces the computational cost by using the kd-tree. Meanwhile, for a given data set, the method yields deterministic quantization results, and it can also exclude the outliers and avoid too many borders in the code book. This brings great convenience to practical applications of the CULMs. The proposed PRQ is evaluated on several real-world benchmark data sets. Experimental results show satisfactory performance of PRQ compared with some state-of-the-art methods.
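
A toy sketch of the quantization idea described above: rank samples by an estimated probability density and keep a subset equally spaced in the ranked sequence. This is a simplification, not the authors' implementation (in particular, no kd-tree is used here), and the kernel density estimator and sizes are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_rank_quantize(X, m):
    """Keep m samples of X (rows = samples), equally spaced in the density-ranked sequence."""
    density = gaussian_kde(X.T)(X.T)             # estimated density at every sample
    order = np.argsort(density)                  # rank samples by estimated density
    picks = order[np.linspace(0, len(X) - 1, m).astype(int)]
    return X[picks]

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 2))               # toy data set (illustrative)
codebook = density_rank_quantize(X, m=100)
print(codebook.shape)                            # (100, 2)
```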

2019


  • Weakly Convex Regularized Robust Sparse Recovery Methods with Theoretical Guarantees
    C. Yang, X. Shen, H. Ma, B. Chen, Y. Gu, and H.C. So
    IEEE Transactions on Signal Processing, 67(19):5046-5061, 2019
    [IEEE] [Abstract]

    Robust sparse signal recovery against impulsive noise is a core issue in many applications. Numerous methods have been proposed to recover the sparse signal from measurements corrupted by various impulsive noises, but most of them either lack theoretical guarantee for robust sparse recovery or are not efficient enough for large-scale problems. To this end, a general optimization problem for robust sparse signal recovery, which includes many existing works as concrete instances, is analyzed by a freshly defined Double Null Space Property (DNSP), and its solution is proved to be able to robustly reconstruct the sparse signal under mild conditions. Moreover, for computational tractability, weakly convex sparsity-inducing penalties are applied to the general problem, and properties of the solution to the resultant non-convex problem are further studied. Based on these properties, an algorithm named Robust Projected Generalized Gradient (RPGG) is devised to solve the weakly convex problem. Theoretical results prove that the sparse signal can be precisely reconstructed by RPGG from compressive measurements with sparse noise or robustly recovered from those with impulsive noise. Meanwhile, simulations demonstrate that RPGG with tuned parameters outperforms other robust sparse recovery algorithms.

  • Efficient Channel Estimation for mmWave MIMO with Transceiver Hardware Impairments
    Y. Wu, Y. Gu, and Z. Wang
    IEEE Transactions on Vehicular Technology, 68(10):9883-9895, 2019
    [IEEE] [Abstract]

    Transceiver hardware impairments (e.g., phase noise, high power amplifier nonlinearities, in-phase/quadrature-phase imbalance, and quantization errors) degrade the performance of channel estimation in millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems. Although compensation methods can be exploited to mitigate the impact of hardware impairments, there always remain residual impairments that will distort the training pilots and received signals. In this paper, we reformulate channel estimation with transceiver impairments into a sparse recovery problem from a Bayesian perspective and propose an efficient channel estimation algorithm. The proposed algorithm can effectively deal with the perturbation in the channel estimation problem caused by the transmitter hardware impairments with a small amount of computation. Simulation results demonstrate the superior performance of the proposed algorithm compared to the conventional orthogonal matching pursuit (OMP) based channel estimation method and Bayesian inference method.

  • Walk Proximal Gradient: An Energy Efficient Algorithm for Consensus Optimization
    X. Mao, Y. Gu, and W. Yin
    IEEE Internet of Things Journal, 6(2):2048-2060, 2019
    [IEEE] [Abstract]

    Decentralized computing is widely used for multi-agent systems since it works without a central computing node. In this paper, we develop a first-order algorithm for decentralized consensus optimization that is more energy efficient than the current state-of-the-art. Our algorithm is suitable for application scenarios such as networks of wireless sensors and Internet of things, where some agents have limited (battery) energy. We call our algorithm Walk Proximal Gradient (WPG), which passes a token through a walk (a succession of nodes) in the graph. The agents that are visited during the walk compute the gradients of their private functions and update the token. We analyze WPG where the walk is the repetition of a Hamiltonian cycle and show that the token converges to the consensual solution faster (in terms of energy consumption) than existing gradient-based decentralized methods. We also generalize the analysis to the non-Hamiltonian graphs. Numerical experiments are presented to validate the energy efficiency of our algorithm.

    [pdf]
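
A schematic sketch of the token-passing pattern described above, with plain incremental gradient steps along a Hamiltonian cycle over agents holding private least-squares terms. It illustrates only the communication pattern, not the exact WPG update or its step-size rule; all problem data and parameters are illustrative assumptions.

```python
import numpy as np

# Token passing along a Hamiltonian cycle: each visited agent i holds a private
# least-squares term f_i(x) = 0.5 * ||A_i x - b_i||^2 and applies a gradient step
# to the token before passing it on. (Schematic only; not the exact WPG update.)
rng = np.random.default_rng(0)
n_agents, dim = 10, 5
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]
x_star = rng.standard_normal(dim)                # common minimizer of all private terms
b = [Ai @ x_star for Ai in A]

token = np.zeros(dim)
step = 0.01
cycle = list(range(n_agents))                    # a Hamiltonian cycle over the agents
for _ in range(200):                             # repeat the cycle
    for i in cycle:                              # the token visits one agent per step
        grad_i = A[i].T @ (A[i] @ token - b[i])
        token = token - step * grad_i            # the visited agent updates the token
print("distance to the consensus solution:", np.linalg.norm(token - x_star))
```
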
  • Linearized ADMM for Nonconvex Nonsmooth Optimization with Convergence Analysis
    Q. Liu, X. Shen, and Y. Gu
    IEEE Access, 7(1): 76131-76144, 2019
    [IEEE] [Abstract]

    Linearized alternating direction method of multipliers (ADMM), as an extension of ADMM, has been widely used to solve linearly constrained problems in signal processing, machine learning, communications, and many other fields. Despite its broad applications in nonconvex optimization, for a great number of nonconvex and nonsmooth objective functions its theoretical convergence guarantee is still an open problem. In this paper, we propose a two-block linearized ADMM and a multi-block parallel linearized ADMM for problems with nonconvex and nonsmooth objectives. Mathematically, we show that the algorithms converge for a broader class of objective functions under less strict assumptions than previous works. Furthermore, our proposed algorithm can update coupled variables in parallel and work for less restrictive nonconvex problems, where the traditional ADMM may have difficulties in solving subproblems.

    [arXiv] [pdf]
  • Kernel Kalman Filtering with Conditional Embedding and Maximum Correntropy Criterion
    L. Dang, B. Chen, S. Wang, Y. Gu, and J. C. Principe
    IEEE Transactions on Circuits and Systems - I: Regular Papers, 66(11):4265-4277, 2019
    [IEEE] [Abstract]

    The Hilbert space embedding provides a powerful and flexible tool for dealing with the nonlinearity and high-order statistics of random variables in a dynamical system. The kernel Kalman filtering based on the conditional embedding operator (KKF-CEO) shows significant performance improvements over the traditional Kalman filters in noisy nonlinear time-series prediction. However, KKF-CEO based on the minimum mean-square-error (MMSE) criterion is sensitive to outliers or heavy-tailed noises. In contrast to the MMSE criterion, the maximum correntropy criterion (MCC) can achieve more robust performance in the presence of outliers. In this paper, we develop a novel kernel Kalman-type filter based on MCC, referred to as kernel Kalman filtering with conditional embedding operator and maximum correntropy criterion (KKF-CEO-MCC). The proposed KKF-CEO-MCC can capture higher order statistics of errors and is robust to outliers. In addition, two simplified versions of KKF-CEO-MCC are developed, namely, KKF-CEO-MCC-O and KKF-CEO-MCC-NA. The former is an online approach and the latter is based on Nyström approximation. Simulations on noisy nonlinear time-series prediction confirm the desirable accuracy and robustness of the new filters.

  • Compressive-Sensing-Based Data Aggregation Approaches for Dynamic WSNs
    X. D. Wang, Q. F. Zhou, Y. Gu, and J. Tong
    IEEE Communications Letters, 23(6):1073-1076, 2019
    [IEEE] [Abstract]

    Among various data aggregation approaches proposed for wireless sensor networks (WSNs), the one based on compressive sensing (CS) has the merit of low traffic cost. The key step in designing a CS-based data aggregation protocol is to construct a measurement matrix $\Phi$ based on the network structure and to assign each node a unique column vector of $\Phi$. In an expanded scenario where some new nodes join the network, the data aggregation scheme has to entirely re-generate a new matrix $\Phi'$ with a larger size to meet the node number and CS property simultaneously. Apparently, it is energy-consuming to reallocate the weight vectors from the new measurement matrix to all the nodes. Thus, we propose an approach which keeps the weight vectors of existing sensors unchanged and assigns optimized measurement vectors only to the newly added nodes. In order to solve the relevant non-convex optimization problem, two efficient methods with good data aggregation performance are proposed. Numerical experiments validate the effectiveness of our proposed methods.

  • Smoothed Sparse Recovery via Locally Competitive Algorithm and Forward Euler Discretization Method
    Q. Liu, Y. Gu, and H. C. So
    Signal Processing, 157:97-102, 2019
    [ELSEVIER] [Abstract]

    This paper considers the problem of sparse recovery whose optimization cost function is a linear combination of a nonsmooth sparsity-inducing term and an $\ell_2$ norm as the metric for the residual error. Since the resultant sparse approximation involves nondifferentiable functions, locally competitive algorithm and forward Euler discretization method are exploited to approximate the nonsmooth objective function, yielding a smooth optimization problem. Alternating direction method of multipliers is then applied as the solver, and Nesterov acceleration trick is integrated for speeding up the computation process. Numerical simulations demonstrate the superiority of the proposed method over several popular sparse recovery schemes in terms of computational complexity and support recovery.

    [pdf]
  • DOA Estimation in Impulsive Noise via Low-Rank Matrix Approximation and Weakly Convex Optimization
    Q. Liu, Y. Gu, and H. C. So
    IEEE Transactions on Aerospace and Electronic Systems, 55(6):3603-3616, 2019
    [IEEE] [Abstract]

    Conventional direction-of-arrival (DOA) estimators are vulnerable to impulsive noise. In this work, to tackle this issue, a class of weakly convex-inducing penalties is introduced for robust DOA estimation via low-rank matrix approximation, where the $\ell_{2,1}$-norm is adopted as the metric for suppressing the outliers. Two iterative algorithms are developed to construct the noise-free data matrix. To avoid determining the number of sources, the DOAs are estimated by exploiting the special joint diagonalization structure of the constructed signal covariance matrix. Compared with several existing algorithms, the proposed methods enjoy faster computation and similar DOA estimation performance against impulsive noise, while requiring no a priori information about the source number. Numerical experiments are included to demonstrate the outlier-resistance of our solutions.

  • Pilot-Free Channel Change Detection for mmWave Massive MIMO System
    Y. Wu, Y. Jiao, F. Gao, and Y. Gu
    IEEE Global Communications Conference (GLOBECOM), December 9-13, 2019, Waikoloa, HI, USA
    [Abstract]

    Millimeter wave (mmWave) communication is a promising technology for future outdoor cellular systems. However, abrupt channel changes are more prone to occur over mmWave bands compared to conventional frequencies, which will degrade the system performance. Hence, it is important to detect the channel change accurately in preparation for re-estimating the channel state information (CSI). In this paper, we attempt to solve the channel change problem from the perspective of change-point detection. By exploiting the low-rank characteristic of the mmWave massive multiple-input multiple-output (MIMO) channel, the received signals can be viewed as data points in a subspace superimposed with noise. Then a cumulative sum (CUSUM)-based subspace change-point detection algorithm is designed to detect the mmWave channel change. The method can work without the aid of pilot signals and is of low computational complexity and memory requirement. Results are provided to validate the satisfactory performance of the proposed method.

  • Why Subspace Clustering Works on Compressed Data?
    L. Meng, G. Li, and Y. Gu
    IEEE International Conference on Signal, Information and Data Processing, December 11-13, 2019, Chongqing, China
    [Abstract]

    Subspace clustering (SC) refers to the problem of clustering unlabeled high-dimensional data into a union of low-dimensional subspaces. In many practical scenarios, one may only have access to the compressed data due to constraints of measurement or computation. In this paper, we introduce a general framework for analyzing the performance of various subspace clustering algorithms when applied to the compressed data. Our framework captures the connection between the problems of compressed subspace clustering (CSC) and noisy subspace clustering. With the existing study of noisy SC, our framework makes it possible to easily extend the results in noisy SC to CSC. In this paper, we apply the framework to the most commonly used sparse subspace clustering (SSC) algorithm and analyze its performance. Finally, the practicability and efficiency of CSC are verified by numerical experiments.

  • Properties of Bernoulli Random Projection for Embedding Subspaces
    Q. Liu, G. Li, and Y. Gu
    IEEE International Conference on Signal, Information and Data Processing, December 11-13, 2019, Chongqing, China
    [Abstract]

    In our early work, we proved that the affinity (distance) between two subspaces after Gaussian random projection will concentrate around an estimate with overwhelming probability. In this work, we extend the previous results to the case of Bernoulli random projection, which is essentially different from the Gaussian case because it lacks the property of rotation invariance. Specifically, we prove that the affinity between two subspaces after Bernoulli random projection will concentrate around an estimate with overwhelming probability. Our work differs from previous works on RIP (Restricted Isometry Property), because we study the distance (affinity) between linear subspaces before and after random projection, while previous works mainly studied the distance between points.

  • Information Theoretic Lower Bound of Restricted Isometry Property Constant
    G. Li, J. Yan, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 5297-5301, May 12-17, 2019, Brighton, UK 
    [IEEE] [Abstract]

    Compressed sensing seeks to recover an unknown sparse vector from undersampled measurements. Since its introduction, there have been enormous works on compressed sensing that develop efficient algorithms for sparse signal recovery. The restricted isometry property (RIP) has become the dominant tool used for the analysis of exact reconstruction from seemingly undersampled measurements. Although the upper bound of the RIP constant has been studied extensively, as far as we know, the corresponding result is missing for the lower bound. In this work, we first present a tight lower bound for the RIP constant, filling this gap. The lower bound is of the same order as the upper bound for the RIP constant. Moreover, we also show that our lower bound is close to the upper bound by numerical simulations. Our bound on the RIP constant provides an information-theoretic lower bound on the sampling rate for the first time, which is the essential question for practitioners.

  • An Efficient Algorithm for Hyperspectral Image Clustering
    Y. Pan, Y. Jiao, T. Li, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2167-2171, May 12-17, 2019, Brighton, UK 
    [IEEE] [Abstract]

    The hyperspectral image (HSI) clustering problem is a challenging and valuable task due to its inherent complexity and abundant spectral information. Sparse subspace clustering (SSC) and SSC-based methods are widely used for this problem and demonstrate excellent performance. However, considering that HSIs are usually of high dimension, these methods incur high computational complexity because of the use of SSC. To solve this problem, we propose a novel approach called SuperPixel and Angle-based HyperSpectral Image Clustering (SPAHSIC). It first extracts the local spectral and spatial information between pixels by superpixel segmentation, and then applies spectral clustering on a similarity matrix built from subspace principal angles. We conduct experiments on real datasets and obtain high accuracy, which indicates the effectiveness of our algorithm.

  • A Block Sparsity Based Channel Estimation Technique for mmWave Massive MIMO with Beam Squint Effect
    M. Wang, F. Gao, Y. Gu, M. F. Flanagan
    IEEE International Conference on Communications (ICC), May 20-24, 2019, Shanghai, China 
    [IEEE] [Abstract]

    Multiple-input multiple-output (MIMO) millimeter wave (mmWave) communication is a key technology for next generation wireless networks. As the number of antennas becomes larger and the transmission bandwidth becomes wider, the array steering vectors would vary at different subcarriers, causing the beam squint effect. In this case, the conventional channel model is no longer applicable, especially for the mmWave massive MIMO system. In this paper, we first explain the influence of the beam squint effect from the array signal processing perspective and then investigate the angle-delay sparsity of mmWave transmission. We next design a compressive sensing (CS) algorithm based on shift-invariant block-sparsity that can jointly compute the off-grid angles, the off-grid delays, and the complex gains of the multi-path channel. Compared to either the conventional channel model, or the existing on-grid algorithms, the proposed one more accurately reflects the mmWave channel and is shown to yield better performance of uplink channel estimation.

  • Enhanced Streaming Based Subspace Clustering Applied to Acoustic Scene Data Clustering
    S. Li, Y. Gu, Y. Luo, J. Chambers, and W. Wang
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 11-15, May 12-17, 2019, Brighton, UK 
    [IEEE] [Abstract]

    Labelled data are often required to train an acoustic scene classification system. However, it is time-consuming and expensive to label the data manually. An unsupervised clustering algorithm can be used to facilitate the labelling process by dividing the acoustic data into different categories. Nevertheless, it can be problematic to run a clustering algorithm with growing data volume and dimension due to the sharp increase in the computational and memory costs. We propose a new streaming based subspace clustering algorithm which allows the data to be clustered on the fly, and also resolves data points in the overlapping regions of two subspaces by augmenting the learned low-rank representation with the original data samples. Experimental results show that our method can achieve the clustering objective for overwhelmingly high-volume data in an online fashion, while retaining good accuracy and reducing the memory cost significantly.

2018


  • A General Framework for Understanding Compressed Subspace Clustering Algorithms
    L. Meng, G. Li, J. Yan, and Y. Gu
    IEEE Journal of Selected Topics in Signal Processing, 12(6), December, 2018
    [IEEE] [Abstract]

    Subspace clustering (SC) refers to the problem of clustering unlabeled high-dimensional data into a union of low-dimensional linear subspaces. In many practical scenarios, one may only have access to the compressed data due to constraints of measurement or computation. In this paper, based on the recently proposed restricted isometric property (RIP) of Gaussian random projection for low-dimensional subspaces, we propose a general framework for analyzing the performance of various subspace clustering algorithms when applied to the compressed data. Our framework captures the connection between the problem of compressed subspace clustering (CSC) and the problem of noisy subspace clustering. With the existing study of noisy SC, our framework makes it possible to easily extend the results in noisy SC to CSC. In this paper, we select the most commonly used subspace clustering algorithms, i.e. sparse SC (SSC), SSC-OMP, and thresholding based SC (TSC), as representatives and analyze their performance using the proposed framework. Our results are consistent with related work obtained by previous researchers, with our methodology being much more direct and universal. Finally, the practicability and efficiency of CSC are verified by numerical experiments.

    [pdf]
  • Subspace Change-Point Detection: A New Model and Solution
    Y. Jiao, Y. Chen, and Y. Gu
    IEEE Journal of Selected Topics in Signal Processing, 12(6), December 2018
    [IEEE] [Abstract]

    Change-point detection has a long history of research in statistical signal processing, and remains a fundamental problem in many real-world applications involving information extraction from streaming data. Exploiting low-dimensional structures (subspace in particular) of high-dimensional signals is another research topic of significance, as it improves not only efficiency in computation, storage, and communication, but also interpretability and understanding of data. In this paper, we formulate the problem of subspace change-point detection, where a stream of high-dimensional data points lie on a low-dimensional subspace that abruptly changes at a certain time instant. Then, we propose a cumulative sum (CUSUM)-based algorithm for the task of detection in this problem setup. From algorithmic perspective, the proposed method is computationally efficient, as it proceeds in an online and recursive manner, and each iteration involves simple matrix computation; it is also practical and flexible, as it works under mild conditions on the distribution of data points after change-point. From theoretical perspective, we derive rigorous performance bounds of the proposed algorithm, in terms of false-alarm rate and detection delay. We also conduct extensive numerical experiments on both synthetic and real-world data to validate effectiveness of our algorithm and correctness of theoretical analysis.

    [pdf] [code] [data1] [data2] [data3]
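
A simplified sketch of the general CUSUM idea on streaming subspace data: accumulate the excess energy of each new sample outside a reference subspace and declare a change when the cumulative statistic exceeds a threshold. This is not the paper's exact statistic, drift, or threshold rule; all parameters and the synthetic stream are assumptions.

```python
import numpy as np

def cusum_subspace_detector(stream, U_ref, drift, threshold):
    """Return the first index where the CUSUM of residual energy outside span(U_ref)
    exceeds `threshold`; `drift` keeps the statistic near zero before the change."""
    Q = np.linalg.qr(U_ref)[0]
    stat = 0.0
    for t, x in enumerate(stream):
        residual = x - Q @ (Q.T @ x)             # component outside the reference subspace
        stat = max(0.0, stat + residual @ residual - drift)
        if stat > threshold:
            return t
    return None

rng = np.random.default_rng(0)
N, d, noise = 50, 3, 0.05
U0, U1 = rng.standard_normal((N, d)), rng.standard_normal((N, d))
pre  = [U0 @ rng.standard_normal(d) + noise * rng.standard_normal(N) for _ in range(100)]
post = [U1 @ rng.standard_normal(d) + noise * rng.standard_normal(N) for _ in range(100)]
print("declared change at index:",
      cusum_subspace_detector(pre + post, U0, drift=0.5, threshold=10.0))
```
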
  • Spatio-Temporal Signal Recovery Based on Low Rank and Differential Smoothness
    X. Mao, K. Qiu, T. Li, and Y. Gu
    IEEE Transactions on Signal Processing, 66(23):6281-6296, 2018
    [IEEE] [Abstract]

    The analysis of spatio-temporal signals plays an important role in various fields including sociology, climatology, and environmental studies. Due to abrupt breakdowns of the sensors in a sensor network, there are always missing entries in the observed spatio-temporal signals. In this paper, we study the problem of recovering spatio-temporal signals from partially known entries. Based on both the global and local correlation properties of spatio-temporal signals, we propose a Low Rank and Differential Smoothness-based recovery method (LRDS), which introduces the differential smoothness prior of time-varying graph signals to the field of spatio-temporal signal analysis. The performance of the proposed method is analyzed theoretically. Considering the case where a priori information about the signal's global pattern is available, we propose Prior LRDS (PLRDS) to further improve the reconstruction accuracy. Such improvement is also verified by synthetic experiments. Besides, experiments on several real-world datasets demonstrate the improvement in recovery accuracy of the proposed LRDS over the state-of-the-art spatio-temporal signal recovery methods.

    [pdf] [code] [data]
  • Sparse Recovery Conditions and Performance Bounds for Lp-Minimization
    C. Yang, X. Shen, H. Ma, Y. Gu, and H. C. So
    IEEE Transactions on Signal Processing, 66(19):5014-5028, 2018
    [IEEE] [Abstract]

    In sparse recovery, a sparse signal ${\bf x}\in \mathbb{R}^N$ with $K$ non-zero entries is to be reconstructed from a compressed measurement $\bf y=Ax$ with ${\bf A}\in \mathbb{R}^{M\times N}$ ($M < N$). The $\ell_p$ $(0\le p <1)$ pseudo norm has been found to be a sparsity inducing function superior to the $\ell_1$ norm, and the Null Space Constant (NSC) and Restricted Isometry Constant (RIC) have been used as key notions in the performance analyses of the corresponding $\ell_p$-minimization. In this work, we study sparse recovery conditions and performance bounds for the $\ell_p$-minimization. We devise a new NSC upper bound which outperforms the state-of-the-art result. Based on the improved NSC upper bound, we provide a new RIC upper bound dependent on the sparsity level $K$ as a sufficient condition for precise recovery, and it is tighter than the existing bound for small $K$. Then we study the largest choice of $p$ for the $\ell_p$-minimization problem to recover any $K$-sparse signal, and the largest recoverable $K$ for a fixed $p$. Numerical experiments demonstrate the improvement of the proposed bounds in the recovery conditions over the up-to-date counterparts.

    [pdf]
  • Nonconvex Sparse Logistic Regression with Weakly Convex Regularization
    X. Shen and Y. Gu
    IEEE Transactions on Signal Processing, 66(12):3199-3211, June 2018
    [IEEE] [Abstract]

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding problem, and study its local optimality conditions and the choice of the regularization parameter. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a local optimality condition and a bound on the logistic loss at a local optimum are provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and a Nesterov acceleration is used with a convergence guarantee. Its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.

    [pdf] [arXiv] [code]
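
A minimal sketch of a firm-shrinkage (firm-thresholding) operator and a plain proximal-gradient pass over the logistic loss, as a simplified instance of the iterative firm-shrinkage idea mentioned above; no Nesterov acceleration is included, and the thresholds, step size, and data are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def firm_shrinkage(x, lam, mu):
    """Firm-thresholding operator: zero below lam, identity above mu, linear in between."""
    out = np.where(np.abs(x) <= lam, 0.0,
                   np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam))
    return np.where(np.abs(x) > mu, x, out)

def sparse_logistic_ifs(X, y, lam=0.5, mu=2.0, step=0.1, n_iter=500):
    """Proximal-gradient (iterative firm-shrinkage) pass for sparse logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # logistic probabilities
        grad = X.T @ (p - y) / len(y)            # gradient of the average logistic loss
        w = firm_shrinkage(w - step * grad, step * lam, mu)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50))
w_true = np.zeros(50)
w_true[:5] = 2.0
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
print("indices of nonzero coefficients:", np.nonzero(sparse_logistic_ifs(X, y))[0])
```
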
  • Robust Sparse Recovery via Weakly Convex Optimization in Impulsive Noise
    Q. Liu, C. Yang, Y. Gu, and H. C. So
    Signal Processing, 152:84-89, 2018
    [ELSEVIER] [Abstract]

    We propose a robust sparse recovery formulation in impulsive noise, where the $\ell_1$ norm is employed as the metric for the residual error and a class of weakly convex functions is used for inducing sparsity. To solve the corresponding nonconvex and nonsmooth minimization, a slack variable is introduced to guarantee the convexity of the equivalent optimization problem in each block of variables. An efficient algorithm is developed for minimizing the surrogate Lagrangian based on the alternating direction method of multipliers. Model analysis guarantees that this novel robust sparse recovery formulation attains the global optimum. Compared with several state-of-the-art algorithms, our method attains better recovery performance in the presence of outliers.

    [pdf]
  • Restricted Isometry Property of Gaussian Random Projection for Finite Set of Subspaces
    G. Li and Y. Gu
    IEEE Transactions on Signal Processing, 66(7):1705-1720, April 2018
    [IEEE] [Abstract]

    Dimension reduction plays an essential role when decreasing the complexity of solving large-scale problems. The well-known Johnson-Lindenstrauss (JL) Lemma and Restricted Isometry Property (RIP) admit the use of random projection to reduce the dimension while keeping the Euclidean distance, which leads to the boom of Compressed Sensing and the field of sparsity related signal processing. Recently, successful applications of sparse models in computer vision and machine learning have increasingly hinted that the underlying structure of high dimensional data looks more like a union of subspaces (UoS). In this paper, motivated by JL Lemma and an emerging field of Compressed Subspace Clustering (CSC), we study for the first time the RIP of Gaussian random matrices for the compression of two subspaces based on the generalized projection $F$-norm distance. We theoretically prove that with high probability the \emph{affinity} or \emph{distance} between two projected subspaces are concentrated around their estimates. When the ambient dimension after projection is sufficiently large, the affinity and distance between two subspaces almost remain unchanged after random projection. Numerical experiments verify the theoretical work.

    [pdf] [arXiv]
  • A RIP-Based Performance Guarantee of Covariance-Assisted Matching
    J. Wang, G. Li, L. Rencker, W. Wang, and Y. Gu
    IEEE Signal Processing Letters, 25(6):828-832, June 2018
    [IEEE] [Abstract]

    An OMP-like Covariance-Assisted Matching Pursuit (CAMP) method has recently been proposed. Given a prior-knowledge of the covariance and mean of the sparse coefficients, CAMP balances the least squares estimator and the prior-knowledge by leveraging the Gauss-Markov theorem. In this letter, we study the performance of CAMP in the framework of restricted isometry property (RIP). It is shown that under some conditions on RIP and the minimum magnitude of the nonzero elements of the sparse signal, CAMP with sparse level $K$ can recover the exact support of the sparse signal from noisy measurements. $l_2$ bounded noise and Gaussian noise are considered in our analysis. We also discuss the extreme conditions of noise (e.g. the noise power is infinite) to simply show the stability of CAMP.

    [pdf]
  • Active Orthogonal Matching Pursuit for Sparse Subspace Clustering
    Y. Chen, G. Li, and Y. Gu
    IEEE Signal Processing Letters, 25(2):164 - 168, Feb. 2018
    [IEEE] [Abstract]

    Sparse Subspace Clustering (SSC) is a state-of-the-art method for clustering high-dimensional data points lying in a union of low-dimensional subspaces. However, while $\ell_1$ optimization-based SSC algorithms suffer from high computational complexity, other variants of SSC, such as Orthogonal Matching Pursuit-based SSC (OMP-SSC), lose clustering accuracy in pursuit of improving time efficiency. In this letter, we propose a novel Active OMP-SSC, which improves clustering accuracy of OMP-SSC by adaptively updating data points and randomly dropping data points in the OMP process, while still enjoying the low computational complexity of greedy pursuit algorithms. We provide heuristic analysis of our approach, and explain how these two active steps achieve a better tradeoff between connectivity and separation. Numerical results on both synthetic data and real-world data validate our analyses and show the advantages of the proposed active algorithm.

    [pdf] [code]
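
A compact sketch of the plain OMP-SSC pipeline that this letter builds on: each point is sparsely self-represented by OMP over the remaining points, and spectral clustering is run on the resulting affinity. The 'active' updating and dropping steps of the proposed method are not included, and the sparsity level and synthetic data are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.cluster import SpectralClustering

def omp_ssc(X, n_clusters, k):
    """Plain OMP-SSC: self-represent each column of X with OMP, then spectrally cluster."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        omp.fit(X[:, others], X[:, i])
        C[others, i] = omp.coef_
    W = np.abs(C) + np.abs(C).T                  # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)

rng = np.random.default_rng(0)
d, D, per = 3, 30, 40                            # subspace dim, ambient dim, points per subspace
X = np.hstack([rng.standard_normal((D, d)) @ rng.standard_normal((d, per)) for _ in range(3)])
X /= np.linalg.norm(X, axis=0)                   # normalize columns
print(omp_ssc(X, n_clusters=3, k=d))
```
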
  • Channel Estimation for mmWave MIMO with Transmitter Hardware Impairments
    Y. Wu, Y. Gu, and Z. Wang
    IEEE Communications Letters, 22(2):320 - 323, Feb. 2018
    [IEEE] [Abstract]

    This paper considers the problem of channel estimation for millimeter wave (mmWave) multiple input multiple output (MIMO) systems under a transmitter impairments model. Specifically, taking the transmitter hardware impairments into account, the performance of conventional pilots-based channel estimation scheme will be degraded due to the destroyed training pilots. By exploiting the sparsity of mmWave channel in angular domain, a new channel estimation algorithm based on the Bayesian compressive sensing (BCS) and least square estimation (LSE) is proposed. First, the expectation maximization (EM) algorithm is presented to solve the BCS problem, and the refined measurement matrix and the support of the channel vector are obtained. Next, the channel gain coefficients are estimated by using the LSE. Simulation results show that the proposed algorithm can achieve better performance compared to the conventional BCS and orthogonal matching pursuit algorithm (OMP).

    [pdf] [code]
  • Restricted Isometry Property for Low-dimensional Subspaces and Its Application in Compressed Subspace Clustering
    G. Li, Q. Liu, and Y. Gu
    IEEE Data Science Workshop (DSW), 86-90, June 4-6, 2018, EPFL, Lausanne, Switzerland
    [IEEE] [Abstract]

    Utilizing random matrices to reduce the dimension of data has become an attractive method in signal processing and machine learning since the boom of Compressed Sensing. One important example is compressed subspace clustering (CSC), a powerful unsupervised learning algorithm, which performs subspace clustering after random projection. In our previous work, motivated by the importance of affinity in CSC and the conjecture about whether the similarity (distance) between any two given subspaces can remain almost unchanged after random projection, we first proved the restricted isometry property of Gaussian random matrices for compressing subspaces, providing a strong theoretical guarantee for the performance of CSC. However, the estimated probability bound in that work does not match well with the forms of RIP in other fields, e.g., compressed sensing, because the analysis techniques we used were too coarse. To address this issue, we rigorously derive a nearly optimal probability bound in this paper, which can provide a more solid theoretical foundation for CSC and other subspace-related problems.

  • Subspace Data Visualization with Dissimilarity Based on Principal Angle
    X. Shen, Y. Jiao, and Y. Gu
    IEEE Data Science Workshop (DSW), accepted, June 4-6, 2018, EPFL, Lausanne, Switzerland
    [IEEE] [Abstract]

    In this work, we aim to visualize data points distributing on a union of subspaces in a two or three dimensional plot to demonstrate both the pairwise angles in the dataset and the relation among the latent subspaces. Our main contribution is a definition of dissimilarity among data, which is based on both the angles between every pair of data points and the principal angles between every data point and each of these intrinsic subspaces. The defined dissimilarity depicts both the inter-class and the intra-class angular structures, and is proved to be a semi-metric with a relaxed triangle inequality. The intrinsic subspaces are obtained in the preprocessing step by subspace clustering, and the defined dissimilarity matrix can be directly visualized by multidimensional scaling. The effectiveness is verified in numerical experiments on both synthetic data and real datasets of facial images and human motions, which are visualized into several subspace clusters, and both the sequence order within every cluster and the relation among different clusters are well illustrated.

  • Subspace Principal Angle Preserving Property of Gaussian Random Projection
    Y. Jiao, X. Shen, G. Li, and Y. Gu
    IEEE Data Science Workshop (DSW), 115-119, June 4-6, 2018, EPFL, Lausanne, Switzerland
    [IEEE] [Abstract]

    With the increase of the dimension of data, dimensionality reduction, such as random projection, becomes necessary. In many cases, the underlying structure of data is a union of subspaces, and the similarity between subspaces can be well described by principal angles. Motivated by the conjecture whether the principal angles between subspaces can remain almost unchanged after Gaussian random projection, in this work, we prove that the principal angle preserving property holds with probability exponentially close to $1$. We use more advanced techniques compared with our previous work, and the improved conclusion is more rigorous in theory and more useful in practice. Principal angles are so essential in the relation between subspaces that many works on algorithms for data mining regarding subspaces are based on principal angles, so this work may contribute to extending them to compressive scenarios, in which the computational complexity can be fundamentally reduced due to the random dimensionality reduction. Experiments on real datasets verify that with a compression ratio as small as $0.05$ the principal angles between subspaces after random projection can be sufficiently close to the original principal angles.

  • A Novel Backbone Network Anomaly Detector via Clustering in Sketch Space
    Y. Liu and Y. Gu
    IEEE Data Science Workshop (DSW), 31-35, June 4-6, 2018, EPFL, Lausanne, Switzerland
    [IEEE] [Abstract]

    Network anomaly detection refers to automatically identifying the flows associated with anomalous events, which is the first line of defense against attacks in network security system. This paper proposes a new reliable unsupervised detector Multiple Sketches and Clustering (MSC) combining sketches (random projections) and clustering to blindly identify anomalies without relying on signatures or labeled traffic data. Multiple random projections and voting strategy alleviate the potential misclassification of single analysis and ensure a lower false positive rate. The K-means++ clustering detection detects both known and unknown anomalies accurately and triggers a higher true positive rate without any prior information about traffic data or anomaly patterns. The results on the MAWILAB backbone network dataset reveal that the novel detector outperforms the state-of-the-art detectors.

  • Outage Probability Conjecture Does Not Hold for Two-Input-Multiple-Output (TIMO) System
    G. Li, J. Yan, and Y. Gu
    IEEE International Symposium on Information Theory (ISIT), 1345-1349, June 17-22, 2018, Colorado, USA
    [IEEE] [Abstract]

    Multiple-Input-Multiple-Output (MIMO) communication systems have seen wide application due to their performance benefits such as multiplexing gain. For MIMO systems with a non-ergodic Gaussian channel, a conjecture regarding their outage probability was proposed by Telatar in [1] and has long been considered true. A special single-output case of this conjecture has been proved theoretically. In this work, we address the special Two-Input-Multiple-Output (TIMO) case, and show that the conjecture is untrue. A concrete counterexample is proposed and verified both theoretically and by numerical experiments. This result rejects the decades-long conjecture and provides interesting insight into the symmetry of MIMO systems.

    [pdf] [code]
  • Convergence Analysis on A Fast Iterative Phase Retrieval Algorithm without Independence Assumption
    G. Li, Y. Jiao, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 4624-4628, April 15-20, 2018, Calgary, Canada 
    [IEEE] [Abstract]

    Phase retrieval has been an attractive problem, and many theoretical works on the convergence behavior of iterative algorithms for phase retrieval have been proposed. However, they all assume that the iteratively updated variables and the original data are independent, which is not true in reality. In this paper, we study for the first time the convergence analysis of a fast iterative method for phase retrieval in the real case, where only a finite number of measurements are available. Specifically, we theoretically prove the linear rate of convergence for the phase retrieval via randomized Kaczmarz algorithm without independence assumption.

    [pdf] [poster]
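
A small sketch of the randomized Kaczmarz iteration for real-valued phase retrieval analyzed above: each step projects the iterate onto the hyperplane consistent with one measured magnitude and the current sign guess. The spectral-type initialization and all sizes are common-practice assumptions, not necessarily the setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 640                                    # signal length, number of measurements (illustrative)
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.abs(A @ x_true)                            # magnitude-only measurements

# Spectral-type initialization: leading eigenvector of a magnitude-weighted covariance.
Y = (A * (b ** 2)[:, None]).T @ A / m
x = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(np.mean(b ** 2))

row_norms_sq = np.sum(A ** 2, axis=1)
for _ in range(50 * m):
    i = rng.integers(m)                           # pick a random measurement row
    r = A[i] @ x
    # Project onto the hyperplane consistent with the magnitude and the current sign guess.
    x = x + ((np.sign(r) * b[i] - r) / row_norms_sq[i]) * A[i]

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))   # up to a global sign
print("relative error:", err / np.linalg.norm(x_true))
```
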
  • Change-point Detection of Gaussian Graph Signal with Partial Information
    Y. Chen, X. Mao, D. Ling, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3934-3938, April 15-20, 2018, Calgary, Canada 
    [IEEE] [Abstract]

    In a change-point detection problem, a sequence of signals switches from one distribution to another at an unknown time step, and the goal is to quickly and reliably detect this change. By providing new insight into signal processing and data analysis, graph signal processing promises various applications including image processing and sensor network analysis, and becomes an emerging field of research. In this work, we formulate the problem of change-point detection on graph. Under the reasonable assumption of normality, we propose a CUSUM-based algorithm for change-point detection with an arbitrary, unknown and perhaps time-varying mean shift after the change-point. We further propose a decentralized, distributed algorithm, which requires no fusion center, to reduce computational complexity, as well as costs and delays of communication. Numerical results on both synthetic and real-world data demonstrate that our algorithms are efficient and accurate.

    [pdf] [slides] [code]
  • A Joint Detection and Reconstruction Method for Blind Graph Signal Recovery
    X. Mao and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 4184-4188, April 15-20, 2018, Calgary, Canada 
    [IEEE] [Abstract]

    Sampling and reconstruction is a fundamental problem in the field of graph signal processing. Many works have been devoted to reconstructing bandlimited signals from measurements taken on a known subset of vertices. However, in some cases vertex defects occur randomly over the graph, and existing graph signal reconstruction methods cannot handle this blind reconstruction problem. In this paper, we formulate the blind reconstruction problem as a Mixed-Integer Nonlinear Program and propose a Joint Detection and Reconstruction (JDR) method to simultaneously detect the vertices' working states and reconstruct the bandlimited signal. The convergence of the proposed method is analyzed. Experiments on both synthetic and real-world datasets verify the proposed method.

    [pdf] [poster]
  • Nonconvex Sparse Logistic Regression via Proximal Gradient Descent
    X. Shen and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 4079-4083, April 15-20, 2018, Calgary, Canada 
    [IEEE] [Abstract]

    In this work we propose to fit a sparse logistic regression model via a nonconvex optimization problem with a weakly convex regularizer. The idea is based on the finding that a weakly convex approximation of the $\ell_0$ pseudo-norm induces sparsity better than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity-inducing functions, despite the nonconvexity, the proposed algorithm is based on proximal gradient descent, which allows the use of convergence acceleration techniques and stochastic gradients. The general framework is then applied to a specific weakly convex function, and the solution method is instantiated as an iterative firm-shrinkage algorithm, whose effectiveness is demonstrated in numerical experiments.

    [pdf] [slides]
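
    A minimal proximal-gradient sketch in the spirit of the iterative firm-shrinkage algorithm described above. The firm-thresholding operator and the parameter choices are illustrative; the paper's exact penalty, step-size rule, and acceleration are not reproduced here.

```python
# Proximal gradient for sparse logistic regression with a firm-shrinkage prox step
# (illustrative sketch; parameters are placeholders and assume mu > step * lam).
import numpy as np

def firm_shrink(v, lam, mu):
    """Firm thresholding: zero below lam, identity above mu, linear in between."""
    a = np.abs(v)
    return np.where(a <= lam, 0.0,
           np.where(a <= mu, np.sign(v) * mu * (a - lam) / (mu - lam), v))

def sparse_logreg(A, y, lam=0.05, mu=0.5, n_iter=500):
    """A: (m, n) features, y: (m,) labels in {-1, +1}; returns a sparse weight vector."""
    m, n = A.shape
    w = np.zeros(n)
    step = 4.0 * m / np.linalg.norm(A, 2) ** 2           # 1/L for the logistic loss
    for _ in range(n_iter):
        margins = y * (A @ w)
        grad = -(A.T @ (y / (1.0 + np.exp(margins)))) / m   # logistic loss gradient
        w = firm_shrink(w - step * grad, step * lam, mu)    # nonconvex proximal step
    return w
```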

2017


  • Time-Varying Graph Signal Reconstruction
    K. Qiu, X. Mao, X. Shen, X. Wang, T. Li, and Y. Gu
    IEEE Journal of Selected Topics in Signal Processing, 11(6):870-883, Sept. 2017
    [IEEE] [Abstract]

    Signal processing on graphs is an emerging research field dealing with signals that live on an irregular domain captured by a graph, and it has been applied to sensor networks, machine learning, climate analysis, etc. Existing works on sampling and reconstruction of graph signals mainly studied static bandlimited signals. However, many real-world graph signals are time-varying and evolve smoothly, so instead of the signals themselves being bandlimited or smooth on the graph, it is more reasonable to assume that their temporal differences are smooth on the graph. In this paper, a new batch reconstruction method for time-varying graph signals is proposed by exploiting the smoothness of the temporal difference signals, and the uniqueness of the solution to the corresponding optimization problem is theoretically analyzed. Furthermore, driven by practical applications with real-time requirements, huge data sizes, the lack of a computing center, or communication difficulties between non-neighboring vertices, an online distributed method is proposed that exploits local properties of the temporal difference operator and the graph Laplacian matrix. Experiments on a variety of synthetic and real-world datasets demonstrate the excellent performance of the proposed methods.
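
    As a rough illustration of the batch formulation described above (the exact objective and notation in the paper may differ), one natural way to penalize non-smooth temporal differences is
    $$\min_{X\in\mathbb{R}^{N\times T}} \ \frac{1}{2}\,\| J\circ (X-Y)\|_F^2 \;+\; \frac{\lambda}{2}\,\operatorname{tr}\!\big((X D_h)^\top L\,(X D_h)\big),$$
    where $Y$ collects the partially sampled snapshots, $J$ is the sampling mask, $D_h$ is the temporal difference operator, and $L$ is the graph Laplacian, so the regularizer is small exactly when the difference signals $X D_h$ are smooth on the graph.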

  • Cross-label Suppression: A Discriminative and Fast Dictionary Learning with Group Regularization
    X. Wang and Y. Gu
    IEEE Transactions on Image Processing, 26(8):3859 - 3873, May 10, 2017
    [IEEE] [Abstract]

    This paper addresses image classification by efficiently learning a compact and discriminative dictionary. Given a structured dictionary in which each atom (a column of the dictionary matrix) is associated with a label, we propose a cross-label suppression constraint to enlarge the differences among representations of different classes. Meanwhile, we introduce group regularization to encourage representations to preserve the label properties of the original samples, i.e., representations of samples from the same class are encouraged to be similar. With cross-label suppression, we do not resort to the frequently used $\ell_0$-norm or $\ell_1$-norm for coding, and thus obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are developed to take full advantage of the learnt dictionary. Extensive experiments on six datasets covering face recognition, object categorization, scene classification, texture recognition, and sports action categorization show that the proposed approach outperforms many recently presented dictionary learning algorithms in both recognition accuracy and computational efficiency.

    [arXiv] [pdf] [code]
  • Off-grid DOA estimation with nonconvex regularization via joint sparse representation
    Q. Liu, H. C. So, and Y. Gu
    Signal Processing, 140:171-176, Nov. 2017
    [ELSEVIER] [Abstract]

    In this paper, we address the problem of direction-of-arrival (DOA) estimation using sparse representation. Since the performance of on-grid DOA estimation methods degrades when the unknown DOAs do not lie on the angular grid, we consider an off-grid model obtained via Taylor series expansion, which however introduces dictionary mismatch. The resulting problem is nonconvex with respect to the sparse signal and the perturbation matrix. We develop a novel objective function regularized by a nonconvex sparsity-inducing penalty for off-grid DOA estimation, which is jointly convex with respect to the sparse signal and the perturbation matrix. Alternating minimization is then applied to this joint sparse representation problem to recover the signal and the perturbation matrix. Numerical examples verify the effectiveness of the proposed method, which achieves more accurate DOA estimation and faster implementation than conventional sparsity-aware and state-of-the-art off-grid schemes.

    [pdf]
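
    To make the dictionary mismatch above concrete, a standard first-order off-grid model (shown here only as an illustration; the paper's exact formulation may differ) is
    $$\mathbf{y} \;=\; \big(\mathbf{A} + \mathbf{B}\,\operatorname{diag}(\boldsymbol{\beta})\big)\,\mathbf{x} \;+\; \mathbf{n},$$
    where $\mathbf{A}$ collects the steering vectors on the nominal grid, $\mathbf{B}$ their derivatives with respect to angle, $\boldsymbol{\beta}$ the bounded grid offsets, $\mathbf{x}$ the sparse source vector, and $\mathbf{n}$ the noise; $\mathbf{x}$ and $\boldsymbol{\beta}$ are then estimated by alternating minimization of the regularized fitting objective.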
  • Principal Angles Preserving Property of Gaussian Random Projection for Subspaces
    Y. Jiao, G. Li, and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 318-322, Nov. 14-16, 2017, Montreal, Canada
    [pdf] [Abstract]

    As the complexity of solving large-scale problems increases, dimension reduction becomes more and more important. In earlier work, we found that, with high probability, the affinity and distance between two subspaces are concentrated around their estimates after Gaussian random projection. When the ambient dimension after projection is sufficiently large, the affinity and distance between two subspaces remain almost unchanged. In this paper, we extend these results to principal angles (or canonical angles), which reveal the relationship between two subspaces from a more fundamental viewpoint. We theoretically prove that each principal angle is also concentrated around its estimate with high probability, and that it remains almost unchanged when the ambient dimension after projection is sufficiently large. Numerical simulations verify the theoretical results.

    [code]
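
    A quick numerical illustration of the statement above: draw two partially overlapping random subspaces, compress their bases with a Gaussian random projection, and compare the principal angles before and after. The dimensions below are arbitrary choices for the demo.

```python
# Principal angles before and after Gaussian random projection (numerical demo).
import numpy as np
from scipy.linalg import orth, subspace_angles

rng = np.random.default_rng(0)
n, d1, d2, m = 1000, 5, 7, 200                   # ambient dim, subspace dims, projected dim

U = orth(rng.standard_normal((n, d1)))           # basis of the first subspace
# The second subspace shares three perturbed directions with the first one.
V = orth(np.hstack([U[:, :3] + 0.3 * rng.standard_normal((n, 3)),
                    rng.standard_normal((n, d2 - 3))]))
P = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian random projection

print(np.degrees(subspace_angles(U, V)))          # principal angles in the ambient space
print(np.degrees(subspace_angles(P @ U, P @ V)))  # close to the above when m is large
```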
  • Sparse Subspace Clustering using Square-root Penalty
    L. Meng, X. Shen, and Y. Gu
    IEEE Conference on Digital Signal Processing (DSP), 1-5, Aug. 23-25, 2017, London, UK 
    [IEEE] [Abstract]

    We study the sparse subspace clustering problem in the presence of both sparse outliers and additive Gaussian noise, based on sparse self-representation of the data. We propose a convex optimization problem which not only induces sparsity in the representation coefficients and the outliers, but also adopts a square-root penalty to improve robustness against Gaussian noise. An algorithm based on the alternating direction method of multipliers (ADMM) is then devised as a solver for the proposed problem. As a real application, the proposed model and algorithm are applied to motion segmentation. The performance is demonstrated and analyzed on synthetic data, and, more importantly, the effectiveness is verified on real data. Compared with the reference method, numerical results show that the new method achieves higher clustering accuracy and that the parameter choice is less sensitive to the noise level.

    [pdf] [code]
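
    One plausible way to write the proposed problem (the paper's exact formulation may differ) is
    $$\min_{\mathbf{C},\,\mathbf{E}}\ \|\mathbf{Y}-\mathbf{Y}\mathbf{C}-\mathbf{E}\|_F \;+\; \lambda_1\|\mathbf{C}\|_1 \;+\; \lambda_2\|\mathbf{E}\|_1 \quad \text{s.t.}\ \operatorname{diag}(\mathbf{C})=\mathbf{0},$$
    where the columns of $\mathbf{Y}$ are the data points, $\mathbf{C}$ is the sparse self-representation, $\mathbf{E}$ collects the sparse outliers, and the unsquared Frobenius data term is the square-root penalty that makes the fit robust to the Gaussian noise level.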
  • Disciplined multi-convex programming
    X. Shen, S. Diamond, M. Udell, Y. Gu, and S. Boyd
    Chinese Control and Decision Conference (CCDC), 895-900, May 28-30, 2017, Chongqing, China
    [IEEE] [Abstract]

    A multi-convex optimization problem is one in which the variables can be partitioned into sets over which the problem is convex when the other variables are fixed. Multi-convex problems are generally solved approximately using variations on alternating or cyclic minimization. Multi-convex problems arise in many applications, such as nonnegative matrix factorization, generalized low rank models, and structured control synthesis, to name just a few. In most applications to date the multi-convexity is simple to verify by hand. In this paper we study the automatic detection and verification of multi-convexity using the ideas of disciplined convex programming. We describe an implementation of our proposed method that detects and verifies multi-convexity, and then invokes one of the general solution methods.

    [arXiv]
  • Distance-preserving property of random projection for subspaces
    G. Li and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3959-3963, March 5-9, 2017, New Orleans, USA
    [IEEE] [Abstract]

    Dimension reduction plays an essential role in decreasing the complexity of solving large-scale problems. The well-known Johnson-Lindenstrauss (JL) Lemma and Restricted Isometry Property (RIP) admit the use of random projection to reduce the dimension while preserving Euclidean distance, which has led to the boom of sparsity-related signal processing. Recently, successful applications of sparse models in computer vision and machine learning have increasingly hinted that the underlying structure of high-dimensional data looks more like a union of subspaces (UoS). In this paper, motivated by the JL Lemma, we study for the first time the distance-preserving property of Gaussian random projection matrices for two subspaces, based on knowledge of the Grassmann manifold. We theoretically prove that with high probability the \emph{affinity} and the \emph{distance} between two compressed subspaces are concentrated around their estimates. Numerical experiments verify the theoretical work.

    [pdf]

2016


  • Local Measurement and Reconstruction for Noisy Bandlimited Graph Signals
    X. Wang, J. Chen, and Y. Gu
    Signal Processing, 129: 119-129, Dec. 2016
    [ELSEVIER] [Abstract]

    Signals and information related to networks can be modeled and processed as graph signals. It has been shown that if a graph signal is smooth enough to satisfy certain conditions, it can be uniquely determined by its decimation on a subset of vertices. However, in potential applications such as sensor networks with clustering structure, local combinations of the signals on different sets of vertices are sometimes obtained instead of the decimation. In this work, a generalized sampling scheme is proposed based on local measurements, each of which is a linear combination of signals associated with local vertices. It is proved that bandlimited graph signals can be perfectly reconstructed from the local measurements through the proposed iterative local measurement reconstruction (ILMR) algorithm. Theoretical results on ILMR, including its convergence and denoising performance, are given. The optimal partition of local sets and the local weights are then studied to minimize the error bound. It is shown that in noisy scenarios the proposed local measurement scheme is more robust than the traditional decimation scheme.

    [arXiv] [pdf]
  • Square-root Lasso with Non-convex Regularization: An ADMM Approach
    X. Shen, L. Chen, Y. Gu, and H. C. So
    IEEE Signal Processing Letters, 23(7):934-938, July 2016
    [IEEE] [Abstract]

    The square-root least absolute shrinkage and selection operator (Lasso), a variant of the Lasso, has recently been proposed with the key advantage that its optimal regularization parameter is independent of the noise level in the measurements. In this letter, we introduce a class of nonconvex sparsity-inducing penalties to the square-root Lasso to achieve better sparse recovery performance than the convex counterpart. The resulting formulation is converted to a nonconvex but multiconvex optimization problem, i.e., it is convex in each block of variables. The alternating direction method of multipliers is applied as the solver, and two efficient algorithms are devised for row-orthonormal and general sensing matrices, respectively. Numerical experiments are conducted to evaluate the performance of the proposed methods.

    [pdf]
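
    For concreteness, the formulation studied here can be sketched (up to the paper's exact notation) as
    $$\min_{\mathbf{x}}\ \|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2 \;+\; \lambda\, g(\mathbf{x}),$$
    where the unsquared residual norm is the square-root Lasso data term whose optimal $\lambda$ does not depend on the noise level, and $g$ is a nonconvex sparsity-inducing penalty; after variable splitting the problem becomes multiconvex and is handled block by block with ADMM.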
  • Robust Transmit Beamforming for Parameter Estimation Using Distributed Sensors
    J. Zhu, R. S. Blum, X. Lin, and Y. Gu
    IEEE Communications Letters, 20(7):1329 - 1332, July 2016
    [IEEE] [Abstract]

    In this letter, we study the problem of estimating a complex parameter in a wireless sensor network under individual power constraints and bounded channel uncertainty, where the channel state information is assumed to be imperfect at both the sensors and the fusion center. Transmit beamformers and a linear estimator that minimize the maximum mean square error are found. Although this problem is nonconvex, it can be relaxed to a semidefinite programming problem by re-parameterization and semidefinite relaxation. Surprisingly, we find that the relaxation is tight and a globally optimal solution can be found. Finally, numerical simulations are performed to evaluate the performance of the robust estimator.

    [pdf]
  • Disciplined Convex-Concave Programming
    X. Shen, S. Diamond, Y. Gu, and S. Boyd
    IEEE Conference on Decision and Control (CDC), 1009-1014, Dec. 12-14, 2016, Las Vegas, USA
    [IEEE] [arXiv]
  • Out-of-label suppression dictionary learning with cluster regularization
    X. Wang and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 307-311, Dec. 7-9, 2016, Washington, USA
    [IEEE] [Abstract]

    This paper addresses the problem of learning a discriminative dictionary from training signals. Given a structured dictionary, each atom of which has a corresponding label, a signal should be mainly constructed from its closely associated atoms. Besides, the representations for the same class ought to be very close and form a cluster. We therefore present an out-of-label suppression dictionary model with cluster regularization to amplify the discriminative power. With out-of-label suppression, we do not adopt the $\ell_0$-norm or $\ell_1$-norm for regularization. Meanwhile, two simple classifiers are developed to take full advantage of the learnt dictionary. The effectiveness of the proposed dictionary model has been evaluated on two popular visual benchmarks.

  • Graph-based Reconstruction of Time-varying Spatial Signals
    K. Qiu, X. Wang, T. Li, and Y. Gu
    IEEE International Conference on Digital Signal Processing (DSP), 355 - 359, Oct. 16-18, 2016, Beijing, China
    [IEEE] [pdf] [slides]
  • Performance Estimation of Sparse Signal Recovery Under Bernoulli Random Projection with Oracle Information
    R. Song, L. Chen, and Y. Gu
    IEEE International Conference on Digital Signal Processing (DSP), 695 - 699, Oct. 16-18, 2016, Beijing, China
    [IEEE] [pdf] [slides]
  • Improving the reconstruction accuracy of MR imaging using Zero-point Attracting Projection
    K. Qiu, H. Guo, and Y. Gu
    Chinese Control Conference (CCC), 5212-5217, July 27-29, 2016, Chengdu, China
    [IEEE] [pdf] [slides]
  • Beyond Union of Subspaces: Subspace Pursuit on Grassmann Manifold for Data Representation
    X. Shen, Y. Gu, and H. Krim
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 4079-4083, March 20-25, 2016, Shanghai, China
    [IEEE]

2015


  • Robustness of Sparse Recovery via F-minimization: A Topological Viewpoint
    J. Liu, J. Jin, and Y. Gu
    IEEE Transactions on Information Theory, 61(7):3996-4014, July 2015
    [IEEE] [pdf]
  • Local-set-based Graph Signal Reconstruction
    X. Wang, P. Liu, and Y. Gu
    IEEE Transactions on Signal Processing, 63(9):2432-2444, May 1, 2015
    [IEEE] [arXiv] [pdf]
  • A Distributed Tracking Algorithm for Reconstruction of Graph Signals
    X. Wang, M. Wang, and Y. Gu
    IEEE Journal of Selected Topics in Signal Processing, 9(4):728-740, June 2015
    [IEEE] [arXiv] [pdf]
  • Block-Sparsity-Induced Adaptive Filter for Multi-Clustering System Identification
    S. Jiang and Y. Gu
    IEEE Transactions on Signal Processing, 63(20):5318-5330, Oct. 15, 2015
    [IEEE] [arXiv] [pdf] [code]
  • Parameter Estimation from Quantized Observations in Multiplicative Noise Environments
    J. Zhu, X. Lin, R. S. Blum, and Y. Gu
    IEEE Transactions on Signal Processing, 63(15):4037-4050, Aug. 1, 2015
    [IEEE] [pdf]
  • On the Null Space Constant for Lp Minimization
    L. Chen and Y. Gu
    IEEE Signal Processing Letters, 22(10):1600-1603, Oct. 2015
    [IEEE] [arXiv] [pdf]
  • Restricted Isometry Property of Subspace Projection Matrix Under Random Compression
    X. Shen and Y. Gu
    IEEE Signal Processing Letters, 22(9):1326-1330, Sept. 2015
    [IEEE] [arXiv] [pdf]
  • CAR: Coding-Aware Opportunistic Routing for Unicast Traffic in Wireless Mesh Networks
    H. Liu, H. Yang, Y. Wang, B. Wang, and Y. Gu
    Journal of Network and Systems Management, 23(4):1104–1124, Oct. 2015
    [Springer]
  • Fast Sparse Recovery via Non-Convex Optimization
    L. Chen and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 1275-1279, Dec. 14-16, 2015, Orlando, USA
    [IEEE] [code]
  • Anti-sparse Representation for Continuous Function by Dual Atomic Norm with Application in OFDM
    X. Shen and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 270-274, Dec. 14-16, 2015, Orlando, USA
    [IEEE]
  • Generalized Graph Signal Sampling and Reconstruction
    X. Wang, J. Chen, and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 567-571, Dec. 14-16, 2015, Orlando, USA
    [IEEE]
  • Optimizing Spectral Diversity for Graph Signal Coarsening
    P. Liu, X. Wang, and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 858-862, Dec. 14-16, 2015, Orlando, USA
    [IEEE]
  • Phase retrieval using iterative projections: Dynamics in the large systems limit
    G. Li, Y. Gu, and Y. M. Lu
    Annual Allerton Conference on Communication, Control, and Computing (Allerton), 1114 - 1118, Sept. 29-Oct. 2, 2015, Monticello, USA
    [IEEE]
  • Subspace Projection Matrix Completion on Grassmann Manifold
    X. Shen and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3297-3301, Apr. 19-24, 2015, Brisbane, Australia
    [IEEE] [slides]
  • Local and Global Optimality of Lp Minimization for Sparse Recovery
    L. Chen and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3596-3600, Apr. 19-24, 2015, Brisbane, Australia
    [IEEE] [poster]
  • Downsampling for Sparse Subspace Clustering
    X. Mao, X. Wang, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3806-3810, Apr. 19-24, 2015, Brisbane, Australia
    [IEEE] [poster] [code]
  • Dynamic Zero-point Attracting Projection for Time-varying Sparse Signal Recovery
    J. Zhou, L. Chen, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 5485-5489, Apr. 19-24, 2015, Brisbane, Australia
    [IEEE] [poster]
  • Averaging Random Projection: A Fast Online Solution for Large-Scale Constrained Stochastic Optimization
    J. Liu, Y. Gu, and M. Wang
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3586-3590, Apr. 19-24, 2015, Brisbane, Australia
    [IEEE] [poster]

2014


  • The Convergence Guarantees of a Non-convex Approach for Sparse Recovery
    L. Chen and Y. Gu
    IEEE Transactions on Signal Processing, 62(15):3754-3767, Aug. 1, 2014
    [IEEE] [arXiv] [pdf] [code]
  • Maximum Likelihood Estimation from Sign Measurements with Sensing Matrix Perturbation
    J. Zhu, X. Wang, X. Lin, and Y. Gu
    IEEE Transactions on Signal Processing, 62(15):3741-3753, 2014
    [IEEE] [arXiv] [pdf]
  • Compressed Subspace Clustering: A Case Study
    X. Mao and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 616-620, Dec. 3-5, 2014, Atlanta, USA
    [IEEE] [poster]
  • Iterative Reconstruction of Graph Signal in Low-frequency Subspace
    X. Wang, P. Liu, and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 611-615, Dec. 3-5, 2014, Atlanta, USA
    [IEEE]
  • Graph Signal Coarsening: Dimensionality Reduction in Irregular Domain
    P. Liu, X. Wang, and Y. Gu
    IEEE Global Conference on Signal and Information Processing (GlobalSIP), 966-970, Dec. 3-5, 2014, Atlanta, USA
    [IEEE]
  • Learning Distributed Jointly Sparse Systems by Collaborative LMS
    Y. Gu and M. Wang
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 7278-7282, May 4-9, 2014, Florence, Italy
    [IEEE]
  • Coarsening Graph Signal with Spectral Invariance
    P. Liu, X. Wang, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1075-1079, May 4-9, 2014, Florence, Italy
    [IEEE] [code]
  • The Convergence Guarantees of a Non-convex Approach for Sparse Recovery Using Regularized Least Squares
    L. Chen and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3374-3378, May 4-9, 2014, Florence, Italy
    [IEEE] [code]
  • Robust Off-Grid Recovery from Compressed Measurements
    X. Shen, J. K. Romberg, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3379-3383, May 4-9, 2014, Florence, Italy
    [IEEE] [code]
  • On the Theoretical Analysis of Cross Validation in Compressive Sensing
    J. Zhang, L. Chen, P. T. Boufounos, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3394-3398, May 4-9, 2014, Florence, Italy
    [IEEE] [code]
  • Sparse Constraint Affine Projection Algorithm with Parallel Implementation and Application in Compressive Sensing
    D. Yin, H. C. So, and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 7288-7292, May 4-9, 2014, Florence, Italy
    [IEEE] [code]
  • Defense Against Sybil Attacks in Directed Social Networks
    P. Liu, X. Wang, X. Che, Z. Chen, and Y. Gu
    International Conference on Digital Signal Processing (DSP), 239-243, Aug. 20-23, 2014, Hong Kong, China
    [IEEE] [slides] [code]
  • Robust Recovery of Low-Rank Matrices via Non-Convex Optimization
    L. Chen and Y. Gu
    International Conference on Digital Signal Processing (DSP), 355-360, Aug. 20-23, 2014, Hong Kong, China
    [IEEE] [slides] [code]
  • Robust Sparse Recovery via Non-Convex Optimization
    L. Chen and Y. Gu
    International Conference on Digital Signal Processing (DSP), 742-747, Aug. 20-23, 2014, Hong Kong, China
    [IEEE] [slides]

2013


  • Perturbation Analysis of Orthogonal Matching Pursuit
    J. Ding, L. Chen, and Y. Gu
    IEEE Transactions on Signal Processing, 61(2):398-410, Jan. 15, 2013
    [IEEE] [arXiv] [pdf]
  • On the Performance Bound of Sparse Estimation With Sensing Matrix Perturbation
    Y. Tang, L. Chen, and Y. Gu
    IEEE Transactions on Signal Processing, 61(17):4372-4386, Sept. 1, 2013
    [IEEE] [arXiv] [pdf]
  • Oracle-Order Recovery Performance of Greedy Pursuits with Replacement Against General Perturbations
    L. Chen and Y. Gu
    IEEE Transactions on Signal Processing, 61(18):4625-4636, Sept. 15, 2013
    [IEEE] [arXiv] [pdf]
  • Edge Balance Ratio: Power Law from Vertices to Edges in Directed Complex Networks
    X. Wang, Z. Chen, P. Liu, and Y. Gu
    IEEE Journal of Selected Topics in Signal Processing, 7(2):184-194, April 2013
    [IEEE] [arXiv] [pdf]
  • Robust Zero-point Attraction Least Mean Square Algorithm on Near Sparse System Identification
    J. Jin, Q. Qu, and Y. Gu
    IET Signal Processing, 7(3): 210-218, May 2013
    [IEEE] [arXiv] [pdf]
  • Combined Power Control and Link Selection in Device-to-device Enabled Cellular Systems
    Y. Cheng, Y. Gu, and X. Lin
    IET Communications, 7(12):1221-1230, Aug. 13, 2013
    [IEEE]
  • Relation between Exact and Robust Recovery for F-minimization: A Topological Viewpoint
    J. Liu, J. Jin, and Y. Gu
    IEEE International Symposium on Information Theory (ISIT), 859-863, July 7-12, 2013, Istanbul, Turkey
    [IEEE]
  • A Greedy Approach to Linear Prediction with Sparse Residuals
    Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 8154-8158, May 26-31, 2013, Vancouver, Canada
    [IEEE]
  • From Least Squares to Sparse: A Non-convex Approach with Guarantee
    L. Chen and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 5875-5879, May 26-31, 2013, Vancouver, Canada
    [IEEE]
  • Adaptive Speech Enhancement Using Sparse Prior Information
    Z. Xiang and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 7025-7029, May 26-31, 2013, Vancouver, Canada
    [IEEE]
  • Backtracking Matching Pursuit with Supplement Set of Arbitrary Size
    L. Chen and Y. Gu
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 6541-6545, May 26-31, 2013, Vancouver, Canada
    [IEEE]
  • On Estimating the Loss of Signal-to-Noise Ratio in Compressive Sensing based Systems
    Y. Wang, L. Chen, and Y. Gu
    IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP), 54-57, July 6-10, 2013, Beijing, China
    [IEEE]

2012 


  • Performance Analysis of L0 Norm Constraint Least Mean Square Algorithm
    G. Su, J. Jin, Y. Gu, and J. Wang
    IEEE Transactions on Signal Processing, 60(5): 2223-2235, May 2012
    [IEEE] [arXiv] [pdf]
  • Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection
    X. Wang, Y. Gu, and L. Chen
    IEEE Transactions on Signal Processing, 60(8): 4081-4093, Aug. 2012
    [IEEE] [arXiv] [pdf]
  • Retrieval of Sparse Solutions of Multiple-Measurement Vectors via Zero-point Attracting Projection
    Y. You, L. Chen, Y. Gu, W. Feng, and H. Dai
    Signal Processing, 92(12): 3075-3079, 2012
    [ELSEVIER] [arXiv] [pdf]
  • Zero-point attracting projection algorithm for sequential compressive sensing
    Y. You, J. Jin, W. Duan, N. Liu, Y. Gu, and J. Yang
    IEICE Electronics Express, 9(4):314-319, 2012
    [arXiv] [pdf]
  • TCP with Hop-Oriented Network Coding in Multi-Radio Multi-Channel Wireless Mesh Networks
    H. Liu and Y. Gu
    IET Networks, 1(3):171-180, Sept. 2012
    [IEEE] [pdf]
  • Incorporating Network Coding into TCP Grounded on Network Utility Maximization in Multi-Radio Multi-Channel Wireless Mesh Networks
    H. Liu and Y. Gu
    China Communications, 9(6):28-35, 2012
    [pdf]
  • Performance Enhancement and Utility Maximization for TCP in Wireless Mesh Networks
    H. Liu, J. Chen, S. Song, and Y. Gu
    China Communications, 9(4):35-44, 2012
    [pdf]
  • Efficient Recovery of Block Sparse Signals via Zero-point Attracting Projection
    J. Liu, J. Jin, and Y. Gu
    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3333-3336, March 25-30, 2012, Kyoto, Japan
    [IEEE] [arXiv]
  • Robustness of Orthogonal Matching Pursuit for Multiple Measurement Vectors in Noisy Scenario
    J. Ding, L. Chen, and Y. Gu
    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3813-3816, March 25-30, 2012, Kyoto, Japan
    [IEEE]
  • Quantization Reference Voltage of the Modulated Wideband Converter
    Y. Wang, L. Chen, and Y. Gu
    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3681-3684, March 25-30, 2012, Kyoto, Japan
    [IEEE] [arXiv]
  • Greedy Pursuits: Stability of Recovery Performance against General Perturbations
    L. Chen, J. Chen, and Y. Gu
    International Conference on Computing, Networking and Communications (ICNC), 897-901, Jan. 30 - Feb. 2, 2012, Hawaii, USA
    [IEEE]
  • Performance Analysis of Orthogonal Matching Pursuit under General Perturbations
    J. Ding, L. Chen, and Y. Gu
    International Conference on Computing, Networking and Communications (ICNC), 892-896, Jan. 30 - Feb. 2, 2012, Hawaii, USA
    [IEEE]

Before 2012


  • Optical Analog-to-Digital Conversion System based on Compressive Sampling
    H. Nan, Y. Gu, and H. Zhang
    IEEE Photonics Technology Letters, 23(2):67-69, Jan. 15, 2011
    [IEEE] [pdf]
  • A stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework
    J. Jin, Y. Gu, and S. Mei
    IEEE Journal of Selected Topics in Signal Processing, 4(2):409-420, April 2010
    [IEEE] [arXiv] [pdf]
  • L0 Norm Constraint LMS for Sparse System Identification
    Y. Gu, J. Jin, and S. Mei
    IEEE Signal Processing Letters, 16(9):774-777, Sept. 2009
    [IEEE] [arXiv] [pdf]
  • Particle filter based multi-camera integration for face 3D-pose tracking
    Y. Gu, Y. Chen, Z. Jiang, and K. Tang
    International Journal of Wavelets, Multiresolution and Information Processing, 4(4):677-690, 2006
    [WOS]
  • Novel adaptive particle filters in robot localization
    Z. Jiang and Y. Gu
    Acta Automatica Sinica, 31(6):833-838, 2005
    [AAS]
  • LMS algorithm with gradient descent filter length
    Y. Gu, K. Tang, and H. Cui
    IEEE Signal Processing Letters, 11(3): 305-307, March 2004
    [IEEE]
  • Superior step-size theorem and its application - Parallel variable step-size LMS filters algorithm
    Y. Gu, K. Tang, and H. Cui
    Science in China Series F-Information Sciences, 47(2):151-160, March 2004
    [Springer]
  • Convergence analysis of deficient-length LMS filter and optimal length sequence to model exponential decay impulse response
    Y. Gu, K. Tang, H. Cui, and W. Du
    IEEE Signal Processing Letters, 10(1):4-7, Jan. 2003
    [IEEE]
  • Optimal variable step-size LMS model and algorithm with independence assumption
    Y. Gu, K. Tang, H. Cui, and W. Du
    Science in China Series F-Information Sciences, 46(6):409-419, Dec. 2003
    [Springer]
  • Modifier formula on mean square convergence of LMS algorithm
    Y. Gu, K. Tang, H. Cui, and W. Du
    IEE Electronics Letters, 38(19):1147-1148, Sept. 2002
    [IEEE]
  • A smart tracking-based jamming scheme for signals with periodic synchronization sequences
    Y. Chen, F. He, J. Yan, X. Chen, and Y. Gu
    International Conference on Wireless Communications and Signal Processing (WCSP), 1-5, Nov. 9-11, 2011, Nanjing, China
    [IEEE]
  • A New Mechanism to Incorporate Network Coding into TCP in Multi-Radio Multi-Channel Wireless Mesh Networks
    H. Liu, J. Chen, and Y. Gu
    International Conference on Mobile Ad-hoc and Sensor Networks (MSN), 256-260, Dec. 16-18, 2011, Beijing, China
    [IEEE]
  • Dual-pricing Policy for Controller-side Strategies in Demand Side Management
    S. Yue, J. Chen, Y. Gu, C. Wu, and Y. Shi
    IEEE International Conference on Smart Grid Communications (SmartGridComm), 357-362, Oct. 17-20, 2011, Brussels, Belgium
    [IEEE]
  • SAUR: A Service Aided UWB Routing for Wireless Mesh Networks
    S. Song, F. Zhu, Y. Gu, X. Guo, and Y. Wei
    IEEE International Conference on Communications (ICC), 321-327, May 23-27, 2010, Cape Town, South Africa
    [IEEE]
  • Sparse constraint multidelay frequency adaptive filtering algorithm for echo cancellation
    J. Jin, Y. Gu, and S. Mei
    International Conference on Audio, Language, and Image Processing (ICALIP), 362-366, Nov. 23-25, 2010, Shanghai, China
    [IEEE]
  • A calibration system and perturbation analysis for the Modulated Wideband Converter
    L. Chen, J. Jin, and Y. Gu
    IEEE International Conference on Signal Processing (ICSP), 78-81, Oct. 24-28, 2010, Beijing, China
    [IEEE]
  • Performance analysis of L0-LMS with uncorrelated Gaussian input signal
    G. Su, J. Jin, and Y. Gu
    IEEE International Conference on Signal Processing (ICSP), 235-238, Oct. 24-28, 2010, Beijing, China
    [IEEE]
  • An improved greedy algorithm for signal recovery from random measurements
    J. Jin, Y. Gu, and S. Mei
    IEEE International Conference on Signal Processing (ICSP), 82-85, Oct. 24-28, 2010, Beijing, China
    [IEEE]
  • Sparse LMS for system identification
    Y. Chen, Y. Gu, and A.O. Hero
    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3125-3128, Apr. 19-24, 2009, Taipei, Taiwan
    [IEEE]
  • Detection of Roads in SAR Images using Particle Filter
    Y. Chen, Q. Yang, Y. Gu, and J. Yang
    IEEE International Conference on Image Processing (ICIP), 2337 - 2340, 8-11 Oct. 2006, Atlanta, USA
    [IEEE]
  • Novel Color-based Target Representation for Visual Object Tracking
    Y. Gu, Y. Chen, and J. Wang
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), II201-204, May 14-19, 2006, Toulouse, France
    [IEEE]
  • Parallel NLMS Filters with Stochastic Active Taps and Step-sizes for Sparse System Identification
    Y. Li, Y. Gu, and K. Tang
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), III109-112, May 14-19, 2006, Toulouse, France
    [IEEE]
  • Enhanced Stochastic Taps NLMS Filter with Efficient Sparse Taps Localization
    X. Liu, Y. Li, Y. Gu, and K. Tang
    International Conference on Signal Processing (ICSP), Nov. 16-20, 2006, Beijing, China
    [IEEE]
  • Particle filter based road detection in SAR image
    Y. Chen, Y. Gu, J. Gu, and J. Yang
    IEEE International Symposium on Microwave, Antenna, Propagation and EMC Technologies for Wireless Communications (MAPE), 301-305, Aug. 8-12, 2005, Beijing, China
    [IEEE]
  • Network Echo Canceller with Active Taps Stochastic Localization
    Y. Gu, Y. Chen, and K. Tang
    IEEE International Symposium on Communications and Information Technologies (ISCIT), 556-559, Oct. 12-14, 2005, Beijing, China
    [IEEE]
  • Sufficient condition for tap-length gradient adaptation of LMS algorithm
    Y. Gu, K. Tang, and H. Cui
    IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), II461-464, May 17-21, 2004, Montreal, Canada
    [IEEE]
  • Select Eigenfaces For Face Recognition with One Training Sample per Subject
    J. Wang, Y. Gu, K. N. Plataniotis, and A. N. Venetsanopoulos
    International Conference on Control, Automation, Robotics and Vision (ICARCV), 391-396, Dec. 6-9, 2004, Kunming, China
    [IEEE]
  • Exact convergence analysis of LMS algorithm for tapped-delay I.I.D. input with large stepsize
    Y. Gu, K. Tang, H. Cui, and W. Du
    IEEE Region 10 Conference on Computers, Communications, Control, and Power Engineering (TENCON), 1298-1301, Oct. 28-31, 2002, Beijing, China
    [IEEE]
  • Optimal stepsize update equation in nonstationary environment and OVS-LMSII algorithm
    Y. Gu, K. Tang, H. Cui, and W. Du
    IEEE International Conference on Communication, Circuits, and Systems (ICCCS), 1252-1256, June 29-July 1, 2002, Chengdu, China
    [IEEE]