Compressive sensing (CS) makes heavy use of optimization to determine or approximate a sparse solution to an underdetermined linear system. The two essential issues are:
1. How can we guarantee that a sparse solution is unique?
2. How can we guarantee that we can find or approximate the sparse solution with a low-complexity algorithm?
The latter problem is usually expressed as an optimization problem, especially in the presence of noise, attempting to find a sparse minimizer of a least-squares cost function:

min_x f(x) subject to ||x||_0 <= s,

where f(x) = ||y - Ax||_2^2 is the least-squares cost function to be minimized and x is the sparse optimization variable.
The commonly used least-squares cost function, usually motivated by the presence of Gaussian noise in the measurements, is quite useful in most signal processing applications. However, more general cost functions are often necessary in other applications, such as machine learning, classification and detection, and CS in the presence of severe quantization.
For that reason, Sohail Bahmani, Bhiksha Raj, and I set out to understand and solve the problem

min_x f(x) subject to ||x||_0 <= s,

for general cost functions f(x), not necessarily least squares.
Towards that end, we have developed a new algorithm, the Gradient Support Pursuit (GraSP), together with appropriate theory and solution guarantees. Our work generalizes the well-known CoSaMP algorithm, the Restricted Isometry Property (RIP), and the signal reconstruction guarantees common in the CS literature. To demonstrate the algorithm in practice we also applied it to the sparse logistic regression problem [1, 2, 3]. Some of my earlier work on 1-bit CS algorithms also uses the same ideas, although the theoretical guarantees do not immediately apply [1]. The overall algorithm is summarized in the figure below.
Pursuing the Support of the Gradient
A fundamental step in all sparse recovery algorithms is the correlation of the residual at the current signal estimate with the measurement matrix, i.e., the operation A^T(y - Ax̂). The output of this operation is a good indicator of the important components to be used in describing the signal and is often referred to as a "proxy step," a correlation step, or a matched filter. This proxy also happens to be the gradient of the least-squares cost f(x) = ||y - Ax||_2^2 evaluated at the current signal estimate x̂, up to a sign and scaling. This is the main intuition behind GraSP: if we want to reduce f(x) by changing only a small (sparse) set of coefficients, the largest gradient components indicate the locations of the coefficients with the most impact on the cost.
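As a quick numerical sanity check of this identity (the toy dimensions and random data below are chosen arbitrarily for illustration), the correlation proxy is exactly the negative gradient of the least-squares cost f(x) = (1/2)||y - Ax||_2^2:

```python
import numpy as np

# Arbitrary toy problem: any A, y, and current estimate x_hat will do.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 25))
y = rng.standard_normal(10)
x_hat = rng.standard_normal(25)

# CoSaMP's correlation / matched-filter proxy on the residual.
proxy = A.T @ (y - A @ x_hat)

# Analytic gradient of f(x) = 1/2 * ||y - A x||_2^2 at x_hat.
grad = A.T @ (A @ x_hat - y)

print(np.allclose(proxy, -grad))  # prints True
```

The sign difference is immaterial: the proxy and the gradient select the same large-magnitude components.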
Generalizing the proxy step of CoSaMP to a general gradient computation is one of the two major modifications introduced in GraSP. The second generalizes a restricted pseudoinverse computation in CoSaMP, replacing it with a restricted function minimization.
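Putting the two modifications together, one possible sketch of the GraSP loop looks as follows. This is a simplified illustration, not the authors' reference implementation: the function names `grad`, `restricted_min`, and the iteration count are our own choices, and the loop follows the CoSaMP template (identify 2s large gradient entries, merge supports, minimize over the merged support, prune to s terms).

```python
import numpy as np

def grasp(grad, restricted_min, n, s, iters=20):
    """Sketch of a GraSP-style iteration (names and signatures are illustrative).

    grad(x)           -- gradient of the cost f at the point x
    restricted_min(T) -- a length-n vector minimizing f over support set T
    n, s              -- ambient dimension and target sparsity
    """
    x = np.zeros(n)
    for _ in range(iters):
        z = grad(x)                                        # generalized proxy step
        Z = np.argsort(np.abs(z))[-2 * s:]                 # 2s largest gradient entries
        T = np.union1d(Z, np.flatnonzero(x)).astype(int)   # merge with current support
        b = restricted_min(T)                              # restricted minimization
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]                  # prune to best s-term approximation
        x[keep] = b[keep]
    return x
```

With the least-squares cost, `grad` reduces to the correlation step and `restricted_min` to a restricted pseudoinverse (e.g., a least-squares solve over the columns indexed by T), recovering CoSaMP.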
By replacing the general cost function f(x) with the least-squares cost ||y - Ax||_2^2, GraSP reverts to the familiar CoSaMP, summarized in the figure below, in parallel to the figure above.
Theoretical Guarantees
To guarantee the performance of this algorithm, we generalized the RIP to the Stable Restricted Hessian (SRH) and the Stable Restricted Linearization (SRL) properties. The SRH describes how the Hessian of the function behaves at any sparse point in the search space, characterizing the ratio of the largest to the smallest curvature around any sparse point, restricted to sparse directions. The SRL extends the guarantees to functions that are not necessarily differentiable. Both properties are used to bound the approximation error of the second-order Taylor expansion of the cost function, restricted to sparse canonical subspaces. In turn, this allows us to provide strong guarantees on the approximation performance of the algorithm.
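In symbols, the SRH condition can be sketched as follows, writing α_s and β_s for the restricted curvature bounds (the paper's exact notation and constants may differ):

```latex
\alpha_s \,\|\Delta\|_2^2 \;\le\; \Delta^{\top} \nabla^2 f(x)\, \Delta \;\le\; \beta_s \,\|\Delta\|_2^2
\qquad \text{whenever } \lvert \operatorname{supp}(x) \cup \operatorname{supp}(\Delta) \rvert \le s,
```

with the "condition number" μ_s = β_s / α_s required to stay bounded. For the least-squares cost, the Hessian is A^T A (up to scaling), and these curvature bounds reduce to the RIP constants of A, consistent with the equivalence noted next.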
For the standard least squares cost function, the SRH is identical to the RIP, and the guarantees provided are identical to the standard CS and CoSaMP guarantees. The interesting aspect, however, is that the cost function does not need to be convex in general but only when restricted to sparse inputs.
Sohail has put up a webpage for GraSP, where you can find code and examples to try.
[1] P. T. Boufounos, "Greedy sparse signal reconstruction from sign measurements," Proc. Asilomar Conf. on Signals, Systems and Computers, pp. 1305-1309, Pacific Grove, CA, November 1-4, 2009.

[2] S. Bahmani, B. Raj, and P. T. Boufounos, "Greedy Sparsity-Constrained Optimization," Journal of Machine Learning Research, v. 14, pp. 807-841, March 2013.

[3] S. Bahmani, P. T. Boufounos, and B. Raj, "Greedy Sparsity-Constrained Optimization," Proc. Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA, November 6-9, 2011.