Since the 1990s, sparsity has played an important role in several areas of statistics, machine learning, and signal processing, among other fields. The entire field of compressed sensing, which emerged half a decade ago, has developed around the idea of exploiting the intrinsic sparsity of measured signals, thus allowing a drastic reduction in the number of samples needed to reconstruct them. In statistics, the sparsity assumption was first successfully employed in nonparametric statistics, and it was later carried over to the parametric domain with techniques such as the Lasso, which imposes sparseness via an L1 convex relaxation.
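To make the L1 relaxation concrete, here is a minimal numerical sketch (not the method discussed in the talk): the Lasso solves min_b 0.5*||y - Xb||^2 + lam*||b||_1, and the example below implements it with plain coordinate descent and soft-thresholding in NumPy. The function name `lasso_cd`, the penalty level `lam`, and the toy data are all illustrative choices, not part of the abstract.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Illustrative Lasso solver: min_b 0.5*||y - Xb||^2 + lam*||b||_1
    via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # precomputed squared column norms
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r
            # soft-thresholding: drives small coefficients exactly to zero
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

# Toy demo (hypothetical data): a 10-dimensional model with only two
# nonzero coefficients is recovered from 50 noisy observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = np.zeros(10)
beta[[1, 4]] = [3.0, -2.0]
y = X @ beta + 0.1 * rng.standard_normal(50)
b_hat = lasso_cd(X, y, lam=5.0)
print(b_hat)
```

The L1 penalty yields a sparse estimate: coefficients whose correlation with the residual falls below the threshold `lam` are set exactly to zero, which is what makes the convex relaxation act as a variable selector.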
In this talk, I will present some recent results on sparse estimation. In particular, I will introduce a Lasso-type method called Sparseva, which, unlike the original Lasso, is easy to tune for the specific needs of prediction or variable selection. This technique is then extended to estimate general sparse rational models, such as Output-Error or Box-Jenkins structures.