# Van Dantzig Seminar

#### nationwide series of lectures in statistics


## Van Dantzig Seminar: 16 April 2015

#### Programme: (scroll down for titles and abstracts)

• 14:00 - 14:05 Opening
• 14:05 - 15:05 Sara van de Geer (ETH Zürich and Leiden University, Kloosterman professor)
• 15:05 - 15:25 Break
• 15:25 - 16:25 Gernot Müller (Universität Augsburg)
• 16:30 - 17:30 Reception
Location: Leiden University, Gorlaeus building, Room C06

## Titles and abstracts

• Sara van de Geer

Compressed sensing, sparsity and p-values

One of the problems in compressed sensing can be phrased as follows: suppose $$\beta^0 \in \mathbb{R}^p$$ is an unknown vector we want to recover from $$n \ll p$$ measurements $$y$$, where $$y = X \beta^0$$ and $$X$$ is an $$n \times p$$ matrix. The $$\ell_1$$-approach is to minimize $$\| \beta \|_1$$ subject to $$X \beta = y$$. This "works" if "most" of the entries of $$\beta^0$$ are exactly zero (sparseness) and $$X$$ satisfies the so-called null-space property on the support of $$\beta^0$$.

The statistical variant of the compressed sensing problem sketched above generally faces a few complications. First of all, we usually have noisy measurements, i.e. we observe $$Y = X \beta^0 + \epsilon$$ with unobservable noise $$\epsilon$$. Secondly, we usually do not believe that $$\beta^0$$ is sparse in the strong sense (many zeroes), but rather in the weak sense, where there are many non-zeroes but most of them are very small. Moreover, in applications we often cannot "design" the matrix $$X$$; it is simply given to us. The linear model may not be appropriate (for example, when the entries of $$Y$$ are binary). Finally, recovering $$\beta^0$$ exactly is not possible in the noisy situation; instead we aim at interval estimates or p-values.

In our talk, we will present sparsity-regularized estimators that are shown to trade off approximation error and estimation error (an example will be the so-called square-root Lasso). Our bounds have a learning-type flavour: if the model is wrong, one does as well as the best approximation within the model, up to a "small" error. Finally, we show a technique for establishing p-values.
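Both optimization problems mentioned in the abstract are standard convex programs, so a compact illustration is possible. The sketch below is not code from the talk; it uses cvxpy to solve the noiseless $$\ell_1$$-recovery problem and a square-root Lasso of the form $$\|Y - X b\|_2 / \sqrt{n} + \lambda \|b\|_1$$. The dimensions, the noise level, and the constant in the tuning parameter $$\lambda$$ are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the talk): noiseless l1 recovery
# ("basis pursuit") and the square-root Lasso, solved with cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, s = 50, 200, 5                  # n << p, s-sparse truth (assumed sizes)
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:s] = 1.0                       # "most" entries of beta0 exactly zero

# Noiseless compressed sensing: minimize ||b||_1 subject to X b = y.
y = X @ beta0
b = cp.Variable(p)
cp.Problem(cp.Minimize(cp.norm1(b)), [X @ b == y]).solve()
print("basis pursuit, max recovery error:", np.max(np.abs(b.value - beta0)))

# Noisy case: square-root Lasso, ||Y - X b||_2 / sqrt(n) + lam * ||b||_1.
Y = X @ beta0 + 0.5 * rng.standard_normal(n)
lam = 1.5 * np.sqrt(np.log(p) / n)    # theory-driven scale; constant is an assumption
b2 = cp.Variable(p)
objective = cp.norm2(Y - X @ b2) / np.sqrt(n) + lam * cp.norm1(b2)
cp.Problem(cp.Minimize(objective)).solve()
print("square-root Lasso, estimation error:", np.linalg.norm(b2.value - beta0))
```

A known attraction of the square-root objective, and one reason it appears in the abstract, is that the theory-driven choice $$\lambda \asymp \sqrt{\log p / n}$$ does not involve the unknown noise standard deviation, whereas the analogous tuning of the standard Lasso does.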