Sylvain Arlot, “Advanced Course on Statistics”

Monday February 14 2011, 14.00 – 16.00 Aula Bianchi

Tuesday February 15 2011, 9.00 – 11.00 Aula Fermi

Thursday February 17 2011, 14.00 – 16.00 Aula Fermi

Tuesday February 22 2011, 14.00 – 16.00 Aula Dini

Wednesday February 23 2011, 9.00 – 11.00 Aula Bianchi

Scuola Normale Superiore

SYLVAIN ARLOT
CNRS-INRIA and ENS, Paris

Advanced Course on Statistics

Lecture 1. (Monday February 14) Statistical learning

  • the statistical learning problem
  • examples: prediction, regression, classification, density estimation
  • estimators: definition, consistency, examples
  • universal learning rates and No Free Lunch Theorems [1]
  • the estimator selection paradigm, bias-variance decomposition of the risk (sketched after this list)
  • data-driven selection procedures and the unbiased risk estimation principle
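
As a point of reference for the last two items, the decomposition can be sketched as follows for least-squares estimators on a linear model; the notation (s for the regression function, s_m for its projection onto the model S_m, \hat{s}_m for the least-squares estimator) is assumed here rather than taken from the outline:

    \[
      \mathbb{E}\bigl[\|s - \hat{s}_m\|^2\bigr]
      \;=\; \|s - s_m\|^2 \;+\; \mathbb{E}\bigl[\|s_m - \hat{s}_m\|^2\bigr]
      \;=\; \text{(approximation error)} + \text{(estimation error)} .
    \]

The unbiased risk estimation principle then selects \(\widehat{m} \in \arg\min_m \mathrm{crit}(m)\) for a data-driven criterion satisfying \(\mathbb{E}[\mathrm{crit}(m)] \approx \mathbb{E}[\|s - \hat{s}_m\|^2]\) for every m.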

Lecture 2. (Tuesday February 15) Model selection for least-squares regression

  • ideal penalty, Mallows’ Cp (a formula sketch follows this list)
  • oracle inequality for Cp (i.e., non-asymptotic optimality of the corresponding model selection procedure), corresponding learning rates [2]
  • the variance estimation problem
  • minimal penalties and data-driven calibration of penalties: the slope heuristics [3,4]
  • algorithmic and other practical issues [5]
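
For reference, Mallows’ Cp as studied in [2,3,4] can be sketched as follows; the fixed-design least-squares notation (Y the observation vector, \hat{s}_m the least-squares estimator on a model of dimension D_m, \sigma^2 the noise variance, n the sample size) is assumed here:

    \[
      \widehat{m} \in \operatorname*{arg\,min}_{m}
      \left\{ \frac{1}{n}\|Y - \hat{s}_m\|^2 + \frac{2\sigma^2 D_m}{n} \right\} .
    \]

When \sigma^2 is unknown, the slope heuristics [3,4] estimates from the data a constant \widehat{\kappa} such that \widehat{\kappa} D_m / n is a minimal penalty, and then selects with twice that minimal penalty, \mathrm{pen}(m) = 2 \widehat{\kappa} D_m / n.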

Lecture 3. (Thursday February 17) Linear estimator selection for least-squares regression [6]

  • linear estimators: (kernel) ridge regression, smoothing splines, k-nearest neighbours, Nadaraya-Watson estimators
  • bias-variance decomposition of the risk
  • the linear estimator selection problem: CL penalty (sketched after this list)
  • oracle inequality for CL
  • data-driven calibration of penalties: a new light on the slope heuristics
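
For reference, the CL criterion mentioned above, for a family of linear estimators \hat{s}_\lambda = A_\lambda Y, can be sketched as follows (fixed-design least-squares notation with known noise variance \sigma^2 is assumed here):

    \[
      \widehat{\lambda} \in \operatorname*{arg\,min}_{\lambda}
      \left\{ \frac{1}{n}\|Y - A_\lambda Y\|^2
              + \frac{2\sigma^2 \operatorname{tr}(A_\lambda)}{n} \right\} ,
    \]

which reduces to Mallows’ Cp when A_\lambda is the orthogonal projection onto a D_m-dimensional model, since then \operatorname{tr}(A_\lambda) = D_m.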

Lecture 4. (Tuesday February 22) Resampling and model selection

  • regressograms in heteroscedastic regression: the penalty cannot be a function of the dimensionality of the models [7] 
  • resampling in statistics: general heuristics, the bootstrap, exchangeable weighted bootstrap [8]  
  • study of a case example: estimating the variance by resampling (a code sketch follows this list)
  • resampling penalties: why do they work for heteroscedastic regression? oracle inequality, comparison of the resampling weights [9]
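
The case example in the third item could look like the following minimal Python sketch; the function name bootstrap_variance and the choice of the sample mean as the statistic are illustrative assumptions, not taken from the lecture:

    import numpy as np

    def bootstrap_variance(x, statistic, n_boot=1000, seed=None):
        # Naive bootstrap estimate of the variance of statistic(x):
        # draw n_boot samples of size n with replacement from x,
        # recompute the statistic on each, and return the empirical
        # variance of the replicates.
        rng = np.random.default_rng(seed)
        n = len(x)
        replicates = np.empty(n_boot)
        for b in range(n_boot):
            resample = rng.choice(x, size=n, replace=True)
            replicates[b] = statistic(resample)
        return replicates.var(ddof=1)

    # Example: variance of the sample mean of n = 100 standard normal
    # observations (true value 1 / n = 0.01).
    x = np.random.default_rng(0).normal(size=100)
    print(bootstrap_variance(x, np.mean, n_boot=2000, seed=1))

Exchangeable weighted bootstrap variants [8] replace the multinomial resampling step above by other exchangeable weight vectors.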

Lecture 5. (Wednesday February 23) Cross-validation and model/estimator selection [10]

  • cross-validation: principle, main examples 
  • cross-validation for estimating the prediction risk: bias, variance (a V-fold sketch follows this list)
  • cross-validation for selecting among a family of estimators: main properties, how should the splits be chosen?  
  • illustration of the robustness of cross-validation: detecting changes in the mean of a signal with unknown and non-constant variance [11]  
  • correcting the bias of cross-validation: V-fold penalization, oracle inequality [12]
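
A minimal Python sketch of V-fold cross-validation for estimating the quadratic prediction risk, as referred to in the second item; the helper names fit and predict are illustrative assumptions, not part of the course material:

    import numpy as np

    def v_fold_cv_risk(x, y, fit, predict, v=5, seed=None):
        # V-fold cross-validation estimate of the quadratic prediction risk:
        # split the indices into V folds, hold out each fold once, train on
        # the rest, and average the squared errors on the held-out points.
        rng = np.random.default_rng(seed)
        n = len(y)
        folds = np.array_split(rng.permutation(n), v)
        squared_errors = []
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(n), test_idx)
            model = fit(x[train_idx], y[train_idx])
            squared_errors.append((y[test_idx] - predict(model, x[test_idx])) ** 2)
        return np.concatenate(squared_errors).mean()

    # Example: risk of a degree-3 polynomial least-squares fit on noisy data.
    x = np.linspace(0.0, 1.0, 200)
    y = np.sin(2 * np.pi * x) + 0.3 * np.random.default_rng(0).normal(size=200)
    print(v_fold_cv_risk(x, y,
                         fit=lambda xt, yt: np.polyfit(xt, yt, deg=3),
                         predict=np.polyval,
                         v=5, seed=1))

Selecting among a family of estimators then amounts to computing this estimate for each candidate and keeping the minimizer.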

References

[1] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31 of
Applications of Mathematics (New York). Springer-Verlag, New York, 1996.

[2] Pascal Massart. Concentration Inequalities and Model Selection, volume 1896 of Lecture Notes in Mathematics.
Springer, Berlin, 2007. Lectures from the 33rd Summer School on Probability Theory held in Saint-Flour, July 6-23, 2003.

[3] Lucien Birgé and Pascal Massart. Minimal penalties for Gaussian model selection. Probab. Theory Related Fields, 138(1-2):33-73, 2007.

[4] Sylvain Arlot and Pascal Massart. Data-driven calibration of penalties for least-squares regression. J. Mach. Learn. Res., 10:245-279 (electronic), 2009. http://jmlr.csail.mit.edu/papers/v10/arlot09a.html

[5] Jean-Patrick Baudry, Cathy Maugis, and Bertrand Michel. Slope Heuristics: Overview and Implementation.
Technical Report 7223, INRIA, 2010. http://hal.archives-ouvertes.fr/hal-00461639/en/

[6] Sylvain Arlot and Francis Bach. Data-driven calibration of linear estimators with minimal penalties. Proceedings of NIPS 2009. http://arxiv.org/abs/0909.1884

[7] Sylvain Arlot. Choosing a penalty for model selection in heteroscedastic regression. Preprint. 2010. http://arxiv.org/abs/0812.3141

[8] Bradley Efron and Robert J. Tibshirani. An Introduction to the Bootstrap, volume 57 of Monographs on
Statistics and Applied Probability. Chapman and Hall, New York, 1993.

[9] Sylvain Arlot. Model selection by resampling penalization. Electronic Journal of Statistics, 3:557-624 (electronic), 2009. http://dx.doi.org/10.1214/08-EJS196

[10] Sylvain Arlot and Alain Celisse. A survey of cross-validation procedures for model selection. Statist. Surv., 4:40-79, 2010. http://dx.doi.org/10.1214/09-SS054

[11] Sylvain Arlot and Alain Celisse. Segmentation of the mean of heteroscedastic data via cross-validation. Statistics and Computing, 2010. http://arxiv.org/abs/0902.3977

[12] Sylvain Arlot. V-fold cross-validation improved: V-fold penalization. Preprint. 2008. http://fr.arxiv.org/abs/0802.0566