High-Dimensional Statistical Learning (HDL)


This module provides a detailed overview of the mathematical foundations of modern statistical learning, presenting the theoretical basis and the conceptual tools needed to analyze and justify its algorithms. The emphasis is on problems involving large volumes of high-dimensional data, and on the dimension reduction techniques that make such problems tractable. The course includes detailed proofs of the main results, together with associated exercises.


Keywords

PAC (probably approximately correct) learning, random projection, PCA (principal component analysis), concentration inequalities, measures of statistical complexity


Prerequisites

The prerequisites for this course include previous coursework in linear algebra, multivariate calculus, basic probability (discrete and continuous), and statistics. Students are expected to be able to follow a rigorous proof. Previous coursework in convex analysis, information theory, and optimization theory would be helpful but is not required.


Course content

  • The PAC framework (probably approximately correct) for statistical learning
  • Measuring the complexity of a statistical learning problem
  • Dimension reduction
  • Sparsity and convex optimization for large-scale learning (time allowing)
  • Notion of algorithmic stability (time allowing)
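As a small illustration of the dimension reduction topic above, the following is a minimal sketch (not course material) of a random Gaussian projection in the spirit of the Johnson-Lindenstrauss lemma; the dimensions, seed, and variable names are illustrative choices, not taken from the course.

```python
import numpy as np

# Illustrative sketch: project n points from dimension d down to k with a
# random Gaussian matrix. By Johnson-Lindenstrauss-type concentration,
# pairwise distances are preserved up to a small multiplicative distortion
# with high probability.
rng = np.random.default_rng(0)

n, d, k = 50, 1000, 200          # n points, ambient dimension d, target dimension k
X = rng.standard_normal((n, d))  # synthetic data matrix

# Projection matrix with i.i.d. N(0, 1/k) entries.
P = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ P

# Compare squared distances before and after projection for one pair of points.
orig = np.linalg.norm(X[0] - X[1]) ** 2
proj = np.linalg.norm(Y[0] - Y[1]) ** 2
distortion = proj / orig
print(f"distortion ratio: {distortion:.3f}")  # typically close to 1
```

The fluctuation of the distortion ratio around 1 shrinks as the target dimension k grows, which is the quantitative content of the concentration inequalities listed among the keywords.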

Acquired skills

  • Understanding the links between complexity and overfitting
  • Knowing the mathematical tools to measure learning complexity
  • Understanding the statistical and algorithmic stakes of large-scale learning
  • Understanding dimension reduction tools for learning
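As a taste of the dimension reduction tools listed above, here is a hedged sketch (not official course material) of PCA computed via the SVD of a centered data matrix; the data, seed, and number of retained components are illustrative assumptions.

```python
import numpy as np

# Illustrative PCA sketch: principal directions are the right singular
# vectors of the centered data matrix.
rng = np.random.default_rng(1)
# Synthetic correlated data: 100 samples in 5 dimensions.
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 5))

Xc = X - X.mean(axis=0)                    # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]                        # top-2 principal directions
scores = Xc @ components.T                 # coordinates in the reduced space

# Fraction of total variance captured by the first two components.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
```

The explained-variance ratio quantifies the trade-off between dimension and information loss, one way to make precise the link between complexity and overfitting mentioned above.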


Teaching staff

Aline Roumy (course coordinator), Adrien Saumard, Maël Le Treust (guest lecturer)