Course ID ENS Lyon: Inverse problems and high dimension - MATH5212

Course ID ECL: S9 Advanced Tools for Learning: when Convexity meets Sparsity - S9 MD fo IM3.1

Course Description

Sparsity and convexity are ubiquitous notions in Machine Learning and Statistics. In this course, we study the mathematical foundations of some powerful methods based on convex relaxation: L1-regularisation techniques in Statistics and Signal Processing, and nuclear norm minimization in Matrix Completion. These approaches turn out to be Semi-Definite Programming (SDP) representable, and hence tractable in practice. The theoretical part of the course focuses on the guarantees of these algorithms under the sparsity assumption; the practical part presents the standard solvers for these learning problems.
Keywords: L1-regularisation; Matrix Completion; Semi-Definite Programming; Proximal methods;
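As a concrete taste of the proximal machinery behind L1-regularisation, here is a minimal NumPy sketch of ISTA for the Lasso; the design matrix, sparse vector, and regularisation level are illustrative assumptions, not course material:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    # Minimise (1/2)||A x - b||^2 + lam ||x||_1 by proximal gradient steps.
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy problem: 3-sparse ground truth, Gaussian design, no noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # estimated support
```

Replacing the plain proximal gradient step with Nesterov momentum gives FISTA, which improves the O(1/k) objective rate to O(1/k^2).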

"Nothing is more practical than a good theory." - V. Vapnik
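On the matrix side, the proximal operator of the nuclear norm soft-thresholds singular values; alternating it with re-imposing the observed entries gives a naive completion scheme in the spirit of soft-impute. The dimensions, sampling rate, and threshold below are toy assumptions, not a course algorithm:

```python
import numpy as np

def svt(M, t):
    # Proximal operator of t * (nuclear norm): soft-threshold singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def complete(M_obs, mask, t=0.5, n_iter=300):
    # Naive matrix completion: keep observed entries, shrink toward low rank.
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X = svt(np.where(mask, M_obs, X), t)
    return X

# Toy example: a rank-2 matrix with roughly 60% of its entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(M.shape) < 0.6
X = complete(M, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))   # relative recovery error
```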


Textbooks

  • [Giraud] Introduction to High-Dimensional Statistics, Christophe Giraud, Chapman and Hall/CRC.
  • [Wainwright] High-Dimensional Statistics: A Non-Asymptotic Viewpoint, Martin J. Wainwright, Cambridge University Press.
  • [Foucart] A Mathematical Introduction to Compressive Sensing, Simon Foucart and Holger Rauhut, Birkhäuser.
  • [Le Gall] Intégration, Probabilités et Processus Aléatoires, Jean-François Le Gall (lecture notes in French, PDF, 248 pages).

Course Organizers

Rémi and Yohann.

Projects

The project guidelines can be found in this document. The pairs ([reference paper], {student names}) are:
  • [1] El Messi, Jia, Jousseaume, Nguyen.
  • [5] Bennour, Laassel, S'Guiar, Zilali.
  • [6] Fouquet, Roche.
  • [7] Greatti, Milhac, Rehm.
  • [8] Benyachou, Fossey, Goutal.
  • [9] Cornu, Gaubil, Lerond, Mohand.
  • [10] Marlhens, Passard, Simoes.
  • [13] Davy, Gjorgjevski, Pak.
Based on the references:
  • [1] Beck, A., & Teboulle, M. (2009). A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2(1), 183–202.
  • [2] Ndiaye, E., Fercoq, O., Gramfort, A., & Salmon, J. (2017). Gap Safe Screening Rules for Sparsity Enforcing Penalties. Journal of Machine Learning Research.
  • [3] Kabanava, M., Kueng, R., & Rauhut, H. (2016). Stable low-rank matrix recovery via null space properties. Information and Inference, 5(4), 405–441.
  • [4] Denoyelle, Q., Duval, V., Peyré, G., & Soubies, E. (2020). The sliding Frank–Wolfe algorithm and its application to super-resolution microscopy. Inverse Problems, 36(1), 014001.
  • [5] Bouwmans, T., & Zahzah, E.-H. (2014). Robust PCA via Principal Component Pursuit: A review for a comparative evaluation in video surveillance. Computer Vision and Image Understanding, 122, 22–34.
  • [6] Yang, J., Wright, J., Huang, T., & Ma, Y. (2010). Image Super-Resolution via Sparse Representation. IEEE Transactions on Image Processing, 19(11), 2861–2873.
  • [7] Mairal, J., Elad, M., & Sapiro, G. (2008). Sparse Representation for Color Image Restoration. IEEE Transactions on Image Processing, 17(1), 53–69.
  • [8] Hoyer, P. O. (2004). Non-negative Matrix Factorization with Sparseness Constraints. Journal of Machine Learning Research, 5, 1457–1469.
  • [9] Jacob, L., Obozinski, G., & Vert, J.-P. (2009). Group lasso with overlap and graph lasso. In Proceedings of the 26th International Conference on Machine Learning (pp. 433–440).
  • [10] Loh, P.-L., & Wainwright, M. J. (2011). High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. In Advances in Neural Information Processing Systems (pp. 2726–2734).
  • [11] Berthet, Q., & Rigollet, P. (2013). Optimal detection of sparse principal components in high dimension. The Annals of Statistics, 41(4), 1780–1815.
  • [12] Koltchinskii, V., Tsybakov, A. B., & Lounici, K. (2011). Nuclear norm penalization and optimal rates for noisy low rank matrix completion. The Annals of Statistics, 39(5), 2302–2329.
  • [13] Two papers on Sketching:
    Chatalic, A., Gribonval, R., & Keriven, N. (2018). Large-Scale High-Dimensional Clustering with Fast Sketching. In ICASSP.
    Gribonval, R., Chatalic, A., Keriven, N., Schellekens, V., Jacques, L., & Schniter, P. (2020). Sketching Datasets for Large-Scale Learning (long version). arXiv preprint arXiv:2008.01839.

Class Time and Location

Winter quarter (January - March, 2022).
Lecture: Monday afternoons, 2:00 to 5:00 or 6:00 pm
Ecole Centrale Lyon, Building W1, Room 101 (second floor)

Contact Info

Schedule

Students involved | Date and time | Lecturer | Description | Materials and references
S9 MD fo IM3.1 (ECL students only) | Jan. 3, 2-6pm, room 105 | Yohann | Probability toolbox | [Giraud; pp. 17, 22, 24, 219, 221, 223], [Wainwright; Chap. 2]; Notes
S9 MD fo IM3.1 (ECL students only) | Jan. 10, 2-6pm, room 105 | Yohann | Optimization toolbox | [Foucart; App. B]; Notes
S9 MD fo IM3.1 and MATH5212 (full group) | Jan. 17 | Rémi and Yohann | Lecture 1 | [Foucart; Chap. 2 and 3]; Notes
S9 MD fo IM3.1 and MATH5212 (full group) | Jan. 24 (Room 14, building W1 bis) | Rémi | Lecture 2 | -
S9 MD fo IM3.1 (full group) | Jan. 31 | Rémi | Lecture 3 | [Foucart]
S9 MD fo IM3.1 and MATH5212 (full group) | Feb. 7 | Yohann | Lecture 4 | [Foucart]; Notes + [RMT RIP] + [Lower Bound]
S9 MD fo IM3.1 (ECL students only) | Feb. 14 | Yohann | Practical on Optimization (ISTA, CVXPY, OMP) | [Jupyter Notebook] optimization.ipynb
- | Feb. 21 | - | BREAK | -
S9 MD fo IM3.1 and MATH5212 (full group) | Feb. 28 | Rémi | Lecture 5 | -
S9 MD fo IM3.1 and MATH5212 (full group) | March 14 | Yohann | Lecture 6 | -
S9 MD fo IM3.1 and MATH5212 (full group) | March 21 | Rémi and Yohann (examiners) | Exam | -
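The Feb. 14 practical works with ISTA, CVXPY and OMP. As a preview of the greedy one, here is a self-contained NumPy sketch of Orthogonal Matching Pursuit; the dimensions, seed, and sparse vector are illustrative choices, not taken from the course notebook:

```python
import numpy as np

def omp(A, b, k):
    # Orthogonal Matching Pursuit: greedily select the column most
    # correlated with the residual, then refit on the support by least squares.
    residual = b.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy problem: unit-norm Gaussian dictionary, 3-sparse signal, no noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 128))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(128)
x_true[[5, 12, 60]] = [1.5, -2.0, 2.5]
b = A @ x_true
x_hat = omp(A, b, k=3)
print(sorted(np.flatnonzero(np.abs(x_hat) > 1e-8)))   # recovered support
```

Because the residual is orthogonal to the selected columns after each least-squares refit, OMP never picks the same atom twice.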