SGN-90206 International Doctoral Seminar in Signal Processing, 1-8 cr

Implementation SGN-90206 2016-03

Description

Learning sparse representations for image and signal modeling

The main goal of this course is to provide the student with an understanding of the most important aspects of the theory underlying sparse representations and, more generally, of sparsity as a form of regularization in learning problems. Students will have the opportunity to develop and understand the main algorithms for learning sparse models and computing sparse representations. These methods have wide applicability in computer science and will provide a useful background for the students' research. In particular, this course aims at:
- Presenting the most important aspects of the theory underlying sparse representations, in particular the sparse-coding and dictionary-learning problems.
- Illustrating the main algorithms for sparse coding and dictionary learning, with a particular emphasis on solutions of convex optimization problems that are widely encountered in engineering.
- Providing a broad overview of the applications involving sparse representations, in particular those in the imaging field.

Teaching

Period 5
Teaching methods
Responsible person: Alessandro Foi

Evaluation scale

The evaluation scale passed/failed will be used on the course.

Completion requirements

Active participation in the lectures (2 cr). Optional project work in which the participant applies the learned methods to a problem from his/her own field of research (2 cr).

Additional information about the implementation

In recent years, a large amount of multidisciplinary research has been conducted on sparse representations, i.e., models that describe signals/images as a linear combination of a few atoms belonging to a dictionary. Sparse representation techniques have lately seen exponential growth, and they now represent one of the leading tools for solving ill-posed inverse problems in diverse areas, including image/signal processing, statistics, machine learning, and computer vision. The course presents the basic theory of sparse representations and dictionary learning, and illustrates the most successful applications of sparse models in imaging and signal processing. Particular emphasis will be given to modern proximal methods for solving convex optimization problems such as ℓ1 regularization (e.g., BPDN, LASSO). Recent developments of sparse models will also be overviewed.

Detailed program:

Basics of linear orthonormal representations

Redundant representations and Sparse Coding (Minimum ℓ0 norm)
- Sparse coding w.r.t. redundant dictionaries. Greedy algorithms: Matching Pursuit, Orthogonal Matching Pursuit.
- An overview of theoretical guarantees and convergence results.

Convex Relaxation of the Sparse Coding Problem (Minimum ℓ1 norm)
- Sparse coding as a convex optimization problem, BPDN (LASSO), connections with ℓ0 solutions.
- Notes on other norms, visual intuition. Theoretical guarantees.
- Minimum ℓ1 sparse coding algorithms: Iterative Reweighted Least Squares, Proximal Methods, Iterative Soft Thresholding, ADMM.

Dictionary Learning
- Dictionary learning algorithms: Gradient Descent, MOD, K-SVD.

Sparsity in engineering applications
- Main problems involving sparsity as a regularization prior: Denoising, Inpainting, Super-resolution, Deblurring.
- Dictionary Learning and Sparse Coding for Classification and Anomaly Detection.
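To give a flavor of the greedy sparse coding covered in the program, here is a minimal numpy sketch of Orthogonal Matching Pursuit. It is an illustrative implementation, not course material; the toy dictionary and 2-sparse test signal are assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to approximate y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# Toy example: y is an exact 2-sparse combination of normalized random atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [2.0, -1.5]
y = D @ x_true
x_hat = omp(D, y, k=2)
```

By construction `x_hat` has at most k nonzeros, and the residual norm never exceeds that of the zero solution, since each step refits by least squares on a growing support.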
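Iterative Soft Thresholding, one of the proximal methods listed in the program, alternates a gradient step on the quadratic data term with elementwise soft thresholding (the proximal operator of the ℓ1 norm). A minimal sketch; the problem sizes and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: elementwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=300):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient D^T(Dx - y)
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - D.T @ (D @ x - y) / L, lam / L)
    return x

# Toy BPDN/LASSO instance with a normalized random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)
y = rng.standard_normal(30)
x_hat = ista(D, y, lam=0.5)
```

With step size 1/L the iteration monotonically decreases the objective, so starting from zero the final objective is at most 0.5*||y||^2; larger `lam` yields sparser solutions.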
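The MOD dictionary update mentioned in the program is a closed-form least-squares step: with the sparse codes X held fixed, the fit-optimal dictionary is D = Y X⁺ (the pseudoinverse solution), usually followed by rescaling atoms to unit norm. A toy sketch with random data; all sizes and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((10, 40))    # training signals as columns
X = rng.standard_normal((25, 40))    # current sparse codes (held fixed)
D0 = rng.standard_normal((10, 25))   # current dictionary estimate

# MOD update: least-squares optimal dictionary for the fixed codes X.
D1 = Y @ np.linalg.pinv(X)

# In practice the atoms are then rescaled to unit norm.
D1_normalized = D1 / np.linalg.norm(D1, axis=0)

err_before = np.linalg.norm(Y - D0 @ X)
err_after = np.linalg.norm(Y - D1 @ X)
# D1 minimizes ||Y - D X||_F over all D, so the fit error cannot increase.
```

In a full MOD algorithm this update alternates with a sparse coding step (e.g., OMP) until convergence; K-SVD instead updates atoms one at a time together with their coefficients.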


Learning material

Type: Book
Name: Sparse and Redundant Representations
Author: M. Elad
Language: English
Exam material: No

Type: Summary of lectures
Name: Sparse Representations: Theory and Applications
Author: B. Wohlberg
Additional information: Notes of the course held in 2013 at Politecnico di Milano.
Language: English
Exam material: No