0000005389 20S 2SWS SE Stochastic Methods for Data-Driven Applications

General Information
Stochastic Methods for Data-Driven Applications 
Summer semester 2020
Chair of Theoretical Information Technology (Prof. Boche)
(Contact information)
Allocations: 1 
Course Details
Stochastic methods based on random variables play a central role in data-driven fields such as signal processing and machine learning. In particular, concentration inequalities have become essential ingredients of these applications. In this seminar, examples of technical results as well as recent applications of selected stochastic methods are discussed, with particular interest in a deeper reflection on the proof methods used to establish these results.

Exemplary topics on the technical results:
• Concentration inequalities for scalar random variables, e.g., Chernoff's and Bernstein's tail bounds,
• Concentration inequalities for martingales, e.g., Azuma's inequality,
• Concentration inequalities for matrices,
• Mixing properties of Markov chains.
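As a small, self-contained illustration of the first topic (not part of the course materials; all parameter values below are chosen for demonstration only), the following sketch empirically checks Hoeffding's inequality, a classic Chernoff-type tail bound, for the sample mean of i.i.d. Bernoulli(1/2) variables:

```python
import numpy as np

# Hoeffding's bound for X_1, ..., X_n i.i.d. in [0, 1]:
#   P(mean - E[X] >= t) <= exp(-2 n t^2)
rng = np.random.default_rng(0)
n, t, trials = 100, 0.1, 10_000

samples = rng.integers(0, 2, size=(trials, n))   # Bernoulli(1/2) draws
means = samples.mean(axis=1)
empirical = np.mean(means - 0.5 >= t)            # empirical tail probability
bound = np.exp(-2 * n * t**2)                    # Hoeffding bound, exp(-2)

print(f"empirical tail: {empirical:.4f}, Hoeffding bound: {bound:.4f}")
```

The empirical tail probability stays well below the bound, which is the non-asymptotic guarantee these inequalities provide.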

Exemplary topics on the applications:
• Effects of noise in first-order optimization methods (e.g., gradient descent),
• Investigations on the generalization ability of supervised learning,
• Distributed algorithms and Markov chains,
• Community detection and clustering.
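A minimal sketch of the first application topic (illustrative only; the objective, step size, and noise level are assumptions, not from the course): gradient descent on the quadratic f(x) = x²/2 with additive zero-mean gradient noise. Averaging the iterates damps the noise, in line with what concentration inequalities predict for sums of random variables.

```python
import numpy as np

rng = np.random.default_rng(1)
x, step, noise_std, iters = 5.0, 0.1, 1.0, 2000

iterates = []
for _ in range(iters):
    grad = x + noise_std * rng.standard_normal()  # noisy gradient of 0.5 * x**2
    x -= step * grad
    iterates.append(x)

last = iterates[-1]                       # single iterate: fluctuates around 0
averaged = np.mean(iterates[iters // 2:]) # tail average: concentrates near 0
print(f"last iterate: {last:+.4f}, tail average: {averaged:+.4f}")
```

The tail average concentrates much more tightly around the minimizer x = 0 than any individual iterate does.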
Prerequisites: basic knowledge of analysis, linear algebra, and elementary stochastics, as well as basic knowledge of information theory and/or signal theory.
After successful completion of the module, participants are able:
1) to investigate the effect of randomness as a disturbance factor in data-driven algorithms and to propose countermeasures,
2) to recognize and discuss the potential of randomness as a factor that improves the performance of data-driven algorithms,
3) to become acquainted with a new mathematical field, and to prepare and present a scientific talk.

Seminar: presentations by the lecturers and students based on scientific publications.
To register for participation, you must identify yourself in TUMonline as a student.
R. Vershynin: “High-Dimensional Probability: An Introduction with Applications in Data Science”, Cambridge University Press, 2018.

J. Tropp: “An Introduction to Matrix Concentration Inequalities” (Foundations and Trends in Machine Learning), Now Publ. Inc., 2015.

T. Hastie, R. Tibshirani, and J. Friedman: “The Elements of Statistical Learning”, Springer-Verlag New York, 2009.

S. Boucheron, G. Lugosi, and P. Massart: “Concentration Inequalities: A Nonasymptotic Theory of Independence”, Oxford University Press, 2016.

M. Mitzenmacher and E. Upfal: “Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis”, Cambridge University Press, 2017.
Online information: e-learning course (Moodle)