ECE 543: Lecture Schedule
The schedule will be updated and revised as the course progresses. Links to handwritten notes ([scribbles]) and to the required reading from the lecture notes (📖) are indicated alongside each date. All video recordings are published in the ECE 543 channel on Illinois Media Space (Illinois login required).
Preliminaries
- Tue Jan 26 [scribbles]
Thu Jan 28 [scribbles]
📖 Ch. 1
- Introduction and administrivia
Goals of learning
- Tue Feb 2 [scribbles]
Thu Feb 4 [scribbles]
📖 Ch. 2
- Concentration inequalities
- Chernoff method and subgaussian random variables
- Hoeffding's inequality
- McDiarmid's inequality, bounded differences
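Hoeffding's inequality from this block can be sanity-checked numerically: for i.i.d. variables in [0, 1], the deviation probability of the empirical mean is at most 2·exp(−2nt²). A minimal Monte Carlo sketch (the sample size, deviation, and Bernoulli mean below are made-up illustration parameters, not course material):

```python
# Monte Carlo check of Hoeffding's inequality for Bernoulli(p) samples:
# P(|mean - p| >= t) <= 2 * exp(-2 * n * t**2) for variables in [0, 1].
import math
import random

random.seed(0)
n, t, p = 200, 0.1, 0.5          # sample size, deviation, Bernoulli mean (illustrative)
trials = 20000

deviations = 0
for _ in range(trials):
    mean = sum(random.random() < p for _ in range(n)) / n
    if abs(mean - p) >= t:
        deviations += 1

empirical = deviations / trials
hoeffding = 2 * math.exp(-2 * n * t ** 2)
print(f"empirical tail: {empirical:.4f}  Hoeffding bound: {hoeffding:.4f}")
```

The empirical tail probability comes out well below the bound, as expected (Hoeffding is not tight for this distribution).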
Basic theory
- Tue Feb 9 [scribbles]
Thu Feb 11 [scribbles]
📖 Ch. 5
- Formulation of the learning problem
- realizable case: concept and function learning
- Probably Approximately Correct (PAC) learning
- agnostic (model-free) learning
- consistency and uniform convergence of empirical means
- Empirical Risk Minimization
- Tue Feb 16 [scribbles]
Thu Feb 18 [scribbles]
Tue Feb 23 [scribbles]
📖 Ch. 6
- Empirical Risk Minimization: abstract risk bounds
- excess risk of ERM via uniform deviation
- bounding the uniform deviation via Rademacher averages, symmetrization
- structural properties of Rademacher averages, Finite Class Lemma
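The Finite Class Lemma from this block bounds the Rademacher average of N functions with values in [−1, 1] by √(2 log N / n); this can be checked by Monte Carlo on a small made-up class of threshold functions (a sketch, not course code):

```python
# Empirical Rademacher average of a finite class of N = 11 threshold functions
# on a fixed sample of n = 100 points, compared against the Finite Class Lemma
# bound sqrt(2 * log(N) / n) for functions with values in [-1, 1].
import math
import random

random.seed(1)
n = 100
xs = [random.random() for _ in range(n)]            # fixed sample
thresholds = [0.1 * k for k in range(11)]           # illustrative class
fvals = [[1.0 if x >= c else -1.0 for x in xs] for c in thresholds]

trials = 2000
total = 0.0
for _ in range(trials):
    sigma = [random.choice((-1.0, 1.0)) for _ in range(n)]
    total += max(sum(s * v for s, v in zip(sigma, f)) for f in fvals) / n
rademacher = total / trials

bound = math.sqrt(2 * math.log(len(fvals)) / n)
print(f"empirical Rademacher average: {rademacher:.3f}  Finite Class bound: {bound:.3f}")
```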
- Thu Feb 25 [scribbles]
Tue Mar 2 [scribbles]
Thu Mar 4 [scribbles]
📖 Ch. 7
- Vapnik-Chervonenkis classes
- shatter coefficients, VC dimension
- examples: intervals, half-spaces, axis-parallel rectangles
- Dudley classes determined by finite-dimensional function spaces
- growth of shatter coefficients: Sauer-Shelah lemma
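For the intervals example above (VC dimension 2), the shatter coefficients can be enumerated directly and compared with the Sauer-Shelah bound Σ_{i≤d} C(n, i). A small enumeration sketch:

```python
# Shatter coefficients of indicators of intervals [a, b] on the line,
# compared with the Sauer-Shelah bound sum_{i <= d} C(n, i) for d = 2.
from math import comb

def interval_dichotomies(n):
    """Count labelings of n ordered points realizable by indicators of intervals."""
    labelings = {tuple(0 for _ in range(n))}         # the empty interval
    for i in range(n):
        for j in range(i, n):
            labelings.add(tuple(1 if i <= k <= j else 0 for k in range(n)))
    return len(labelings)

for n in range(1, 8):
    sauer = sum(comb(n, i) for i in range(3))        # VC dimension d = 2
    print(n, interval_dichotomies(n), sauer)
```

Intervals meet the Sauer-Shelah bound with equality for every n, so the two printed columns agree.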
- Tue Mar 9 [scribbles]
Thu Mar 11 [scribbles]
Tue Mar 16 [scribbles]
Thu Mar 18 [scribbles]
Tue Mar 23 [scribbles]
Thu Mar 25 [scribbles]
Tue Mar 30 [scribbles]
Thu Apr 1 [scribbles]
📖 Ch. 4
📖 Ch. 8
- Binary classification
- linear discriminant rules and generalized linear discriminant rules
- risk bounds for combined classifiers via surrogate losses
- weighted linear combinations of classifiers, margin
- kernel machines, RKHS
- generalization bounds for AdaBoost via surrogate losses
- neural nets
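The surrogate-loss risk bounds in this block rest on the fact that convex surrogates dominate the 0-1 loss as functions of the margin m = y·f(x): the hinge loss max(0, 1 − m) (kernel machines) and the exponential loss exp(−m) (AdaBoost) both upper-bound 1{m ≤ 0}. A quick numerical check over a grid of margins:

```python
# Convex surrogate losses dominate the 0-1 loss in the margin m = y * f(x):
# hinge loss max(0, 1 - m) and exponential loss exp(-m) vs. the indicator 1{m <= 0}.
import math

def zero_one(m):  return 1.0 if m <= 0 else 0.0
def hinge(m):     return max(0.0, 1.0 - m)
def exp_loss(m):  return math.exp(-m)

margins = [i / 10 for i in range(-30, 31)]
assert all(zero_one(m) <= hinge(m) for m in margins)
assert all(zero_one(m) <= exp_loss(m) for m in margins)
print("hinge and exponential losses dominate the 0-1 loss on the grid")
```

This domination is what lets the expected surrogate loss upper-bound the classification risk in the bounds above.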
- Tue Apr 6 [scribbles]
📖 Ch. 9
- Regression with squared loss
- regression over a ball in RKHS
- regression over RKHS with additive regularization
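Regression over an RKHS with additive regularization is kernel ridge regression: by the representer theorem the solution is f(x) = Σᵢ αᵢ k(xᵢ, x) with α = (K + nλI)⁻¹ y. A minimal sketch with a Gaussian kernel (the data, bandwidth, and λ below are made up for illustration):

```python
# Kernel ridge regression with a Gaussian kernel: fit alpha = (K + n*lam*I)^{-1} y,
# then predict via f(x) = sum_i alpha_i k(x_i, x).  Illustrative data and parameters.
import numpy as np

def gaussian_kernel(X, Z, bandwidth=0.5):
    d2 = (X[:, None] - Z[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth ** 2))

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 3, size=40))
y = np.sin(2 * X) + 0.1 * rng.standard_normal(40)

lam = 1e-2
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + len(X) * lam * np.eye(len(X)), y)

X_test = np.linspace(0, 3, 7)
f_test = gaussian_kernel(X_test, X) @ alpha
print(np.round(f_test, 3))
```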
Advanced topics
- Thu Apr 8 [scribbles]
Thu Apr 15 [scribbles]
Tue Apr 20 [scribbles]
Thu Apr 22 [scribbles]
📖 Ch. 3
📖 Ch. 13
- Stability of learning algorithms
- learnability without uniform convergence
- generalization error, stability of learning algorithms
- stability of ERM under strong convexity
- stability of Stochastic Gradient Descent (SGD)
- convergence and optimality guarantees for SGD
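The SGD convergence guarantees in this block can be illustrated on a simple strongly convex objective F(w) = E[(w − X)²/2], whose minimizer is the mean of X; with the classical 1/t step size the iterates converge to it. A self-contained sketch (the constants are illustrative, not optimized):

```python
# SGD with step size 1/t on the strongly convex objective F(w) = E[(w - X)^2 / 2],
# X ~ N(mu, 1).  The stochastic gradient at w given sample x is (w - x), and the
# iterates converge to the minimizer mu.
import random

random.seed(0)
mu = 3.0                              # illustrative target mean
w = 0.0
for t in range(1, 20001):
    x = mu + random.gauss(0, 1)       # noisy sample
    grad = w - x                      # stochastic gradient of (w - x)^2 / 2
    w -= (1.0 / t) * grad
print(f"w after SGD: {w:.3f}  (minimizer: {mu})")
```

With this step size the iterate wₜ is exactly the running sample mean, which makes the O(1/√t) convergence transparent.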
- Tue Apr 27 [scribbles]
Thu Apr 29 [scribbles]
📖 Ch. 14
- Online learning
- online learning model: Forecaster vs. Adversary, strategies
- performance criteria: cumulative loss, comparators, regret
- regret bounds for online convex optimization: convex Lipschitz functions; strongly convex Lipschitz functions
- the online perceptron algorithm: sample complexity via regret bounds
- generalization ability of online learning algorithms: online-to-batch conversion, martingale decomposition
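The online perceptron entry above comes with the classical Novikoff mistake bound (R/γ)², where R bounds the norm of the examples and γ is the margin of a separating hyperplane. A sketch on made-up linearly separable 2-d data (here R ≤ √2 and γ ≥ 0.2/√2, so at most 100 mistakes):

```python
# Online perceptron on linearly separable data, with the Novikoff mistake
# bound (R / gamma)^2.  Synthetic 2-d data separated by x1 + x2 = 0 with a
# margin of at least 0.2 (illustrative choices).
import random

random.seed(0)

def make_point():
    while True:
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        s = x[0] + x[1]
        if abs(s) >= 0.2:                       # enforce the margin
            return x, (1 if s > 0 else -1)

w = [0.0, 0.0]
mistakes = 0
for _ in range(2000):
    x, y = make_point()
    if y * (w[0] * x[0] + w[1] * x[1]) <= 0:    # mistake: update
        w = [w[0] + y * x[0], w[1] + y * x[1]]
        mistakes += 1
print(f"perceptron mistakes: {mistakes}  (Novikoff bound: 100)")
```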