Computer Science: Machine Learning Theory

CSC 482A/581A, Fall 2019

Lectures: Mondays and Thursdays 11:30am - 12:50pm, HHB 116
Instructor: Nishant Mehta. Office hours: ECS 608, Tuesdays and Fridays, 4pm - 5pm.
TA: Hamid Shayestehmanesh. Email: shayesteh.<firstname>@gmail.com

Recommended textbooks:
(SSBD) Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press, 2014 — cited as "(SSBD)" in the schedule below.

**Graduate student project - due by 11:59pm PST, Saturday, December 14th**


Sketch of the course
Given an i.i.d. sample of data points with accompanying labels, how well can a learning algorithm predict the labels of future data? And how well can a learning algorithm perform if it instead must predict the label of each data point given all the previously observed labeled data points, where the sequence of data points may be devised by an adversary rather than drawn as an i.i.d. sample? The first question lies at the heart of statistical learning theory, while the second comes from the area of online learning.
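
One standard way to make these two questions precise (the notation here is illustrative, not taken from the course materials) is via the excess risk of a hypothesis learned from an i.i.d. sample, and via the regret of an online learner measured against the best fixed hypothesis in hindsight:

```latex
% Statistical learning: excess risk of the learned hypothesis \hat{h}_n over
% the best hypothesis in a class \mathcal{H}, for i.i.d. data (X_i, Y_i) ~ P
% and a loss function \ell.
\mathrm{ExcessRisk}(\hat{h}_n)
  \;=\; \mathbb{E}_{(X,Y) \sim P}\bigl[\ell(\hat{h}_n(X), Y)\bigr]
  \;-\; \min_{h \in \mathcal{H}} \mathbb{E}_{(X,Y) \sim P}\bigl[\ell(h(X), Y)\bigr]

% Online learning: regret over T rounds against the best fixed hypothesis,
% with no probabilistic assumptions on the sequence (x_t, y_t).
\mathrm{Regret}_T
  \;=\; \sum_{t=1}^{T} \ell(\hat{y}_t, y_t)
  \;-\; \min_{h \in \mathcal{H}} \sum_{t=1}^{T} \ell(h(x_t), y_t)
```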

In this course we will look at core techniques for analyzing the performance of machine learning algorithms on statistical learning problems, as well as the design of good machine learning methods. We will also look at variants of the basic learning problem, potentially including learning from data that has been corrupted by an adversary (which requires noise-tolerant learning), semi-supervised learning, and active learning. A smaller but significant component of this course will focus on various online learning settings, including prediction with expert advice and multi-armed bandit problems. We will study radically different tools for analyzing online learning algorithms, and, if there is time, we will also look at recent connections that unify the analysis of statistical (i.i.d.) learning and online learning algorithms.
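
To give a concrete feel for the online setting, here is a minimal sketch (an illustration, not taken from the course notes) of decision-theoretic online learning with the Hedge / exponential-weights strategy, one of the algorithms on the schedule below; the loss matrix and the learning rate `eta` are arbitrary illustrative choices.

```python
import numpy as np

def hedge(losses, eta=0.5):
    """Run the Hedge (exponential weights) strategy on a T x K matrix of
    expert losses in [0, 1]; return the learner's expected loss per round."""
    T, K = losses.shape
    weights = np.ones(K)                        # uniform prior over the K experts
    learner_losses = []
    for t in range(T):
        p = weights / weights.sum()             # probability distribution over experts
        learner_losses.append(p @ losses[t])    # learner's expected loss this round
        weights *= np.exp(-eta * losses[t])     # exponential-weights update
    return np.array(learner_losses)

# Toy example: 3 experts, 100 rounds of arbitrary losses in [0, 1].
rng = np.random.default_rng(0)
losses = rng.random((100, 3))
learner_total = hedge(losses).sum()
best_expert_total = losses.sum(axis=0).min()
print(f"learner: {learner_total:.2f}, best expert in hindsight: {best_expert_total:.2f}")
```

With a learning rate on the order of sqrt(ln K / T), the gap between the two printed totals (the learner's regret) grows only like sqrt(T ln K); deriving and refining bounds of this kind is one of the themes of the online learning portion of the course.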

In the schedule below, any information about future lectures is just a rough guide and might change.

Lectures
| Date | Topics | Lecture Notes and Problem Sets | Additional reading |
|------|--------|--------------------------------|--------------------|
| 9/5 | Course overview; Introduction; Mistake bound model | Lecture 1 | |
| 9/9 | Halving algorithm; Learning linear separators | Lecture 2 | |
| 9/12 | Perceptron; Consistency model; PAC learning | Lectures 3 & 4 | (SSBD): Section 3.1 |
| 9/16 | Connections between learning models | | Intro to statistical learning theory (Boucheron, Bousquet, Lugosi); (SSBD): Chapter 5 for No-Free-Lunch theorem |
| 9/19 | Agnostic learning; Error decompositions; Concentration inequalities; Uniform convergence for finite classes | Lectures 5 & 6; Problem Set 1 | (SSBD): Chapter 4 |
| 9/23 | Growth function; Uniform convergence for infinite classes | | |
| 9/26 | VC dimension; Sauer's Lemma; Uniform convergence for infinite classes under the realizability assumption | Lectures 7 & 8 | |
| 9/30 | VC dimension; Sauer's Lemma | | |
| 10/3 | Uniform convergence for infinite classes under the realizability assumption; Lower bounds based on the VC dimension | Lecture 9 | |
| 10/7 | Bounded differences inequality; Rademacher complexity; Bounds based on Rademacher complexity | Lectures 10, 11, & 12 | |
| 10/10 | Bounds based on Rademacher complexity; Connection to VC dimension | Problem Set 2 | |
| 10/17 | Properties of Rademacher complexity | | |
| 10/21 | Support vector machines; Margin bounds | Lecture 13 | |
| 10/24 | Weak and strong learnability; Boosting; Training error bound for AdaBoost | Lecture 14; Problem Set 3 | Rob Schapire's lecture notes; A short introduction to boosting (Freund and Schapire) |
| 10/28 | Compression schemes; Generalization error bounds for AdaBoost; Proof that weak learnability implies strong learnability | Lecture 15 | |
| 10/31 | Margin bounds for AdaBoost | Lecture 16 | |
| 11/4 | Decision-theoretic online learning; Hedge; Analysis of Hedge | Lecture 17 | |
| 11/7 | Game theory; Connection to AdaBoost | Lectures 18 & 19 | |
| 11/14 | Game theory; Connection to AdaBoost | | |
| 11/18 | Prediction with expert advice; (maybe) Exp-concave loss functions | Lecture 20; Problem Set 4 | |
| 11/21 | Stochastic multi-armed bandits; UCB | | Shivani Agarwal's lecture notes |
| 11/25 | Non-stochastic multi-armed bandits; EXP3 | Lecture 22 | Sébastien Bubeck and Nicolò Cesa-Bianchi's bandits monograph |
| 11/28 | Non-stochastic multi-armed bandits; EXP3 | | |
| 12/2 | Project presentations | | |