Machine Learning Theory

CSC 482A/581A, Fall 2021

Lectures: Tuesdays, Wednesdays, and Fridays 3:30pm - 4:20pm, MacLaurin Building D116
Instructor: Nishant Mehta. Office hours: ECS 608, Wednesdays and Fridays, 4:30pm - 5:30pm.
TA: Mahdi Hajiabadi. Email: <firstinitial><lastname>@uvic

Recommended textbooks (abbreviations used in the schedule below):

- (SSBD) Shai Shalev-Shwartz and Shai Ben-David, *Understanding Machine Learning: From Theory to Algorithms*. Cambridge University Press, 2014.
- (CBL) Nicolò Cesa-Bianchi and Gábor Lugosi, *Prediction, Learning, and Games*. Cambridge University Press, 2006.

**Graduate student project** - due by 11:59pm PST, Thursday, December 16th.
Any late submission (even one minute late) will incur a nontrivial penalty.


Sketch of the course
Given an i.i.d. sample of data points with accompanying labels, how well can a learning algorithm predict the labels of future data? And how well can a learning algorithm perform if it instead predicts the label of each data point in sequence, given all previously observed labeled data points, where the sequence may be devised by an adversary rather than drawn i.i.d.? The first question lies at the heart of statistical learning theory, while the second comes from the area of online learning.
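
To make the second (online, adversarial) setting concrete, here is a minimal sketch of the Halving algorithm from the mistake bound model, which appears early in the schedule below. It is an illustrative Python sketch under stated assumptions (binary labels, a finite hypothesis class containing the true labeler); the function names are ours, not taken from the course notes.

```python
def halving_algorithm(hypotheses, stream):
    """Minimal sketch of the Halving algorithm (mistake bound model).

    hypotheses: finite list of functions h(x) -> {0, 1}, assumed to
        contain the true labeler (realizability).
    stream: iterable of (x, y) pairs, possibly adversarially ordered.

    Each mistake eliminates at least half of the version space, so the
    number of mistakes is at most log2(len(hypotheses)).
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        prediction = int(2 * votes >= len(version_space))  # majority vote
        if prediction != y:
            mistakes += 1
        # keep only hypotheses consistent with the revealed label
        version_space = [h for h in version_space if h(x) == y]
    return mistakes
```

For instance, with a class of 1024 hypotheses, the algorithm makes at most 10 mistakes on any realizable sequence, no matter how the adversary orders the points.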

In this course we will study core techniques for analyzing the performance of machine learning algorithms on statistical learning problems, as well as the design of good machine learning methods. We will also look at variants of the standard learning problem, potentially including learning from data corrupted by an adversary (which requires noise-tolerant learning), semi-supervised learning, and active learning. A smaller but significant component of the course will focus on various online learning settings, including prediction with expert advice and multi-armed bandit problems. Online learning algorithms call for radically different analysis tools, and, if time permits, we will also look at recent connections that unify the analysis of statistical (i.i.d.) learning and online learning algorithms.
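
As a taste of the online learning component, here is a minimal sketch of Hedge (exponential weights) for decision-theoretic online learning, covered on 10/29 in the schedule. It assumes losses in [0, 1]; the function name and NumPy-based setup are illustrative choices, not taken from the course materials. The tuning eta = sqrt(8 ln K / T) is the standard one from CBL, Chapter 2, which gives regret at most sqrt((T/2) ln K) against the best expert.

```python
import numpy as np

def hedge(losses, eta):
    """Hedge (exponential weights) over K experts for T rounds.

    losses: array of shape (T, K) with entries in [0, 1], where
        losses[t, i] is the loss of expert i in round t.
    Returns the algorithm's expected cumulative loss.
    """
    T, K = losses.shape
    log_weights = np.zeros(K)               # log domain for numerical stability
    total_loss = 0.0
    for t in range(T):
        w = np.exp(log_weights - log_weights.max())
        p = w / w.sum()                     # distribution over experts
        total_loss += float(p @ losses[t])  # expected loss this round
        log_weights -= eta * losses[t]      # multiplicative weight update
    return total_loss

# Example: 5 experts, 1000 rounds of random losses.
rng = np.random.default_rng(0)
L = rng.random((1000, 5))
eta = np.sqrt(8 * np.log(5) / 1000)         # standard tuning (CBL, Chapter 2)
print(hedge(L, eta) - L.sum(axis=0).min())  # regret vs. the best expert
```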

In the schedule below, any information about future lectures is just a rough guide and might change.

Lectures
| Date  | Topics | Lecture Notes | Additional reading |
|-------|--------|---------------|--------------------|
| 9/8   | Course overview; Introduction | Lecture 1 | |
| 9/10  | Mistake bound model; Halving algorithm | Lecture 2 | |
| 9/14  | Learning linear separators | Lecture 3 | |
| 9/15  | Perceptron | Lecture 4 | |
| 9/17  | Consistency model | Lecture 5 | SSBD, Section 3.1 |
| 9/21  | PAC learning; Connections between learning models | Lectures 6 & 7 | Introduction to statistical learning theory (Boucheron, Bousquet, Lugosi); SSBD, Chapter 5 for the No-Free-Lunch theorem |
| 9/22  | Connections between learning models continued | | SSBD, Chapter 4 |
| 9/24  | Agnostic learning; Error decompositions | Lectures 8 & 9 | |
| 9/28  | Concentration inequalities; Uniform convergence for finite classes | | |
| 9/29  | Growth function; Uniform convergence for infinite classes | Lectures 10 & 11 | |
| 10/1  | Uniform convergence for infinite classes continued | | |
| 10/5  | VC dimension | Lectures 12 & 13 | |
| 10/6  | Sauer's Lemma; Uniform convergence for infinite classes under the realizability assumption | | |
| 10/8  | Compression schemes | Lecture 14 | SSBD, Chapter 30 |
| 10/12 | Bounded differences inequality; Rademacher complexity | Lectures 15–18 | |
| 10/13 | Bounds based on Rademacher complexity | | |
| 10/15 | Connection to VC dimension | | |
| 10/19 | Properties of Rademacher complexity | | |
| 10/20 | Support vector machines | Lectures 19 & 20 | |
| 10/22 | Margin bounds | | |
| 10/26 | Margin bounds continued; Littlestone dimension | | SSBD, Section 21.1.1 |
| 10/27 | Littlestone dimension continued | | |
| 10/29 | Decision-theoretic online learning; Hedge | Lectures 23 & 24 | |
| 11/2  | Analysis of Hedge | | |
| 11/3  | Prediction with expert advice | Lectures 25 & 26 | |
| 11/5  | Prediction with expert advice and exp-concave losses | | |
| 11/9  | Game theory | Lecture 27 | |
| 11/16 | Stochastic multi-armed bandits; UCB | | Shivani Agarwal's lecture notes |
| 11/17 | Analysis of UCB | | |
| 11/19 | Non-stochastic multi-armed bandits; EXP3 | Lecture 30 | Sébastien Bubeck and Nicolò Cesa-Bianchi's bandits monograph |
| 11/23 | Non-stochastic multi-armed bandits; EXP3 continued | | |
| 11/24 | Non-stochastic multi-armed bandits; EXP3 continued | | |
| 11/26 | Extra topic! Online portfolio selection | | CBL, Chapter 10; Kalai and Vempala (2002) |
| 11/30 | Project presentations | | |
| 12/1  | Project presentations | | |
| 12/3  | Project presentations | | |