Special Course in Machine Learning: Uncertainty in Machine Learning
Machine learning models make errors, but the harm from these occasional errors can be reduced if the models themselves report how uncertain they are. For example, a self-driving car can slow down whenever its perception system reports high uncertainty. However, getting models to report well-calibrated uncertainty is hard, particularly when the model encounters situations that differ, even slightly, from those seen during training. In this course, we will learn about the theory and practice of uncertainty in ML.
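As a small illustration of what "well-calibrated" means: among all cases that a classifier predicts with confidence around p, roughly a fraction p should actually turn out to be positive. The sketch below (a hypothetical NumPy example; the function name and equal-width binning are illustrative choices, not taken from the course materials) estimates the expected calibration error (ECE) of a binary classifier, one of the quantities studied in the calibration part of the course.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Average gap between predicted confidence and observed accuracy over confidence bins."""
    probs = np.asarray(probs, dtype=float)    # predicted probability of the positive class
    labels = np.asarray(labels, dtype=float)  # true labels, 0 or 1
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs > lo) & (probs <= hi) if lo > 0 else (probs <= hi)
        if not in_bin.any():
            continue
        avg_conf = probs[in_bin].mean()   # mean predicted probability in the bin
        frac_pos = labels[in_bin].mean()  # observed frequency of the positive class
        ece += in_bin.mean() * abs(avg_conf - frac_pos)
    return ece

# A perfectly calibrated model that predicts 0.8 is correct about 80% of the time.
print(expected_calibration_error([0.9, 0.8, 0.7, 0.3], [1, 1, 0, 0]))
```

A well-calibrated model has an ECE close to zero; note that calibration alone does not guarantee accurate predictions.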
Prerequisites
This course requires a strong background in machine learning, neural networks, and probability theory.
Seminar time and location
- Mondays, 14:15-16:00, online. Log in to this course homepage to see the links for online participation in the seminar.
Topics by week
- Week 1 (Feb 21): Introduction
- Weeks 2-4: Classifier calibration (mostly based on Flach et al.)
- Weeks 5-7: Aleatoric and epistemic uncertainty (mostly based on Hüllermeier & Waegeman)
- Weeks 8-11: Practical uncertainty in neural networks (mostly based on Tran et al.)
- Weeks 12-15: Team project
Organization
The course consists of seminars, tests, and a project. In the seminars, we follow the material and discuss it. The material consists mostly of videos that we watch together, stopping frequently to discuss. Sometimes it is necessary to read or watch some material before the seminar in order to come prepared. At the end of each seminar, there is a short test.
Students who cannot attend in person can participate in the discussions and tests fully online via Zoom (we start fully online, but might switch to hybrid meetings during the semester).
The last 4 weeks are reserved for a team project. The idea of the project is to implement some of the concepts covered during the course.
To pass the course, one has to:
- contribute to selecting the material for one seminar;
- contribute to preparing the test for one seminar and score at least 70% in all other tests;
- participate in at least 70% of seminars;
- participate in a team project.
Materials
- a lecture by A. Amini, Lecture 7 "Evidential Deep Learning and Uncertainty" in the MIT course 6.S191 Introduction to Deep Learning
- the survey paper "Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods" by E. Hüllermeier & W. Waegeman
- slides of the WUML 2020 tutorial "Uncertainty in Machine Learning" by E. Hüllermeier, with accompanying videos (part 1 and part 2)
- slides of the course "Uncertainty in AI and Machine Learning" by E. Hüllermeier
- ECML-PKDD 2020 tutorial "Classifier calibration" by P. Flach, M. Perello Nieto, H. Song, M. Kull, T. Silva Filho.
- NeurIPS 2020 tutorial "Practical Uncertainty Estimation & Out-of-Distribution Robustness in Deep Learning" by D. Tran, J. Snoek, B. Lakshminarayanan
Contacts
Meelis Kull (meelis.kull@ut.ee)