Lectures
- Given by Mark, mark@tartunlp.ai
- Tuesdays, 10:15-12:00
- Open to anyone interested
- Recordings and slides are made available after each lecture (see the table below)
- General link to the lectures: Panopto
| # | Date | Topic | Materials |
|---|------|-------|-----------|
| 1 | Feb 8 | Motivation and overview | slides, video |
| 2 | Feb 16 | Language models: sequential vs. masked LM, probabilistic / neural, fixed / contextual embeddings | slides, video |
| 3 | Feb 23 | Attention: encoder-decoder sequence-to-sequence, attention mechanism | slides, video |
| 4 | Mar 1 | Self-attention: the Transformer as introduced for NMT/sequence-to-sequence, incl. self-attention, multiple attention heads and layers, discussion of why it works | slides, video |
| 5 | Mar 8 | Transformers for NLP | slides |
| 6 | Mar 15 | Transformers for images and sound | slides, video |
| 7 | Mar 22 | Transformers for bio and health data, time series | slides, video |
| 8 | Mar 29 | Output generation and evaluation | slides, video |
| 9 | Apr 12 | Neural machine translation | slides, video |
| 10 | Apr 26 | Statistical machine translation | slides |