Lectures
- Given by Mark Fishel (fishel@ut.ee)
- Tuesdays at 10:15-12:00
- Open to anyone interested
- Online in Zoom (Meeting ID: 988 8168 4147, passcode: ati)
- Hopefully we can switch to on-site lectures in March or later, but for the time being they will be held purely online :-(
- Recordings and slides available after the lecture (see table below)
| # | Date | Topic |
|---|--------|-------|
| 1 | Feb 9 | Motivation and overview (slides, video) |
| 2 | Feb 16 | Language models (slides, video): sequential vs. masked LM, probabilistic / neural, fixed / contextual embeddings |
| 3 | Feb 23 | Attention (slides, video): encoder-decoder sequence-to-sequence, attention mechanism |
| 4 | Mar 2 | Self-attention (slides, video): the Transformer as introduced for NMT/sequence-to-sequence, incl. self-attention, multiple attention heads and layers, discussion of why it works |
| 5 | Mar 9 | Transformers for NLP (slides) |
| 6 | Mar 16 | Transformers for images and sound (slides, video) |
| 7 | Mar 23 | no lecture |
| 8 | Mar 30 | Transformers for bio and health data, time series (slides, video) |
| 9 | Apr 6 | Output generation and evaluation (slides, video) |
| 10 | Apr 13 | no lecture |
| 11 | Apr 20 | Neural machine translation (slides, video) |
| 12 | Apr 27 | Statistical MT (slides) |
| 13 | May 4 | Seminar |
| 14 | May 11 | MT in practice |
| 15 | May 18 | Past, present and future of MT |
| 16 | May 25 | Project poster session |