Timetable
You can book when to present from this sheet. You can book a time even if you haven't selected or found a paper yet. The earlier you book, the better you can manage your schedule.
06.09 Week 0: Kick-off seminar
Introduction, organization of the seminar, questions.
slides | recording
13.09 Week 1: Explanation in artificial intelligence: Insights from the social sciences.
presented by Maksym Del
feedback | test | recording
20.09 Week 2: A survey on explainable artificial intelligence (XAI): Toward medical XAI
presented by Giacomo Magnifico
slides | feedback | test | recording
27.09 Week 3: Does the chimpanzee have a theory of mind?
presented by Nicholas Sujecki
feedback | test | slides | recording
04.10 Week 4: Machine theory of mind (International Conference on Machine Learning)
presented by Ondrej Sevcik
feedback | test | recording
11.10 Week 5: No presentation
18.10 Week 6: CX-ToM: Counterfactual explanations with theory-of-mind for enhancing human trust in image recognition models.
presented by Ekaterina Sedykh
feedback | test | recording
25.10 Week 7: Metrics for Explainable AI: Challenges and Prospects
presented by Roman Karpenko
feedback | test | recording
01.11 Week 8: Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations
presented by Braian O. Dias
feedback | test | slides | recording
08.11 Week 9: Introduction to SHAP and Counterfactual Shapley Additive Explanations.
presented by Marharyta Domnich
slides | colab notebook 1 | colab notebook 2 | feedback | test | recording
15.11 Week 10: What does LIME really see in images?
presented by Tõnis Hendrik Hlebnikov
feedback | test | slides | recording
22.11 Week 11: Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning
presented by Chan Wai Tik
feedback | test | recording
29.11 Week 12: book - A Multidimensional Conception and Measure of Human-Robot Trust (Trust in Human-Robot Interaction); paper - What Does it Mean to Trust a Robot? Steps Toward a Multidimensional Measure of Trust
presented by Carolin Lüübek
feedback | test | recording
06.12 Week 13: What is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic roBOT (ABOT) Database
presented by Hain Zuppur
feedback | test
Extra papers:
- Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods.
- OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms.
- "Why Should You Trust My Explanation?" Understanding Uncertainty in LIME Explanations, 2019
- Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
- A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle.
- Contrastive explanation: a structural-model approach.
- On the role of knowledge graphs in explainable AI.
- Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence.
- One Explanation is Not Enough: Structured Attention Graphs for Image Classification.
- Axiomatic Attribution for Deep Networks (the founding Integrated Gradients paper).
- Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization.
- Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks.
- XAI tutorials: https://github.com/flecue/xai-aaai2022