Each student is encouraged to present a paper of her/his own interest. Just drop us an email so we can make sure the paper is relevant to the seminar. If you do not have a favourite paper in mind, you can select one from the list below.
Articles of Special Interest
Here are a few papers directly related to our ongoing projects that we would like to have reviewed as soon as possible.
- Theory of Minds: Understanding Behavior in Groups through Inverse Planning by Michael Shum et al. 2019.
- Preferences Implicit in the State of the World by Shah et al. 2019.
- Modeling the Neural Mechanisms of Core Object Recognition
- Mental Labour
- Simulating mirror neurons
- The Neural Basis of Timing: Distributed Mechanisms for Diverse Functions
- Open-ended Learning in Symmetric Zero-sum Games by Balduzzi et al. 2019.
- InfoBot: Transfer and Exploration via the Information Bottleneck by Goyal et al. 2019.
- The Hanabi Challenge: A New Frontier for AI Research by Bard et al. 2019.
- Learning Plannable Representations with Causal InfoGAN by Kurutach et al. 2018.
- Counterfactual Multi-Agent Policy Gradients by Foerster et al. 2017.
- Can reinforcement learning explain the development of causal inference in multisensory integration? by Weisswange et al. 2009.
- Growing a social brain by Atzil et al. 2018.
- Neural dynamics at successive stages of the ventral visual stream are consistent with hierarchical error signals by Elias B. Issa, Charles F. Cadieu and James J. DiCarlo, April 2018
- Predicting visual stimuli on the basis of activity in auditory cortices by Kaspar Meyer et al., 2010
- Canonical Microcircuits for Predictive Coding by Andre M. Bastos et al., 2012
- Bayesian Integration in Sensorimotor Learning by Körding & Wolpert, 2004
An empirical paper showing how people implicitly perform Bayesian statistics in movement control. (Computational neuroscience)
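To make the Körding & Wolpert entry above concrete: optimally fusing a Gaussian prior with a noisy Gaussian observation reduces to a precision-weighted average. A minimal sketch (the function name and numbers are illustrative, not taken from the paper):

```python
import numpy as np

def bayes_integrate(prior_mean, prior_var, obs, obs_var):
    """Fuse a Gaussian prior with a Gaussian observation.

    The posterior mean is the precision-weighted average of the prior
    mean and the observation; the posterior variance is the inverse of
    the summed precisions.
    """
    w = prior_var / (prior_var + obs_var)  # weight given to the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    return post_mean, post_var

# In the spirit of the paper's task: a lateral shift drawn from a prior
# N(1.0, 0.5**2), observed under sensory noise of equal variance.
mean, var = bayes_integrate(prior_mean=1.0, prior_var=0.25, obs=2.0, obs_var=0.25)
```

With equal prior and sensory variances, the estimate lands halfway between prior mean and observation, which is the qualitative signature the paper reports in human pointing behaviour.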
Neuroscience and Deep Learning
- See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion by Fazeli et al. 2019.
- Towards deep learning with segregated dendrites by Jordan Guerguiev, Timothy P. Lillicrap and Blake A. Richards, 2017
- Searching for Principles of Brain Computation by Wolfgang Maass, 2016
- Learning in cortical networks through error back-propagation by James C. R. Whittington and Rafal Bogacz, 2015
We analyse relationships between the back-propagation algorithm and the predictive coding model of information processing in the cortex.
- Vector-based navigation using grid-like representations in artificial agents by Banino et al. 2018.
- Toward an Integration of Deep Learning and Neuroscience by Adam H. Marblestone, Greg Wayne and Konrad P. Kording, 2016.
- The Successor Representation: Its Computational Logic and Neural Substrates by Samuel J. Gershman.
- Prefrontal cortex as a meta-reinforcement learning system by Jane X. Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Demis Hassabis and Matthew Botvinick.
- Training Neural Networks with Local Error Signals by Nøkland et al. 2019.
- Disentangling causal webs in the brain using functional magnetic resonance imaging: A review of current approaches by Natalia Z. Bielczyk, Sebo Uithol, Tim van Mourik, Paul Anderson, Jeffrey C. Glennon and Jan K. Buitelaar, June 2018.
- A cerebellar mechanism for learning prior distributions of time intervals by Devika Narain, Evan D. Remington, Chris I. De Zeeuw and Mehrdad Jazayeri, 2017
- Flexible timing by temporal scaling of cortical responses by Jing Wang, Devika Narain, Eghbal A. Hosseini and Mehrdad Jazayeri, 2017
- Dendritic error backpropagation in deep cortical microcircuits by João Sacramento, Rui Ponte Costa, Yoshua Bengio and Walter Senn, 2017
- Spiking neurons can discover predictive features by aggregate-label learning by Robert Gütig, 2016 in Science
- Cortical Learning via Prediction by C. Papadimitriou and S. Vempala, 2015
- The Inevitability of Probability: Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback by A. Emin Orhan and Wei Ji Ma, 2016
- Noise as a Resource for Computation and Learning in Networks of Spiking Neurons by Wolfgang Maass, 2014
- Dynamical models of cortical circuits by Fred Wolf et al., 2014
- Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights From the Successes and Failures of Connectionist Models of Learning and Memory by McClelland et al., 1995
A classic paper on the dual-process approach to memory (hippocampus, cortex, their different computations and different time-courses for memory processing). It is a bit more complex and maybe not so easy to read, but it is one of the foundations of current neuroscientific and computational thinking about memory. (Cognitive science, neuroscience, computational neuroscience)
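For the successor-representation entry by Gershman above, the core computation is a TD-style update of expected discounted future state occupancies. A minimal tabular sketch (state indices, learning rate and discount are illustrative, not from the paper):

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9):
    """One TD update of the successor representation matrix M.

    M[s, j] estimates the expected discounted number of future visits
    to state j when starting from state s under the current policy.
    The update moves M[s] toward the bootstrapped target
    onehot(s) + gamma * M[s_next].
    """
    n = M.shape[0]
    onehot = np.eye(n)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    return M

# One observed transition 0 -> 1 in a 3-state world.
M = np.zeros((3, 3))
M = sr_td_update(M, s=0, s_next=1)
```

Once M is learned, state values follow as V = M @ r for any reward vector r, which is what makes the representation useful for rapid revaluation.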
Pure Deep Learning and AI
- MONet: Unsupervised Scene Decomposition and Representation by Burgess et al. 2019
- Causal Confusion in Imitation Learning by De Haan et al. 2018.
- Learning to Decompose and Disentangle Representations for Video Prediction by Hsieh et al. 2018.
- Causal Reasoning from Meta-Reinforcement Learning by Dasgupta et al. 2019.
- A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms by Bengio et al. 2019.
- Modeling Others using Oneself in Multi-Agent Reinforcement Learning by Roberta Raileanu, Emily Denton, Arthur Szlam and Rob Fergus.
- Attention Is All You Need by Vaswani et al. 2017.
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets by Chen et al. (OpenAI), 2016
- Dense Associative Memory is Robust to Adversarial Inputs by Dmitry Krotov, John J Hopfield, 2016
- The Mechanics of n-Player Differentiable Games by Balduzzi et al. 2018.
- Efficient Deep Feature Learning and Extraction via Stochastic Nets by Mohammad Javad Shafiee et al., 2015
Motivated by findings of stochastic synaptic connectivity formation in the brain, as well as the brain's uncanny ability to represent information efficiently, we propose the efficient learning and extraction of features via StochasticNets.
- Policy Distillation by A. Rusu et al., Google DeepMind, 2015
We present a novel method called policy distillation that can be used to extract the policy of a reinforcement learning agent and train a new network that performs at the expert level while being dramatically smaller and more efficient.
- [[http://papersdb.cs.ualberta.ca/~papersdb/uploaded_files/paper_p160-sutton.pdf.stjohn|Dyna, an Integrated Architecture for Learning, Planning, and Reacting]] by Sutton, 1991.
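For the policy-distillation entry by Rusu et al. above, the student network is trained to match the teacher's sharpened action distribution. A minimal sketch of such a distillation loss, assuming a KL objective with a low teacher temperature (names and values are illustrative, not from the paper):

```python
import numpy as np

def softmax(x, tau=1.0):
    """Temperature-scaled softmax; low tau sharpens the distribution."""
    z = np.exp((x - x.max()) / tau)
    return z / z.sum()

def distill_loss(teacher_q, student_logits, tau=0.01):
    """KL(teacher || student) over actions for one state.

    The teacher's Q-values are sharpened with temperature tau, and the
    student's policy is pushed toward that target distribution. A small
    epsilon guards against log(0).
    """
    eps = 1e-12
    p = softmax(teacher_q, tau)       # sharpened teacher policy
    q = softmax(student_logits)       # student policy
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# A student whose preferred action matches the teacher's still pays a
# penalty for not being confident enough.
loss = distill_loss(np.array([1.0, 5.0, 2.0]), np.array([0.0, 3.0, 0.0]))
```

Averaging this loss over states sampled from the teacher's replay memory and minimizing it by gradient descent on the student parameters is the basic training loop the paper describes.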
Deep Learning Basic Algorithms
- Learning Internal Representations by Error Propagation by David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams, 1986
- Long Short-Term Memory by Sepp Hochreiter and Jürgen Schmidhuber, 1997
- Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning by Ronald J. Williams, 1992
- Q-learning by Christopher J. C. H. Watkins and Peter Dayan, 1992
- Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Visual Pattern Recognition by Kunihiko Fukushima and Sei Miyake, 1982
- Organizing probabilistic models of perception by Wei Ji Ma, 2012
An overview paper that clarifies many misconceptions about Bayesian inference in systems neuroscience.
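The Q-learning entry by Watkins and Dayan above rests on a one-line update rule that is easy to state in code. A minimal tabular sketch (state/action indices, learning rate and discount are illustrative):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning step.

    Moves Q[s, a] toward the bootstrapped target
    r + gamma * max_a' Q[s_next, a'], which is what makes the
    algorithm off-policy: the max is taken regardless of which
    action the agent actually executes next.
    """
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# A single rewarded transition in a 2-state, 2-action world.
Q = np.zeros((2, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
```

Repeating this update while every state-action pair keeps being visited is what the paper's convergence proof covers.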