Papers
Each student is encouraged to present a paper of her/his own interest. Just drop us an email beforehand so that we can make sure the paper is relevant to the seminar. If you do not have a favorite paper in mind, you can select one from the list below.
Articles of Special Interest
Here are a few papers directly related to our ongoing projects; these should be reviewed as soon as possible.
- Modeling the Neural Mechanisms of Core Object Recognition
- Measuring abstract reasoning in neural networks
- Mental Labour
- Simulating mirror neurons
- The Neural Basis of Timing: Distributed Mechanisms for Diverse Functions
Neuroscience
- Neural dynamics at successive stages of the ventral visual stream are consistent with hierarchical error signals by Elias B. Issa, Charles F. Cadieu, James J. DiCarlo; April 2018
- Neural Computations Mediating One-Shot Learning in the Human Brain by Sang Wan Lee, J. O’Doherty and S. Shimojo, 2015
- Predicting visual stimuli on the basis of activity in auditory cortices by Kaspar Meyer et al., 2010
- Canonical Microcircuits for Predictive Coding by Andre M. Bastos et al., 2012
- Bayesian Integration in Sensorimotor Learning by Körding & Wolpert, 2004
An empirical paper showing how people implicitly perform Bayesian statistics in movement control. (Computational neuroscience)
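The optimal estimator tested in that paper is the precision-weighted combination of a Gaussian prior with a noisy sensory observation. A minimal sketch (the numbers below are illustrative, not values from the paper):

```python
# Precision-weighted fusion of a Gaussian prior and a Gaussian likelihood.
# All numeric values are illustrative toy numbers.

def bayes_estimate(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance for a Gaussian prior times a Gaussian likelihood."""
    w = prior_var / (prior_var + obs_var)        # weight placed on the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Prior over lateral shifts: mean 1.0 cm, variance 0.25 cm^2.
# One noisy visual observation of the shift: 2.0 cm, variance 0.25 cm^2.
mean, var = bayes_estimate(1.0, 0.25, 2.0, 0.25)
print(mean, var)  # equal variances -> estimate lands halfway between prior and observation
```

With equal prior and sensory variances the estimate is pulled exactly halfway toward the prior, which is the signature behaviour the paper measures in human pointing movements.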
Neuroscience and Deep Learning
- Towards deep learning with segregated dendrites by Jordan Guerguiev, Timothy P. Lillicrap, Blake A. Richards; 2017
- Searching for Principles of Brain Computation by Wolfgang Maass; 2016
- (Booked) Neuroscience-Inspired Artificial Intelligence by Hassabis et al.
- Encoding Spatial Relations from Natural Language
- Learning in cortical networks through error back-propagation by James C.R. Whittington and Rafal Bogacz, 2015
Analyses the relationship between the back-propagation algorithm and the predictive coding model of information processing in the cortex.
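The key idea of that line of work is that, after prediction errors settle, each weight update needs only locally available quantities (the error at a layer times the activity below it), yet approximates the backprop gradient. A scalar sketch with one hidden node (toy weights and data, not from the paper):

```python
# Scalar predictive-coding sketch: inference relaxes the hidden activity,
# then each weight update is local (error x presynaptic activity).
# All values are toy numbers chosen for illustration.

w1, w2 = 0.5, 0.8          # layer weights
x0, target = 1.0, 1.0      # clamped input and clamped desired output

x1 = w1 * x0               # initialise hidden activity at the feedforward prediction
for _ in range(200):
    e1 = x1 - w1 * x0      # prediction error at the hidden layer
    e2 = target - w2 * x1  # prediction error at the output layer
    x1 += 0.1 * (-e1 + w2 * e2)  # gradient step on the network's energy

# At equilibrium e1 = w2 * e2, so this local rule mirrors the backprop
# chain rule without an explicit backward pass.
lr = 0.01
dw1 = lr * e1 * x0
dw2 = lr * e2 * x1
```

The equilibrium condition `e1 = w2 * e2` is what makes the hidden-layer update proportional to the error back-propagated through `w2`, which is the correspondence the paper formalises.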
Computational Neuroscience
- A cerebellar mechanism for learning prior distributions of time intervals by Devika Narain, Evan D. Remington, Chris I. De Zeeuw & Mehrdad Jazayeri; 2017
- Flexible timing by temporal scaling of cortical responses by Jing Wang, Devika Narain, Eghbal A. Hosseini & Mehrdad Jazayeri; 2017
- A neural algorithm for a fundamental computing problem by Sanjoy Dasgupta, Charles F. Stevens, Saket Navlakha; 2017
- Focused learning promotes continual task performance in humans by Timo Flesch, Jan Balaguer, Ronald Dekker, Hamed Nili, Christopher Summerfield; 2018
- Dendritic error backpropagation in deep cortical microcircuits by João Sacramento, Rui Ponte Costa, Yoshua Bengio, Walter Senn; 2017
- Spiking neurons can discover predictive features by aggregate-label learning by Robert Gütig; 2016 (Science)
- Cortical Learning via Prediction by C. Papadimitriou and S. Vempala, 2015
- The Inevitability of Probability: Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback by A. Emin Orhan and Wei Ji Ma, 2016
- Noise as a Resource for Computation and Learning in Networks of Spiking Neurons by Wolfgang Maass, 2014
- Dynamical models of cortical circuits by Fred Wolf et al., 2014
- Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights From the Successes and Failures of Connectionist Models of Learning and Memory by McClelland et al., 1995
A classic paper on the dual-process approach to memory (hippocampus, cortex, their different computations, and their different time-courses for memory processing). It is somewhat complex and not the easiest read, but it is one of the foundations of current neuroscientific and computational thinking about memory. (Cognitive science, neuroscience, computational neuroscience)
Pure Deep Learning and AI
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets by OpenAI; 2016
- Dense Associative Memory is Robust to Adversarial Inputs by Dmitry Krotov, John J Hopfield, 2016
- The Mechanics of n-Player Differentiable Games
- Efficient Deep Feature Learning and Extraction via Stochastic Nets by Mohammad Javad Shafiee et al., 2015
Motivated by findings of stochastic synaptic connectivity formation in the brain, as well as the brain's uncanny ability to represent information efficiently, the authors propose efficient feature learning and extraction via StochasticNets.
- Policy Distillation by A. Rusu et al., Google DeepMind, 2015
Presents policy distillation, a method for extracting the policy of a reinforcement learning agent and training a new network that performs at the expert level while being dramatically smaller and more efficient.
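The core of distillation is training the student to match the teacher's output distribution by minimising a KL divergence. A minimal sketch for a single state with three actions (the teacher values and learning rate are toy assumptions, not from the paper):

```python
import math

# Distil a fixed "teacher" action distribution into a student parameterisation
# by gradient descent on KL(teacher || student). Toy numbers throughout.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

teacher = softmax([2.0, 0.5, -1.0])   # teacher's action probabilities
student_logits = [0.0, 0.0, 0.0]      # student starts uniform

lr = 0.5
for _ in range(500):
    p = softmax(student_logits)
    # gradient of KL(teacher || student) w.r.t. the student logits is p - teacher
    student_logits = [l - lr * (pi - ti)
                      for l, pi, ti in zip(student_logits, p, teacher)]

kl = sum(t * math.log(t / p) for t, p in zip(teacher, softmax(student_logits)))
```

In the paper the student is a much smaller network matched against the teacher's softened Q-values across many states; the per-state objective, however, reduces to exactly this kind of distribution matching.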
Deep Learning Basic Algorithms
- Learning Internal Representations by Error Propagation by David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams; 1986
- Long Short-Term Memory by Sepp Hochreiter and Jürgen Schmidhuber; 1997
- Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning by Ronald J. Williams; 1992
- Q-learning by Christopher J. C. H. Watkins, Peter Dayan; 1992
- Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Visual Pattern Recognition by Kunihiko Fukushima, Sei Miyake; 1982
Uncategorized
- Organizing probabilistic models of perception by Wei Ji Ma; 2012
An overview paper that clarifies many misconceptions about Bayesian inference in systems neuroscience. (Cognitive neuroscience)