Papers
Articles of Special Interest
Here are the papers directly related to our ongoing projects; these should be reviewed as soon as possible.
Neuroscience
- (Booked) Efficient inverse graphics in biological face processing by Ilker Yildirim, Winrich Freiwald, Joshua Tenenbaum; April 2018
- Neural dynamics at successive stages of the ventral visual stream are consistent with hierarchical error signals by Elias B. Issa, Charles F. Cadieu, James J. DiCarlo; April 2018
- A Perceptual Inference Mechanism for Hallucinations Linked to Striatal Dopamine by Clifford M. Cassidy, Peter D. Balsam, Jodi J. Weinstein, Rachel J. Rosengard, Mark Slifstein, Nathaniel D. Daw, Anissa Abi-Dargham, Guillermo Horga; 2018
- A New Framework for Cortico-Striatal Plasticity: Behavioural Theory Meets In Vitro Data at the Reinforcement-Action Interface by K. Gurney, M. Humphries and P. Redgrave, 2015
- Neural Computations Mediating One-Shot Learning in the Human Brain by Sang Wan Lee, J. O’Doherty and S. Shimojo, 2015
- Predicting visual stimuli on the basis of activity in auditory cortices by Kaspar Meyer et al., 2010
- Canonical Microcircuits for Predictive Coding by Andre M. Bastos et al., 2012
- Bayesian Integration in Sensorimotor Learning by Körding & Wolpert, 2004
An empirical paper showing that people implicitly perform Bayesian statistics in movement control. (Computational neuroscience)
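The precision-weighted combination that Körding & Wolpert describe can be sketched in a few lines; this is an illustrative toy, and all numbers are made up rather than taken from the paper:

```python
# Combining a learned Gaussian prior with a noisy observation, as in
# Bayesian models of sensorimotor integration. Values are illustrative.
prior_mean, prior_var = 1.0, 0.25   # learned distribution of lateral shifts (cm)
obs, obs_var = 2.0, 1.0             # noisy sensory feedback on this trial

# The posterior mean is a precision-weighted average of prior and observation.
w = (1 / obs_var) / (1 / obs_var + 1 / prior_var)
posterior_mean = w * obs + (1 - w) * prior_mean
posterior_var = 1 / (1 / obs_var + 1 / prior_var)

print(posterior_mean, posterior_var)  # → 1.2 0.2
```

The estimate is pulled toward the prior in proportion to how unreliable the feedback is, which is the behavioral signature the paper reports.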
Neuroscience and Deep Learning
- Towards deep learning with segregated dendrites by Jordan Guerguiev, Timothy P. Lillicrap, Blake A. Richards; 2017
- (Booked) Searching for Principles of Brain Computation by Wolfgang Maass; 2016
- Evidence that the ventral stream codes the errors used in hierarchical inference and learning by Issa, Cadieu, DiCarlo; 2016
- (Booked) Random Synaptic Feedback Weights Support Error Backpropagation for Deep Learning by Timothy P. Lillicrap et al.; 2016
In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron’s axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights.
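The mechanism the abstract describes (feedback alignment) can be illustrated with a toy two-layer network; this is a minimal sketch under assumed sizes and learning rates, not the authors' code:

```python
import numpy as np

# Minimal sketch of feedback alignment: the backward pass multiplies the
# output error by a fixed random matrix B instead of the transpose of the
# forward weights W2. Toy regression task; all sizes are illustrative.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
T = X @ rng.standard_normal((10, 2))            # linear targets

W1 = 0.1 * rng.standard_normal((10, 20))
W2 = 0.1 * rng.standard_normal((20, 2))
B = rng.standard_normal((2, 20))                # fixed random feedback weights

loss0 = np.mean((np.tanh(X @ W1) @ W2 - T) ** 2)
lr = 0.01
for _ in range(500):
    h = np.tanh(X @ W1)                         # hidden activity
    e = h @ W2 - T                              # output error
    dh = (e @ B) * (1 - h ** 2)                 # backprop would use e @ W2.T here
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

loss = np.mean((np.tanh(X @ W1) @ W2 - T) ** 2)
print(loss0, loss)  # the loss shrinks even though feedback weights are random
```

The forward weights gradually align with the random feedback matrix, so error signals carried by B become useful for learning, which is the paper's central observation.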
- (Booked) Continuous online sequence learning with an unsupervised neural network model by Yuwei Cui et al. and Jeff Hawkins, 2015
hierarchical temporal memory (HTM) sequence memory as a theoretical framework for sequence learning in the cortex
- Approximate Hubel-Wiesel Modules and the Data Structures of Neural Computation by Joel Z. Leibo et al. and Demis Hassabis, 2015
Framework for modeling the interface between perception and memory from Google DeepMind
- Learning in cortical networks through error back-propagation by James C.R. Whittington and Rafal Bogacz, 2015
we analyse relationships between the back-propagation algorithm and the predictive coding model of information processing in the cortex
Computational Neuroscience
- A cerebellar mechanism for learning prior distributions of time intervals by Devika Narain, Evan D. Remington, Chris I. De Zeeuw & Mehrdad Jazayeri; 2017
- Flexible timing by temporal scaling of cortical responses by Jing Wang, Devika Narain, Eghbal A. Hosseini & Mehrdad Jazayeri; 2017
- A neural algorithm for a fundamental computing problem by Sanjoy Dasgupta, Charles F. Stevens, Saket Navlakha; 2017
- Focused learning promotes continual task performance in humans by Timo Flesch, Jan Balaguer, Ronald Dekker, Hamed Nili, Christopher Summerfield; 2018
- Dendritic error backpropagation in deep cortical microcircuits by João Sacramento, Rui Ponte Costa, Yoshua Bengio, Walter Senn; 2017
- Spiking neurons can discover predictive features by aggregate-label learning by Robert Gütig, 2016 in Science
- Cortical Learning via Prediction by C. Papadimitriou and S. Vempala, 2015
- The Inevitability of Probability: Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback by A. Emin Orhan and Wei Ji Ma, 2016
- A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region by M. Howard et al., 2015
- Sparseness and Expansion in Sensory Representations by Baktash Babadi & Haim Sompolinsky, 2015
we address the computational benefits of expansion and sparseness for clustered inputs, where different clusters represent behaviorally distinct stimuli and intracluster variability represents sensory or neuronal noise
See also Towards deep learning with segregated dendrites by Jordan Guerguiev, Timothy P. Lillicrap, Blake A. Richards (https://elifesciences.org/articles/22901)
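The benefit of expansion plus sparseness described above can be sketched numerically: two similar inputs become much less similar after a random expansive projection followed by a sparsifying threshold. Everything below (dimensions, correlation, sparsity level) is an illustrative assumption, not the paper's model:

```python
import numpy as np

# Sketch of expansion + sparseness: a random projection to many units,
# followed by keeping only the most active ones, decorrelates two
# similar inputs. Dimensions and sparsity level are illustrative.
rng = np.random.default_rng(2)

x1 = rng.standard_normal(50)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9 ** 2) * rng.standard_normal(50)  # correlated input

W = rng.standard_normal((2000, 50))           # expansive random projection

def sparse_code(x, frac=0.05):
    y = W @ x
    t = np.quantile(y, 1 - frac)              # keep only the top 5% of units
    return (y > t).astype(float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

dense_sim = cosine(x1, x2)
sparse_sim = cosine(sparse_code(x1), sparse_code(x2))
print(dense_sim, sparse_sim)  # the sparse expanded codes overlap much less
```

Making representations of behaviorally distinct stimuli less overlapping in this way is what the paper argues eases downstream readout despite intracluster noise.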
- Noise as a Resource for Computation and Learning in Networks of Spiking Neurons by Wolfgang Maass, 2014
- Towards a Mathematical Theory of Cortical Micro-circuits by Dileep George & Jeff Hawkins, 2009
- Dynamical models of cortical circuits by Fred Wolf et al., 2014
- Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights From the Successes and Failures of Connectionist Models of Learning and Memory by McClelland et al., 1995
A classic paper about the dual-process approach to memory (hippocampus, cortex, their different computations and different time-courses for memory processing). It is a bit more complex and maybe not so easy to read, but it is one of the foundations for current neuroscientific and computational thinking about memory. (Cognitive science, neuroscience, computational neuroscience)
- Ruling out and ruling in neural codes by Nirenberg et al., 2009
Also an experimental paper, in which Nirenberg's group studies the neural code with a very clever method: they measure all the input the brain gets from the retina and then decode it with different candidate codes, each of which is compared to the behavior of the animal. They can, for example, show that at this stage, in the retina, a rate code cannot work: it performs much worse than the animal. Easy to understand, great paper. (Computational neuroscience)
Pure Deep Learning and AI
- (Booked) Diversity is All You Need: Learning Skills without a Reward Function by Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, Sergey Levine; 2018
- Not-So-CLEVR: Visual Relations Strain Feedforward Neural Networks by Matthew Ricci, Junkyung Kim, Thomas Serre;2018
- (Booked) IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures by Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu; 2018
- (Booked) InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets by OpenAI; 2016
- Dense Associative Memory is Robust to Adversarial Inputs by Dmitry Krotov, John J Hopfield, 2016
- Efficient Deep Feature Learning and Extraction via Stochastic Nets by Mohammad Javad Shafiee et al., 2015
Motivated by findings of stochastic synaptic connectivity formation in the brain as well as the brain's uncanny ability to efficiently represent information, we propose the efficient learning and extraction of features via StochasticNets
- Policy Distillation by A. Rusu et al., Google DeepMind, 2015
we present a novel method called policy distillation that can be used to extract the policy of a reinforcement learning agent and train a new network that performs at the expert level while being dramatically smaller and more efficient
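As a rough illustration of the distillation idea (not the paper's architecture or training pipeline), a student policy can be trained to match a frozen teacher's action distribution by minimizing cross-entropy on visited states; the linear policies and all sizes below are made-up toys:

```python
import numpy as np

# Toy policy distillation: a student softmax policy is trained to match a
# frozen teacher's action distribution on a batch of states by minimizing
# cross-entropy. Linear policies and all sizes are illustrative.
rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W_teacher = rng.standard_normal((8, 4))   # expert policy (frozen)
W_student = np.zeros((8, 4))              # distilled policy, trained from scratch

S = rng.standard_normal((256, 8))         # states collected while running the teacher
P_teacher = softmax(S @ W_teacher)

gap0 = np.abs(softmax(S @ W_student) - P_teacher).max()
lr = 0.5
for _ in range(300):
    P_student = softmax(S @ W_student)
    # Gradient of the cross-entropy between teacher and student policies.
    W_student -= lr * S.T @ (P_student - P_teacher) / len(S)

gap = np.abs(softmax(S @ W_student) - P_teacher).max()
print(gap0, gap)  # the student's action distribution converges toward the teacher's
```

In the paper the student is a much smaller network than the teacher; here both are linear purely to keep the sketch short.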
Deep Learning Basic Algorithms
- Learning Internal Representations by Error Propagation by David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams; 1986
- Backpropagation Applied to Handwritten Zip Code Recognition by Y. LeCun, B. Boser, J. S. Denker, D. Henderson; 1989
- Gradient-based learning applied to document recognition by Y. LeCun, L. Bottou, Y. Bengio, P. Haffner; 1998
- Long Short-Term Memory by Sepp Hochreiter and Jürgen Schmidhuber; 1997
- Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning by Ronald J. Williams; 1992
- Q-learning by Christopher J. C. H. Watkins, Peter Dayan; 1992
- Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Visual Pattern Recognition by Kunihiko Fukushima, Sei Miyake; 1982
Uncategorized
- Unsupervised Learning by Deep Scattering Contractions by Mia Xu Chen et al., 2014
(Neuroscience, Machine Learning)
- Organizing probabilistic models of perception by Wei Ji Ma, 2012
An overview paper which clarifies many misconceptions about Bayesian inference in systems neuroscience. (Cognitive neuroscience)