Before even looking at the list below, we encourage you to Google for a recent paper that is of interest to you.
Articles of Special Interest
Here are the papers we feel are especially important and are highly motivated to go through as soon as possible.
- Random Synaptic Feedback Weights Support Error Backpropagation for Deep Learning by Timothy P. Lillicrap et al.; 2016
In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron’s axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights.
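A minimal numpy sketch of this idea, often called feedback alignment (the toy regression task, layer sizes, and learning rate are our own illustrative choices, not the paper's): the forward pass uses the learned weights, but the error is sent backward through a fixed random matrix B instead of the transpose of the forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (our own choice): learn y = x @ T for a fixed target map T.
n_in, n_hid, n_out, n_samples = 10, 32, 2, 200
X = rng.standard_normal((n_samples, n_in))
T = rng.standard_normal((n_in, n_out))
Y = X @ T

# Forward weights (learned) and a fixed random feedback matrix B (never updated).
W1 = rng.standard_normal((n_in, n_hid)) * 0.1
W2 = rng.standard_normal((n_hid, n_out)) * 0.1
B = rng.standard_normal((n_out, n_hid)) * 0.1  # stands in for W2.T in the backward pass

def loss(X, Y):
    h = np.tanh(X @ W1)
    return float(np.mean((h @ W2 - Y) ** 2))

lr = 0.02
initial = loss(X, Y)
for _ in range(500):
    h = np.tanh(X @ W1)   # hidden activity
    y_hat = h @ W2        # linear readout
    e = y_hat - Y         # output error
    # Feedback alignment: propagate the error through the fixed random B,
    # not through the transpose of the forward weights W2.
    delta_h = (e @ B) * (1 - h ** 2)
    W2 -= lr * h.T @ e / n_samples
    W1 -= lr * X.T @ delta_h / n_samples
final = loss(X, Y)
print(initial, final)
```

On toy problems like this the loss still falls even though the backward weights carry no information about the forward ones, which is the paper's surprising point.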
- Spiking neurons can discover predictive features by aggregate-label learning by Robert Gütig, 2016 in Science
- Bits from Biology for Computational Intelligence by Michael Wibral, Joseph T. Lizier, Viola Priesemann, 2014
- Cortical Learning via Prediction by C. Papadimitriou and S. Vempala, 2015
- On simplicity and complexity in the brave new world of large-scale neuroscience by Peiran Gao and Surya Ganguli, 2015
- The Inevitability of Probability: Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback by A. Emin Orhan and Wei Ji Ma, 2016
- Similarity, kernels, and the fundamental constraints on cognition by Reza Shahbazi, Rajeev Raizada, Shimon Edelman, 2016
- A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region by M. Howard et al., 2015
- Sparseness and Expansion in Sensory Representations by Baktash Babadi & Haim Sompolinsky, 2015
We address the computational benefits of expansion and sparseness for clustered inputs, where different clusters represent behaviorally distinct stimuli and intra-cluster variability represents sensory or neuronal noise.
- Noise as a Resource for Computation and Learning in Networks of Spiking Neurons by Wolfgang Maass, 2014
- Towards a Mathematical Theory of Cortical Micro-circuits by Dileep George & Jeff Hawkins, 2009
- Dynamical models of cortical circuits by Fred Wolf et al., 2014
- Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights From the Successes and Failures of Connectionist Models of Learning and Memory by McClelland et al., 1995
A classic paper on the dual-process approach to memory (hippocampus and cortex, their different computations, and their different time courses for memory processing). It is somewhat more complex and not so easy to read, but it is one of the foundations of current neuroscientific and computational thinking about memory. (Cognitive science, neuroscience, computational neuroscience)
- Ruling out and ruling in neural codes by Nirenberg et al., 2009
Also an experimental paper: Nirenberg's group studies the neural code with a very clever method. They measure all the input the brain receives from the retina, then decode it with different candidate codes, each of which is compared to the animal's behavior. They can show, for example, that at this stage, in the retina, a rate code cannot work: it performs much worse than the animal. Easy to understand, great paper. (Computational neuroscience)
- Dimensionality reduction for large-scale neural recordings by John P Cunningham & Byron M Yu, 2014
- Modeling Higher-Order Correlations within Cortical Microcolumns by Urs Köster et al., 2014
- Continuous online sequence learning with an unsupervised neural network model by Yuwei Cui et al. and Jeff Hawkins, 2015
Hierarchical temporal memory (HTM) sequence memory as a theoretical framework for sequence learning in the cortex.
Deep Learning and AI
- Memory Transformation Enhances Reinforcement Learning in Dynamic Environments by A. Santoro et al., 2016
- Graying the black box: Understanding DQNs by Tom Zahavy, Nir Ben Zrihem, Shie Mannor, 2016
In this paper, we present a methodology and tools to analyze Deep Q-networks (DQNs) in a non-blind manner.
- Asynchronous Methods for Deep Reinforcement Learning by V. Mnih et al., 2016
An asynchronous variant of actor-critic that surpasses the current state of the art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU.
- Approximate Hubel-Wiesel Modules and the Data Structures of Neural Computation by Joel Z. Leibo et al. and Demis Hassabis, 2015
A framework from Google DeepMind for modeling the interface between perception and memory.
- Deep Residual Learning for Image Recognition by Kaiming He et al., 2015
The winning model of ILSVRC 2015, a 152-layer deep network.
- Efficient Deep Feature Learning and Extraction via StochasticNets by Mohammad Javad Shafiee et al., 2015
Motivated by findings of stochastic synaptic connectivity formation in the brain, as well as the brain's uncanny ability to efficiently represent information, we propose the efficient learning and extraction of features via StochasticNets.
- Deep Visual Analogy-Making by Scott Reed et al., 2015
In this paper we develop a novel deep network trained end-to-end to perform visual analogy making, which is the task of transforming a query image according to an example pair of related images. Solving this problem requires both accurately recognizing a visual relationship and generating a transformed query image accordingly.
- Policy Distillation by A. Rusu et al., Google DeepMind, 2015
We present a novel method called policy distillation that can be used to extract the policy of a reinforcement learning agent and train a new network that performs at the expert level while being dramatically smaller and more efficient.
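The core of the distillation objective can be sketched in a few lines of numpy (the tabular "student", state and action counts, temperature, and learning rate below are our own toy choices; the paper distills DQN Q-values into a smaller network using a temperature-sharpened softmax over the teacher's Q-values as targets):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy setup (ours): 50 "states", 4 actions; the teacher's Q-values are fixed,
# and the student is a table of logits trained to match the teacher.
n_states, n_actions, tau = 50, 4, 0.01
teacher_q = rng.standard_normal((n_states, n_actions))
student_logits = np.zeros((n_states, n_actions))

# Teacher targets: temperature-sharpened softmax over Q-values (a small tau
# concentrates mass on the greedy action).
targets = softmax(teacher_q / tau)

def kl(p, q):
    """Mean KL divergence KL(p || q) over states."""
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)))

lr = 1.0
before = kl(targets, softmax(student_logits))
for _ in range(200):
    p_student = softmax(student_logits)
    # Gradient of KL(targets || student) with respect to the student logits.
    student_logits -= lr * (p_student - targets)
after = kl(targets, softmax(student_logits))
print(before, after)

# After training, the student's greedy actions agree with the teacher's.
agree = float(np.mean(softmax(student_logits).argmax(1) == teacher_q.argmax(1)))
```

In the paper the student is a neural network trained on states from the teacher's replay data rather than a table, but the supervised KL-matching step is the same.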
- A New Framework for Cortico-Striatal Plasticity: Behavioural Theory Meets In Vitro Data at the Reinforcement-Action Interface by K. Gurney, M. Humphries and P. Redgrave, 2015
- Neural Computations Mediating One-Shot Learning in the Human Brain by Sang Wan Lee, J. O’Doherty and S. Shimojo, 2015
- Hippocampal representation of related and opposing memories develop within distinct, hierarchically-organized neural schemas by Sam McKenzie et al., 2014
- Predicting visual stimuli on the basis of activity in auditory cortices by Kaspar Meyer et al., 2010
- Canonical Microcircuits for Predictive Coding by Andre M. Bastos et al., 2012
- Bayesian Integration in Sensorimotor Learning by Körding & Wolpert, 2004
Empirical paper showing how people implicitly perform Bayesian statistics in movement control. (Computational neuroscience)
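The computation the paper tests is compact enough to write out: with a Gaussian prior over the true (here, lateral) shift and Gaussian sensory noise, the optimal estimate is a precision-weighted average of prior mean and observation. The numbers below are illustrative, not the paper's:

```python
# Gaussian prior over the true lateral shift (illustrative numbers, not the paper's).
prior_mean, prior_sd = 1.0, 0.5   # cm
obs, obs_sd = 2.0, 1.0            # noisy visual feedback and its noise level

# Precision-weighted average: the posterior-mean (optimal) estimate.
w = prior_sd**2 / (prior_sd**2 + obs_sd**2)   # weight given to the observation
estimate = (1 - w) * prior_mean + w * obs
post_var = (prior_sd**2 * obs_sd**2) / (prior_sd**2 + obs_sd**2)
print(estimate, post_var)  # 1.2, 0.2
```

The behavioral signature Körding & Wolpert report matches this form: the blurrier the feedback (larger obs_sd), the smaller w, and the more subjects' estimates are pulled toward the prior mean.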
- Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment by Berkes et al., 2011
Empirical paper showing that spontaneous activity in the visual cortex might reflect an internal model of the environment. (Neuroscience)
- Learning in cortical networks through error back-propagation by James C.R. Whittington and Rafal Bogacz, 2015
We analyse relationships between the back-propagation algorithm and the predictive coding model of information processing in the cortex.
- Unsupervised Learning by Deep Scattering Contractions by Mia Xu Chen et al., 2014
(Neuroscience, Machine Learning)
- Organizing probabilistic models of perception by Wei Ji Ma, 2012
Overview paper which clarifies many misconceptions about Bayesian inference in systems neuroscience.