## Papers

### Articles of Special Interest

Here are the papers directly related to our ongoing projects; these should be reviewed as soon as possible.


### Neuroscience and Deep Learning

- **Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights** by A. Samadi et al., 2017
- **Searching for Principles of Brain Computation** by Wolfgang Maass, 2016
- **Evidence that the ventral stream codes the errors used in hierarchical inference and learning** by Issa, Cadieu, DiCarlo, 2016
- **Random Synaptic Feedback Weights Support Error Backpropagation for Deep Learning** by Timothy P. Lillicrap et al., 2016
  *In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron’s axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights.*

- **Continuous online sequence learning with an unsupervised neural network model** by Yuwei Cui et al. and Jeff Hawkins, 2015
  *hierarchical temporal memory (HTM) sequence memory as a theoretical framework for sequence learning in the cortex*
- **Approximate Hubel-Wiesel Modules and the Data Structures of Neural Computation** by Joel Z. Leibo et al. and Demis Hassabis, 2015
  *framework for modeling the interface between perception and memory, from Google DeepMind*
- **Learning in cortical networks through error back-propagation** by James C.R. Whittington and Rafal Bogacz, 2015
  *we analyse relationships between the back-propagation algorithm and the predictive coding model of information processing in the cortex*
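The mechanism in the Lillicrap et al. abstract above is easy to sketch: instead of carrying the output error back through the transpose of the forward weights (as backpropagation requires), a fixed random matrix delivers it to the hidden layer. A minimal NumPy sketch on a toy regression problem; the network sizes, learning rate, and iteration count are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a random linear map to learn.
X = rng.standard_normal((200, 10))
T = X @ rng.standard_normal((10, 2))

W1 = 0.1 * rng.standard_normal((10, 20))  # forward weights, input -> hidden
W2 = 0.1 * rng.standard_normal((20, 2))   # forward weights, hidden -> output
B = 0.1 * rng.standard_normal((2, 20))    # fixed random feedback weights (used in place of W2.T)

def loss():
    return float(np.mean((np.tanh(X @ W1) @ W2 - T) ** 2))

loss_before = loss()
lr = 0.02
for _ in range(1000):
    h = np.tanh(X @ W1)        # hidden activity
    y = h @ W2                 # network output
    e = y - T                  # output error
    # Feedback alignment: the error reaches the hidden layer through the
    # fixed random matrix B rather than through the transpose of W2.
    dh = (e @ B) * (1.0 - h ** 2)
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

loss_after = loss()  # lower than loss_before: learning works despite random feedback
```

The only change from standard backpropagation is the single line computing `dh`; exact backprop would use `e @ W2.T` there.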

### Computational Neuroscience

- **Spiking neurons can discover predictive features by aggregate-label learning** by Robert Gütig, 2016, in Science
- **Cortical Learning via Prediction** by C. Papadimitriou and S. Vempala, 2015
- **The Inevitability of Probability: Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback** by A. Emin Orhan and Wei Ji Ma, 2016
- **A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region** by M. Howard et al., 2015
- **Sparseness and Expansion in Sensory Representations** by Baktash Babadi and Haim Sompolinsky, 2015
  *we address the computational benefits of expansion and sparseness for clustered inputs, where different clusters represent behaviorally distinct stimuli and intracluster variability represents sensory or neuronal noise*
- **Noise as a Resource for Computation and Learning in Networks of Spiking Neurons** by Wolfgang Maass, 2014
- **Towards a Mathematical Theory of Cortical Micro-circuits** by Dileep George and Jeff Hawkins, 2009
- **Dynamical models of cortical circuits** by Fred Wolf et al., 2014
- **Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights From the Successes and Failures of Connectionist Models of Learning and Memory** by McClelland et al., 1995
  *a classic paper about the dual-process approach to memory (hippocampus, cortex, their different computations and different time-courses for memory processing). It is a bit more complex and maybe not so easy to read, but it is one of the foundations for current neuroscientific and computational thinking about memory. (Cognitive science, neuroscience, computational neuroscience)*
- **Ruling out and ruling in neural codes** by Nirenberg et al., 2009
  *also an experimental paper, in which Nirenberg's group studies the neural code with a very clever method: they measure all the input the brain gets from the retina and then use different codes for decoding, which are all compared to the behavior of the animal. They can, for example, show that at this stage, in the retina, the rate code cannot work: it performs much worse than the animal. Easy to understand, great paper. (Computational neuroscience)*
- **Dimensionality reduction for large-scale neural recordings** by John P. Cunningham and Byron M. Yu, 2014

### Pure Deep Learning and AI

- **InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets** by OpenAI, 2016
- **Dense Associative Memory is Robust to Adversarial Inputs** by Dmitry Krotov and John J. Hopfield, 2016
- **Graying the black box: Understanding DQNs** by Tom Zahavy, Nir Ben Zrihem, Shie Mannor, 2016
  *In this paper, we present a methodology and tools to analyze Deep Q-networks (DQNs) in a non-blind manner.*
- **Efficient Deep Feature Learning and Extraction via Stochastic Nets** by Mohammad Javad Shafiee et al., 2015
  *Motivated by findings of stochastic synaptic connectivity formation in the brain as well as the brain's uncanny ability to efficiently represent information, we propose the efficient learning and extraction of features via StochasticNets*
- **Policy Distillation** by A. Rusu et al., Google DeepMind, 2015
  *we present a novel method called policy distillation that can be used to extract the policy of a reinforcement learning agent and train a new network that performs at the expert level while being dramatically smaller and more efficient*

### Neuroscience

- **A New Framework for Cortico-Striatal Plasticity: Behavioural Theory Meets In Vitro Data at the Reinforcement-Action Interface** by K. Gurney, M. Humphries and P. Redgrave, 2015
- **Neural Computations Mediating One-Shot Learning in the Human Brain** by Sang Wan Lee, J. O’Doherty and S. Shimojo, 2015
- **Hippocampal representation of related and opposing memories develop within distinct, hierarchically-organized neural schemas** by Sam McKenzie et al., 2014
- **Predicting visual stimuli on the basis of activity in auditory cortices** by Kaspar Meyer et al., 2010
- **Canonical Microcircuits for Predictive Coding** by Andre M. Bastos et al., 2012
- **Bayesian Integration in Sensorimotor Learning** by Körding and Wolpert, 2004 (**TAKEN**)
  *empirical paper showing how people implicitly do Bayesian statistics in movement control. (Computational neuroscience)*
- **Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment** by Berkes et al., 2011
  *empirical paper showing that spontaneous activity of the visual cortex might reflect the internal model of the environment. (Neuroscience)*
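The Körding and Wolpert result rests on a one-line computation: when a Gaussian prior is combined with a Gaussian sensory likelihood, the optimal (posterior-mean) estimate is a precision-weighted average of the prior mean and the sensed value. A minimal sketch, with the function name and arguments chosen here for illustration:

```python
def bayes_estimate(x_sensed, sigma_sense, mu_prior, sigma_prior):
    """Posterior mean for a Gaussian prior combined with a Gaussian
    sensory likelihood: a precision-weighted average of the two."""
    # Weight on the sensory cue: high when the senses are reliable
    # (small sigma_sense), low when they are noisy.
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sense ** 2)
    return w * x_sensed + (1.0 - w) * mu_prior
```

With equal uncertainties the estimate lands halfway between cue and prior; as sensory noise grows, it is pulled toward the prior mean. In the paper, subjects' pointing errors under blurred visual feedback followed exactly this pattern.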

### Uncategorized

- **Unsupervised Learning by Deep Scattering Contractions** by Mia Xu Chen et al., 2014
  *(Neuroscience, Machine Learning)*
- **Organizing probabilistic models of perception** by Wei Ji Ma, 2012
  *overview paper which clarifies many misconceptions about Bayesian inference in systems neuroscience. (Cognitive neuroscience)*