Read by QxMD

Computational neuroscience

shared collection
43 papers · 25 to 100 followers
By Abraham Nunes, a psychiatry resident interested in computational neuroscience, forensic psychiatry, and neuropsychiatry.
Yoshua Bengio, Thomas Mesnard, Asja Fischer, Saizheng Zhang, Yuhuai Wu
We show that Langevin Monte Carlo Markov chain inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with respect to output units that have received an outside driving force pushing them away from the stationary point. Backpropagated error gradients correspond to temporal derivatives with respect to the activation of hidden units...
January 17, 2017: Neural Computation
Spyridon Chavlis, Panagiotis C Petrantonakis, Panayiota Poirazi
The hippocampus plays a key role in pattern separation, the process of transforming similar incoming information to highly dissimilar, nonoverlapping representations. Sparse firing granule cells (GCs) in the dentate gyrus (DG) have been proposed to undertake this computation, but little is known about which of their properties influence pattern separation. Dendritic atrophy has been reported in diseases associated with pattern separation deficits, suggesting a possible role for dendrites in this phenomenon. To investigate whether and how the dendrites of GCs contribute to pattern separation, we build a simplified, biologically relevant, computational model of the DG...
January 2017: Hippocampus
Stephanie C Y Chan, Yael Niv, Kenneth A Norman
The orbitofrontal cortex (OFC) has been implicated in both the representation of "state," in studies of reinforcement learning and decision making, and also in the representation of "schemas," in studies of episodic memory. Both of these cognitive constructs require a similar inference about the underlying situation or "latent cause" that generates our observations at any given time. The statistically optimal solution to this inference problem is to use Bayes' rule to compute a posterior probability distribution over latent causes...
July 27, 2016: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
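The Bayesian inference this abstract describes can be sketched in a few lines: given a prior over latent causes and the likelihood of the current observation under each cause, Bayes' rule yields the posterior. The priors and likelihoods below are illustrative numbers, not values from the paper.

```python
# Sketch of the inference described above: a posterior distribution over
# latent causes via Bayes' rule. All numbers are hypothetical.

def posterior_over_causes(prior, likelihood):
    """Return P(cause | observation) given P(cause) and P(obs | cause)."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnormalized)  # marginal probability of the observation
    return [u / z for u in unnormalized]

# Two hypothetical latent causes with equal prior probability; the
# observation is four times more likely under cause 0.
prior = [0.5, 0.5]
likelihood = [0.8, 0.2]

print(posterior_over_causes(prior, likelihood))  # [0.8, 0.2]
```

With a flat prior, the posterior simply tracks the likelihood ratio; an unequal prior would pull the posterior toward the a-priori more probable cause.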
Alexander G Huth, Wendy A de Heer, Thomas L Griffiths, Frédéric E Theunissen, Jack L Gallant
The meaning of language is represented in regions of the cerebral cortex collectively known as the 'semantic system'. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals...
April 28, 2016: Nature
Thomas Saaty
This paper is concerned with understanding the synthesis of electric signals in the neural system based on making pairwise comparisons. Fundamentally, every person and every animal is born with the talent to compare stimuli from things that share properties in space or over time. Comparisons always require experience to distinguish among things. Pairwise comparisons are numerically reciprocal. If a value is assigned to the larger of two elements that have a given property when compared with the smaller one, then the smaller has the reciprocal of that value when compared with the larger...
February 2017: Neural Networks: the Official Journal of the International Neural Network Society
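The reciprocity property the abstract describes can be sketched directly: if stimulus i is judged k times as strong as stimulus j, then j is judged 1/k as strong as i. The judgment values below are made-up numbers for illustration.

```python
# Sketch of numerically reciprocal pairwise comparisons, as described
# above. The upper-triangle judgments are hypothetical.

def reciprocal_matrix(upper):
    """Build a full pairwise-comparison matrix from upper-triangle judgments.

    `upper` maps (i, j) with i < j to the judged ratio of stimulus i
    over stimulus j; the (j, i) entry is filled in as the reciprocal.
    """
    n = max(max(pair) for pair in upper) + 1
    m = [[1.0] * n for _ in range(n)]  # each stimulus equals itself
    for (i, j), v in upper.items():
        m[i][j] = v
        m[j][i] = 1.0 / v  # reciprocity
    return m

judgments = {(0, 1): 3.0, (0, 2): 6.0, (1, 2): 2.0}
m = reciprocal_matrix(judgments)
print(m[1][0])  # 0.3333333333333333, i.e. the reciprocal of m[0][1]
```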
Karl Friston, Gyorgy Buzsáki
This Opinion article considers the implications for functional anatomy of how we represent temporal structure in our exchanges with the world. It offers a theoretical treatment that tries to make sense of the architectural principles seen in mammalian brains. Specifically, it considers a factorisation between representations of temporal succession and representations of content or, heuristically, a segregation into when and what. This segregation may explain the central role of the hippocampus in neuronal hierarchies while providing a tentative explanation for recent observations of how ordinal sequences are encoded...
July 2016: Trends in Cognitive Sciences
Eva Pool, Vanessa Sennwald, Sylvain Delplanque, Tobias Brosch, David Sander
Animal research has shown it is possible to want a reward that is not liked once obtained. Although these findings have elicited interest, human experiments have produced contradictory results, raising doubts about the existence of separate wanting and liking influences in human reward processing. This discrepancy could be due to inconsistencies in the operationalization of these concepts. We systematically reviewed the methodologies used to assess human wanting and/or liking and found that most studies operationalized these concepts in congruency with the animal literature...
April 2016: Neuroscience and Biobehavioral Reviews
Takashi Nakano, Makoto Otsuka, Junichiro Yoshimoto, Kenji Doya
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations that are noisy or that occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems...
2015: PloS One
Alexandre Pouget, Jeffrey M Beck, Wei Ji Ma, Peter E Latham
There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks...
September 2013: Nature Neuroscience
Brian Colder
While considerable evidence supports the notion that lower-level interpretation of incoming sensory information is guided by top-down sensory expectations, less is known about the source of the sensory expectations or the mechanisms by which they are spread. Predictive coding theory proposes that sensory expectations flow down from higher-level association areas to lower-level sensory cortex. A separate theory of the role of prediction in cognition describes "emulations" as linked representations of potential actions and their associated expected sensation that are hypothesized to play an important role in many aspects of cognition...
2015: Frontiers in Computational Neuroscience
Henning Schroll, Andreas Horn, Christine Gröschel, Christof Brücke, Götz Lütjens, Gerd-Helge Schneider, Joachim K Krauss, Andrea A Kühn, Fred H Hamker
The ability to learn associations between stimuli, responses and rewards is a prerequisite for survival. Models of reinforcement learning suggest that the striatum, a basal ganglia input nucleus, vitally contributes to these learning processes. Our recently presented computational model predicts, first, that not only the striatum, but also the globus pallidus contributes to the learning (i.e., exploration) of stimulus-response associations based on rewards. Secondly, it predicts that the stable execution (i...
November 15, 2015: NeuroImage
Marcos Economides, Zeb Kurth-Nelson, Annika Lübbert, Marc Guitart-Masip, Raymond J Dolan
Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity...
September 2015: PLoS Computational Biology
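The model-free strategy contrasted in this abstract and the surrounding entries can be sketched as a tabular value update driven by reward prediction errors, with no model of the world. The two-armed bandit task and the parameter values below are an illustrative toy setup, not the task from the paper.

```python
import random

# Minimal sketch of model-free RL: action values are nudged toward
# received rewards by a prediction-error step. Task and parameters
# are hypothetical.

def q_update(q, action, reward, alpha=0.1):
    """One model-free update: move Q(action) toward the received reward."""
    q[action] += alpha * (reward - q[action])  # prediction-error step

random.seed(0)
q = [0.0, 0.0]
for _ in range(1000):
    action = random.randrange(2)  # explore both arms uniformly
    # Arm 0 pays off with probability 0.8, arm 1 with probability 0.2.
    reward = 1.0 if random.random() < (0.8 if action == 0 else 0.2) else 0.0
    q_update(q, action, reward)

print(q)  # arm 0 ends up valued well above arm 1
```

A model-based learner would instead estimate the payoff probabilities themselves and plan over them, which is more flexible but computationally heavier, matching the slow/deliberative versus fast/automatic distinction discussed above.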
Guillaume Viejo, Mehdi Khamassi, Andrea Brovelli, Benoît Girard
Current learning theory provides a comprehensive description of how humans and other animals learn, and places behavioral flexibility and automaticity at the heart of adaptive behaviors. However, the computations supporting the interactions between goal-directed and habitual decision-making systems are still poorly understood. Previous functional magnetic resonance imaging (fMRI) results suggest that the brain hosts complementary computations that may differentially support goal-directed and habitual processes in the form of a dynamical interplay rather than a serial recruitment of strategies...
2015: Frontiers in Behavioral Neuroscience
Sareh Zendehrouh
Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. Goal-directed behaviors, in contrast, are associated with model-based methods of RL, in which actions are selected using a model of the environment...
November 2015: Neural Networks: the Official Journal of the International Neural Network Society
Xue-Xin Wei, Alan A Stocker
Bayesian observer models provide a principled account of the fact that our perception of the world rarely matches physical reality. The standard explanation is that our percepts are biased toward our prior beliefs. However, reported psychophysical data suggest that this view may be simplistic. We propose a new model formulation based on efficient coding that is fully specified for any given natural stimulus distribution. The model makes two new and seemingly anti-Bayesian predictions. First, it predicts that perception is often biased away from an observer's prior beliefs...
October 2015: Nature Neuroscience
Johan Kwisthout, Iris van Rooij
Contrary to Friston's previous work, this paper describes free energy minimization using categorical probability distributions over discrete states. This alternative mathematical framework exposes a fundamental, yet unnoticed challenge for the free energy principle. When considering discrete state spaces one must specify their granularity, as the amount of information gain is defined over this state space. The more detailed this state space, the lower the precision of the predictions will be, and consequently, the higher the prediction errors...
2015: Cognitive Neuroscience
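The granularity point in this abstract can be illustrated with a standard information-theoretic fact: over a categorical distribution on n discrete states, a maximally uncertain prediction has entropy log2(n), so refining the state space raises the attainable prediction uncertainty. This is a generic sketch of the idea, not the paper's formal argument.

```python
import math

# Illustration of the granularity trade-off described above: the finer
# the discrete state space, the higher the entropy a prediction over it
# can carry. Purely illustrative.

def uniform_entropy_bits(n):
    """Entropy in bits of a uniform categorical distribution over n states."""
    return math.log2(n)

for n in (2, 4, 16):
    print(n, uniform_entropy_bits(n))  # 1.0, 2.0, 4.0 bits
```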
Russell N James
The solution to the exploration-exploitation dilemma presented essentially subsumes exploitation into an information-maximizing model. Such a single-maximization model is shown to be (1) more tractable than the initial dual-maximization dilemma, (2) useful in modeling information-maximizing subsystems, and (3) profitably applied in artificial simulations where exploration is costless. However, the model fails to resolve the dilemma in ethological or practical circumstances with objective outcomes, such as inclusive fitness, rather than information outcomes, such as lack of surprise...
2015: Cognitive Neuroscience
Yanping Huang, Rajesh P N Rao
Predictive coding is a unifying framework for understanding redundancy reduction and efficient coding in the nervous system. By transmitting only the unpredicted portions of an incoming sensory signal, predictive coding allows the nervous system to reduce redundancy and make full use of the limited dynamic range of neurons. Starting with the hypothesis of efficient coding as a design principle in the sensory system, predictive coding provides a functional explanation for a range of neural responses and many aspects of brain organization...
September 2011: Wiley Interdisciplinary Reviews. Cognitive Science
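The redundancy-reduction idea this review describes can be sketched with the simplest possible predictor: transmit only the residual between each sample and a prediction, and reconstruct the signal downstream. The "previous sample" predictor below is an illustrative choice, not the model from the review.

```python
# Sketch of predictive coding as redundancy reduction: transmit only the
# unpredicted portion (residual) of the signal. Predictor is illustrative.

def to_residuals(signal):
    """Predict each sample as the previous one; send only the errors."""
    prediction = 0.0
    residuals = []
    for x in signal:
        residuals.append(x - prediction)  # the unpredicted portion
        prediction = x
    return residuals

def from_residuals(residuals):
    """Reconstruct the original signal from the transmitted errors."""
    signal, prediction = [], 0.0
    for r in residuals:
        prediction += r
        signal.append(prediction)
    return signal

signal = [5.0, 5.0, 5.0, 6.0, 6.0]
res = to_residuals(signal)
print(res)                  # [5.0, 0.0, 0.0, 1.0, 0.0]
print(from_residuals(res))  # recovers the original signal
```

For a slowly varying signal most residuals are near zero, which is what lets a fixed neural dynamic range be spent on the informative, unpredicted part.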
Geoffrey Hinton
It is possible to learn multiple layers of non-linear features by backpropagating error derivatives through a feedforward neural network. This is a very effective learning procedure when there is a huge amount of labeled training data, but for many learning tasks very few labeled examples are available. In an effort to overcome the need for labeled data, several different generative models were developed that learned interesting features by modeling the higher order statistical structure of a set of input vectors...
August 2014: Cognitive Science
Neir Eshel, Michael Bukwich, Vinod Rao, Vivian Hemmelder, Ju Tian, Naoshige Uchida
Dopamine neurons are thought to facilitate learning by comparing actual and expected reward. Despite two decades of investigation, little is known about how this comparison is made. To determine how dopamine neurons calculate prediction error, we combined optogenetic manipulations with extracellular recordings in the ventral tegmental area while mice engaged in classical conditioning. Here we demonstrate, by manipulating the temporal expectation of reward, that dopamine neurons perform subtraction, a computation that is ideal for reinforcement learning but rarely observed in the brain...
September 10, 2015: Nature
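The subtractive computation reported in this abstract is exactly the prediction-error term used throughout reinforcement learning: the difference between received and expected reward. The numbers below are illustrative.

```python
# The subtraction reported above: a reward prediction error is the
# difference between received and expected reward. Values are made up.

def prediction_error(received, expected):
    """Subtractive prediction error, as in temporal-difference learning."""
    return received - expected

print(prediction_error(1.0, 0.6))  # positive: better than expected
print(prediction_error(0.0, 0.6))  # negative: worse than expected
```

A divisive comparison would instead scale with expectation; the paper's point is that the recorded dopamine responses match the subtractive form.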