Read by QxMD

Samuel Gershman

Francisco Pereira, Bin Lou, Brianna Pritchett, Samuel Ritter, Samuel J Gershman, Nancy Kanwisher, Matthew Botvinick, Evelina Fedorenko
Prior work decoding linguistic meaning from imaging data has been largely limited to concrete nouns, using similar stimuli for training and testing, from a relatively small number of semantic categories. Here we present a new approach for building a brain decoding system in which words and sentences are represented as vectors in a semantic space constructed from massive text corpora. By efficiently sampling this space to select training stimuli shown to subjects, we maximize the ability to generalize to new meanings from limited imaging data...
March 6, 2018: Nature Communications
Samuel J Gershman
The dilemma between information gathering (exploration) and reward seeking (exploitation) is a fundamental problem for reinforcement learning agents. How humans resolve this dilemma is still an open question, because experiments have provided equivocal evidence about the underlying algorithms used by humans. We show that two families of algorithms can be distinguished in terms of how uncertainty affects exploration. Algorithms based on uncertainty bonuses predict a change in response bias as a function of uncertainty, whereas algorithms based on sampling predict a change in response slope...
December 28, 2017: Cognition
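The distinction drawn in the abstract above can be illustrated with a minimal sketch (not the paper's own implementation; all parameters are hypothetical): an uncertainty-bonus rule adds uncertainty to the choice function's intercept (a bias shift), while a sampling rule like Thompson sampling lets total uncertainty flatten the choice curve's slope.

```python
import math

def p_choose_a_ucb(mu_a, mu_b, sigma_a, sigma_b, beta=1.0, gamma=1.0):
    """Uncertainty-bonus rule: the uncertainty difference shifts the
    bias (intercept) of a softmax choice function."""
    return 1.0 / (1.0 + math.exp(-(beta * (mu_a - mu_b) + gamma * (sigma_a - sigma_b))))

def p_choose_a_thompson(mu_a, mu_b, sigma_a, sigma_b):
    """Thompson sampling: choose A iff a posterior sample for A exceeds
    one for B; here computed in closed form for independent Gaussians.
    Total uncertainty flattens the slope of the choice curve."""
    total_sd = math.sqrt(sigma_a ** 2 + sigma_b ** 2)
    z = (mu_a - mu_b) / total_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
```

With equal means, the bonus rule is biased toward the more uncertain option (probability above 0.5), whereas Thompson sampling stays at 0.5; with a fixed mean difference, raising both uncertainties flattens the Thompson choice curve.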
Alexander J Millner, Samuel J Gershman, Matthew K Nock, Hanneke E M den Ouden
To survive in complex environments, animals need to have mechanisms to select effective actions quickly, with minimal computational costs. As perhaps the computationally most parsimonious of these systems, Pavlovian control accomplishes this by hardwiring specific stereotyped responses to certain classes of stimuli. It is well documented that appetitive cues initiate a Pavlovian bias toward vigorous approach; however, Pavlovian responses to aversive stimuli are less well understood. Gaining a deeper understanding of aversive Pavlovian responses, such as active avoidance, is important given the critical role these behaviors play in several psychiatric conditions...
December 15, 2017: Journal of Cognitive Neuroscience
Tommy C Blanchard, Samuel J Gershman
Balancing exploration and exploitation is a fundamental problem in reinforcement learning. Previous neuroimaging studies of the exploration-exploitation dilemma could not completely disentangle these two processes, making it difficult to unambiguously identify their neural signatures. We overcome this problem using a task in which subjects can either observe (pure exploration) or bet (pure exploitation). Insula and dorsal anterior cingulate cortex showed significantly greater activity on observe trials compared to bet trials, suggesting that these regions play a role in driving exploration...
February 2018: Cognitive, Affective & Behavioral Neuroscience
Eric Schulz, Joshua B Tenenbaum, David Duvenaud, Maarten Speekenbrink, Samuel J Gershman
How do people recognize and learn about complex functional structure? Taking inspiration from other areas of cognitive science, we propose that this is achieved by harnessing compositionality: complex structure is decomposed into simpler building blocks. We formalize this idea within the framework of Bayesian regression using a grammar over Gaussian process kernels, and compare this approach with other structure learning approaches. Participants consistently chose compositional (over non-compositional) extrapolations and interpolations of functions...
December 2017: Cognitive Psychology
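The compositional idea above can be sketched in a few lines (a toy illustration, not the paper's grammar; the base kernels and hyperparameters are illustrative): complex covariance structure is built by closing simple Gaussian process kernels under addition and multiplication.

```python
import numpy as np

# Base kernels (the grammar's building blocks); length scale is illustrative.
def rbf(x1, x2, length=1.0):
    return np.exp(-0.5 * (x1 - x2) ** 2 / length ** 2)

def linear(x1, x2):
    return x1 * x2

# The grammar closes kernels under addition and multiplication.
def add(k1, k2):
    return lambda x1, x2: k1(x1, x2) + k2(x1, x2)

def mul(k1, k2):
    return lambda x1, x2: k1(x1, x2) * k2(x1, x2)

# e.g. "linear trend plus smooth local variation"
composite = add(linear, rbf)
xs = [0.0, 1.0, 2.0]
K = np.array([[composite(a, b) for b in xs] for a in xs])
```

The resulting Gram matrix is symmetric, and each entry is just the sum of the two base kernels' entries, which is what makes the decomposition into building blocks interpretable.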
Kimberly L Stachenfeld, Matthew M Botvinick, Samuel J Gershman
A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation...
November 2017: Nature Neuroscience
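The predictive representation referred to above is the successor representation, which can be computed in closed form for a known policy; the sketch below uses a hypothetical three-state random-walk environment.

```python
import numpy as np

# Random-walk transition matrix on a 3-state chain (hypothetical environment).
T = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
gamma = 0.9

# Successor representation: expected discounted future state occupancies,
# M = (I - gamma * T)^-1, so value factorizes as V = M @ r.
M = np.linalg.inv(np.eye(3) - gamma * T)
r = np.array([0.0, 0.0, 1.0])  # reward only in the last state
V = M @ r
```

Because M depends on the transition structure (and hence the policy), values fall off with predictive distance from reward, which is the sense in which the representation is predictive rather than purely spatial.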
Samuel J Gershman
The hypothesis that the phasic dopamine response reports a reward prediction error has become deeply entrenched. However, dopamine neurons exhibit several notable deviations from this hypothesis. A coherent explanation for these deviations can be obtained by analyzing the dopamine response in terms of Bayesian reinforcement learning. The key idea is that prediction errors are modulated by probabilistic beliefs about the relationship between cues and outcomes, updated through Bayesian inference. This account can explain dopamine responses to inferred value in sensory preconditioning, the effects of cue preexposure (latent inhibition), and adaptive coding of prediction errors when rewards vary across orders of magnitude...
December 2017: Neural Computation
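The key idea above — prediction errors modulated by probabilistic beliefs updated through Bayesian inference — can be sketched with a one-dimensional Kalman-filter update (a simplified illustration, not the paper's full model; the prior and noise variance are hypothetical).

```python
# Beliefs about a cue->outcome weight (mean and variance) modulate how
# strongly a prediction error updates the weight.
def kalman_update(mean, var, reward, noise_var=1.0):
    delta = reward - mean            # prediction error
    gain = var / (var + noise_var)   # belief uncertainty scales the update
    return mean + gain * delta, (1.0 - gain) * var

mean, var = 0.0, 10.0  # vague prior: large updates early in learning
for _ in range(20):
    mean, var = kalman_update(mean, var, reward=1.0)
```

Early in learning the posterior variance is large, so the same prediction error drives a big update; as beliefs sharpen, the effective learning rate shrinks, which is one way belief-dependent modulation of dopamine-like error signals can arise.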
Evan M Russek, Ida Momennejad, Matthew M Botvinick, Samuel J Gershman, Nathaniel D Daw
Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning...
September 2017: PLoS Computational Biology
Samuel J Gershman
Rational analyses of memory suggest that retrievability of past experience depends on its usefulness for predicting the future: memory is adapted to the temporal structure of the environment. Recent research has enriched this view by applying it to semantic memory and reinforcement learning. This paper describes how multiple forms of memory can be linked via common predictive principles, possibly subserved by a shared neural substrate in the hippocampus. Predictive principles offer an explanation for a wide range of behavioral and neural phenomena, including semantic fluency, temporal contiguity effects in episodic memory, and the topological properties of hippocampal place cells...
October 2017: Current Opinion in Behavioral Sciences
Scott W Linderman, Samuel J Gershman
Computational neuroscience is, to first order, dominated by two approaches: the 'bottom-up' approach, which searches for statistical patterns in large-scale neural recordings, and the 'top-down' approach, which begins with a theory of computation and considers plausible neural implementations. While this division is not clear-cut, we argue that these approaches should be much more intimately linked. From a Bayesian perspective, computational theories provide constrained prior distributions on neural data, albeit highly sophisticated ones...
October 2017: Current Opinion in Neurobiology
Wouter Kool, Samuel J Gershman, Fiery A Cushman
Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits...
July 1, 2017: Psychological Science
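The proposed arbitration scheme can be reduced to a minimal cost-benefit comparison (a deliberately stripped-down sketch; the payoff and cost values are illustrative, not from the paper).

```python
# Pick model-based control only when its accuracy advantage outweighs
# its extra computational cost.
def arbitrate(reward_mb, reward_mf, cost_mb, cost_mf=0.0):
    """Return 'model-based' or 'model-free' by comparing net payoffs."""
    if reward_mb - cost_mb > reward_mf - cost_mf:
        return "model-based"
    return "model-free"
```

On this scheme the same accuracy advantage can favor either system depending on how costly planning is, which is the sense in which control allocation is task-specific.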
Samuel J Gershman, Jimmy Zhou, Cody Kommers
Imagination enables us not only to transcend reality but also to learn about it. In the context of reinforcement learning, an agent can rationally update its value estimates by simulating an internal model of the environment, provided that the model is accurate. In a series of sequential decision-making experiments, we investigated the impact of imaginative simulation on subsequent decisions. We found that imagination can cause people to pursue imagined paths, even when these paths are suboptimal. This bias is systematically related to participants' optimism about how much reward they expect to receive along imagined paths; providing feedback strongly attenuates the effect...
December 2017: Journal of Cognitive Neuroscience
Ishita Dasgupta, Eric Schulz, Samuel J Gershman
Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? In particular, why do humans make near-rational inferences in some natural domains where the candidate hypotheses are explicitly available, whereas tasks in similar domains requiring the self-generation of hypotheses produce systematic deviations from rational inference? We propose that these deviations arise from algorithmic processes approximating Bayes' rule. Specifically, in our account, hypotheses are generated stochastically from a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior...
August 2017: Cognitive Psychology
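The sampling account above can be illustrated with a toy Monte Carlo approximation (the hypothesis space and probabilities are invented for illustration): with many samples the estimate tracks the true posterior, but with the few hypotheses people can self-generate, rare hypotheses are often missed entirely, producing systematic bias.

```python
import random

def mc_posterior_estimate(posterior, n_samples, rng):
    """Approximate a posterior over hypotheses by sampling from it."""
    hyps = list(posterior)
    weights = [posterior[h] for h in hyps]
    samples = rng.choices(hyps, weights=weights, k=n_samples)
    return {h: samples.count(h) / n_samples for h in hyps}

rng = random.Random(0)
true_posterior = {"common": 0.95, "rare": 0.05}
few = mc_posterior_estimate(true_posterior, 3, rng)      # self-generation regime
many = mc_posterior_estimate(true_posterior, 10000, rng)  # explicit-hypotheses regime
```

With only three samples the estimated probabilities are restricted to coarse fractions (0, 1/3, 2/3, 1), so a 5%-probability hypothesis is usually assigned probability zero — a deviation from the ideal that vanishes as the sample count grows.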
Samuel J Gershman, Marie-H Monfils, Kenneth A Norman, Yael Niv
No abstract text is available yet for this article.
May 22, 2017: ELife
Samuel J Gershman, Marie-H Monfils, Kenneth A Norman, Yael Niv
Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations...
March 15, 2017: ELife
Samuel J Gershman, Hillard Thomas Pouncy, Hyowon Gweon
We routinely observe others' choices and use them to guide our own. Whose choices influence us more, and why? Prior work has focused on the effect of perceived similarity between two individuals (self and others), such as the degree of overlap in past choices or explicitly recognizable group affiliations. In the real world, however, any dyadic relationship is part of a more complex social structure involving multiple social groups that are not directly observable. Here we suggest that human learners go beyond dyadic similarities in choice behaviors or explicit group memberships; they infer the structure of social influence by grouping individuals (including themselves) based on choices, and they use these groups to decide whose choices to follow...
April 2017: Cognitive Science
Clara Kwon Starkweather, Benedicte M Babayan, Naoshige Uchida, Samuel J Gershman
Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a 'belief state')...
April 2017: Nature Neuroscience
Mina Cikara, Samuel J Gershman
How does the brain infer social status? A new study by Kumaran et al. (2016) identifies a region of the medial prefrontal cortex that, in concert with the amygdala and hippocampus, subserves updating of probabilistic beliefs about the status of individuals in a social hierarchy.
December 7, 2016: Neuron
Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, Samuel J Gershman
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it...
November 24, 2016: Behavioral and Brain Sciences
Francisco Pereira, Samuel Gershman, Samuel Ritter, Matthew Botvinick
In this paper we carry out an extensive comparison of many off-the-shelf distributed semantic vector representations of words, for the purpose of making predictions about behavioural results or human annotations of data. In doing this comparison we also provide a guide for how vector similarity computations can be used to make such predictions, and introduce many resources available both in terms of datasets and of vector representations. Finally, we discuss the shortcomings of this approach and future research directions that might address them...
May 2016: Cognitive Neuropsychology