Read by QxMD

Samuel J Gershman
Rational analyses of memory suggest that retrievability of past experience depends on its usefulness for predicting the future: memory is adapted to the temporal structure of the environment. Recent research has enriched this view by applying it to semantic memory and reinforcement learning. This paper describes how multiple forms of memory can be linked via common predictive principles, possibly subserved by a shared neural substrate in the hippocampus. Predictive principles offer an explanation for a wide range of behavioral and neural phenomena, including semantic fluency, temporal contiguity effects in episodic memory, and the topological properties of hippocampal place cells...
October 2017: Current Opinion in Behavioral Sciences
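The predictive principle in this abstract is often formalized with the successor representation (SR), in which each state is encoded by the discounted future occupancy of its successors, so that value is a simple dot product of the SR with a reward vector. A minimal sketch (the three-state chain, transitions, and reward placement below are invented for illustration, not taken from the paper):

```python
gamma = 0.9
T = {0: 1, 1: 2, 2: 2}  # toy deterministic chain: 0 -> 1 -> 2 (absorbing)

def sr_row(start, n_states=3, horizon=500):
    """Expected discounted future occupancy of each state, starting from `start`."""
    occ = [0.0] * n_states
    s, discount = start, 1.0
    for _ in range(horizon):
        occ[s] += discount
        s = T[s]
        discount *= gamma
    return occ

# Value factorizes as SR dot reward, so a change in reward requires no
# re-learning of the predictive map itself.
r = [0.0, 0.0, 1.0]
V0 = sum(m * ri for m, ri in zip(sr_row(0), r))  # ~= 0.9**2 / (1 - 0.9)
```

The factorization is what makes the representation "predictive": the map of where you will be is learned once and reused under any reward function.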
Scott W Linderman, Samuel J Gershman
Computational neuroscience is, to first order, dominated by two approaches: the 'bottom-up' approach, which searches for statistical patterns in large-scale neural recordings, and the 'top-down' approach, which begins with a theory of computation and considers plausible neural implementations. While this division is not clear-cut, we argue that these approaches should be much more intimately linked. From a Bayesian perspective, computational theories provide constrained prior distributions on neural data, albeit highly sophisticated ones...
July 18, 2017: Current Opinion in Neurobiology
Wouter Kool, Samuel J Gershman, Fiery A Cushman
Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits...
July 1, 2017: Psychological Science
Samuel J Gershman, Jimmy Zhou, Cody Kommers
Imagination enables us not only to transcend reality but also to learn about it. In the context of reinforcement learning, an agent can rationally update its value estimates by simulating an internal model of the environment, provided that the model is accurate. In a series of sequential decision-making experiments, we investigated the impact of imaginative simulation on subsequent decisions. We found that imagination can cause people to pursue imagined paths, even when these paths are suboptimal. This bias is systematically related to participants' optimism about how much reward they expect to receive along imagined paths; providing feedback strongly attenuates the effect...
July 14, 2017: Journal of Cognitive Neuroscience
Ishita Dasgupta, Eric Schulz, Samuel J Gershman
Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? In particular, why do humans make near-rational inferences in some natural domains where the candidate hypotheses are explicitly available, whereas tasks in similar domains requiring the self-generation of hypotheses produce systematic deviations from rational inference? We propose that these deviations arise from algorithmic processes approximating Bayes' rule. Specifically, in our account, hypotheses are generated stochastically from a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior...
June 3, 2017: Cognitive Psychology
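The sampling account described here can be illustrated with a toy Monte Carlo approximation (the coin-bias setup below is a made-up example, not the paper's task): hypotheses are drawn stochastically and weighted by their likelihood; few samples yield noisy estimates, while many samples approach the exact Bayesian posterior.

```python
import random

random.seed(0)
hypotheses = [i / 10 for i in range(1, 10)]  # candidate coin biases (toy domain)

def likelihood(theta, heads=7, n=10):
    """Likelihood of a hypothetical datum: 7 heads in 10 flips."""
    return theta ** heads * (1 - theta) ** (n - heads)

def mc_posterior_mean(n_samples):
    """Sample hypotheses stochastically, weight by likelihood (Monte Carlo)."""
    sampled = [random.choice(hypotheses) for _ in range(n_samples)]
    weights = [likelihood(h) for h in sampled]
    return sum(h * w for h, w in zip(sampled, weights)) / sum(weights)

few = mc_posterior_mean(3)      # few samples: noisy, depends on what was sampled
many = mc_posterior_mean(5000)  # many samples: approaches the exact posterior mean
```

With only a handful of self-generated hypotheses, the estimate is at the mercy of which hypotheses happened to come to mind, which is the proposed source of systematic bias.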
Samuel J Gershman, Marie-H Monfils, Kenneth A Norman, Yael Niv
No abstract text is available yet for this article.
May 22, 2017: ELife
Samuel J Gershman, Marie-H Monfils, Kenneth A Norman, Yael Niv
Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations...
March 15, 2017: ELife
Samuel J Gershman, Hillard Thomas Pouncy, Hyowon Gweon
We routinely observe others' choices and use them to guide our own. Whose choices influence us more, and why? Prior work has focused on the effect of perceived similarity between two individuals (self and others), such as the degree of overlap in past choices or explicitly recognizable group affiliations. In the real world, however, any dyadic relationship is part of a more complex social structure involving multiple social groups that are not directly observable. Here we suggest that human learners go beyond dyadic similarities in choice behaviors or explicit group memberships; they infer the structure of social influence by grouping individuals (including themselves) based on choices, and they use these groups to decide whose choices to follow...
March 13, 2017: Cognitive Science
Clara Kwon Starkweather, Benedicte M Babayan, Naoshige Uchida, Samuel J Gershman
Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a 'belief state')...
April 2017: Nature Neuroscience
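The classical TD mechanism mentioned in this abstract can be sketched in a few lines (the toy task below, with a cue at t=0 predicting reward at t=2, is invented for illustration): the reward prediction error, actual minus expected reward, drives value toward the features that track elapsed time.

```python
# TD(0) sketch: serial time-step features, reward delivered at t=2.
alpha, gamma = 0.1, 1.0
V = [0.0, 0.0, 0.0, 0.0]       # value estimate at each time step
rewards = [0.0, 0.0, 1.0, 0.0]

for trial in range(200):
    for t in range(3):
        rpe = rewards[t] + gamma * V[t + 1] - V[t]  # actual minus expected
        V[t] += alpha * rpe
# After learning, value has propagated back to the earliest predictive
# time step, and reward delivery itself is no longer surprising.
```

This is the standard observable-stimulus picture; the paper's point is that under state ambiguity the same update would operate on an inferred belief state rather than on directly observed time-step features.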
Mina Cikara, Samuel J Gershman
How does the brain infer social status? A new study by Kumaran et al. (2016) identifies a region of the medial prefrontal cortex that, in concert with the amygdala and hippocampus, subserves updating of probabilistic beliefs about the status of individuals in a social hierarchy.
December 7, 2016: Neuron
Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, Samuel J Gershman
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it...
November 24, 2016: Behavioral and Brain Sciences
Francisco Pereira, Samuel Gershman, Samuel Ritter, Matthew Botvinick
In this paper we carry out an extensive comparison of many off-the-shelf distributed semantic vector representations of words, for the purpose of making predictions about behavioural results or human annotations of data. In doing this comparison we also provide a guide for how vector similarity computations can be used to make such predictions, and introduce many resources available both in terms of datasets and of vector representations. Finally, we discuss the shortcomings of this approach and future research directions that might address them...
May 2016: Cognitive Neuropsychology
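As a concrete toy illustration of the vector-similarity computations the paper surveys, cosine similarity between distributed word vectors can stand in for predicted relatedness judgments (the 4-dimensional vectors below are invented; real representations are learned from large corpora):

```python
import math

# Made-up distributed vectors; real ones would come from a trained model.
vectors = {
    "cat": [0.8, 0.1, 0.6, 0.0],
    "dog": [0.7, 0.2, 0.5, 0.1],
    "car": [0.0, 0.9, 0.1, 0.7],
}

def cosine(u, v):
    """Cosine similarity: dot product normalized by vector lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Predicted judgment: cat-dog relatedness should exceed cat-car.
sim_cat_dog = cosine(vectors["cat"], vectors["dog"])
sim_cat_car = cosine(vectors["cat"], vectors["car"])
```

The prediction step is then a comparison of such similarity scores against human ratings or annotations.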
Samuel J Gershman, Nathaniel D Daw
We review the psychology and neuroscience of reinforcement learning (RL), which has experienced significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational: The simplicity of these tasks ignores important aspects of reinforcement learning in the real world: (a) State spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable...
January 3, 2017: Annual Review of Psychology
Samuel J Gershman, Tobias Gerstenberg, Chris L Baker, Fiery A Cushman
Human success and even survival depend on our ability to predict what others will do by guessing what they are thinking. If I accelerate, will he yield? If I propose, will she accept? If I confess, will they forgive? Psychologists call this capacity "theory of mind." According to current theories, we solve this problem by assuming that others are rational actors. That is, we assume that others design and execute efficient plans to achieve their goals, given their knowledge. But if this view is correct, then our theory of mind is startlingly incomplete...
2016: PloS One
Wouter Kool, Fiery A Cushman, Samuel J Gershman
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding...
August 2016: PLoS Computational Biology
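The look-up-table picture of the model-free system can be made concrete with a miniature bandit simulation. This is a hedged sketch: the payoff probabilities, learning rate, and epsilon-greedy choice rule below are assumptions of the example, not the paper's task.

```python
import random

random.seed(1)
alpha = 0.1
Q = {"left": 0.0, "right": 0.0}       # the look-up table of action values
payoff = {"left": 0.2, "right": 0.8}  # hidden from the learner

for trial in range(2000):
    # Epsilon-greedy: mostly exploit the cached values, sometimes explore.
    a = max(Q, key=Q.get) if random.random() > 0.1 else random.choice(list(Q))
    r = 1.0 if random.random() < payoff[a] else 0.0
    Q[a] += alpha * (r - Q[a])        # cheap cached update, no planning
```

Each choice here only inspects the table, which is what makes the strategy computationally cheap; a model-based strategy would instead evaluate an explicit model of the payoffs, trading effort for accuracy.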
Samuel J Gershman
The notion of "context" has played an important but complicated role in animal learning theory. Some studies have found that contextual stimuli (e.g., conditioning chamber) act much like punctate stimuli, entering into competition with other cues as would be predicted by standard associative learning theories. Other studies have found that contextual stimuli act more like "occasion setters," modulating the associative strength of punctate stimuli without themselves acquiring associative strength. Yet other studies have found that context is often largely ignored, resulting in transfer of performance across context changes...
April 2017: Psychonomic Bulletin & Review
Samuel J Gershman
The "blessing of abstraction" refers to the observation that acquiring abstract knowledge sometimes proceeds more quickly than acquiring more specific knowledge. This observation can be formalized and reproduced by hierarchical Bayesian models. The key notion is that more abstract layers of the hierarchy have a larger "effective" sample size, because they combine information across multiple specific instances lower in the hierarchy. This notion relies on specific variables being relatively concentrated around the abstract "overhypothesis"...
March 2017: Quarterly Journal of Experimental Psychology: QJEP
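The "effective sample size" intuition can be shown with a toy calculation (the three categories and their observations below are invented): each specific estimate rests on only two observations, while the abstract-level estimate pools all six and therefore stabilizes faster.

```python
# Three specific categories, two observations each (made-up data).
categories = {
    "A": [0.9, 1.1],
    "B": [1.8, 2.2],
    "C": [3.1, 2.9],
}
# Each specific estimate uses only its own 2 observations...
specific_means = {k: sum(v) / len(v) for k, v in categories.items()}
# ...but the abstract ("overhypothesis") level pools all 6 of them.
all_obs = [x for v in categories.values() for x in v]
abstract_mean = sum(all_obs) / len(all_obs)
```

The paper's caveat is visible even here: pooling only helps insofar as the specific categories are reasonably concentrated around the abstract value.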
D Gowanlock R Tervo, Joshua B Tenenbaum, Samuel J Gershman
Despite significant advances in neuroscience, the neural bases of intelligence remain poorly understood. Arguably the most elusive aspect of intelligence is the ability to make robust inferences that go far beyond one's experience. Animals categorize objects, learn to vocalize and may even estimate causal relationships - all in the face of data that is often ambiguous and sparse. Such inductive leaps are thought to result from the brain's ability to infer latent structure that governs the environment. However, we know little about the neural computations that underlie this ability...
April 2016: Current Opinion in Neurobiology
Samuel J Gershman
Two important ideas about associative learning have emerged in recent decades: (1) Animals are Bayesian learners, tracking their uncertainty about associations; and (2) animals acquire long-term reward predictions through reinforcement learning. Both of these ideas are normative, in the sense that they are derived from rational design principles. They are also descriptive, capturing a wide range of empirical phenomena that troubled earlier theories. This article describes a unifying framework encompassing Bayesian and reinforcement learning theories of associative learning...
November 2015: PLoS Computational Biology
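The two ideas in this abstract meet in uncertainty-tracking prediction-error updates. A minimal single-cue version (the prior variance, noise level, and reward sequence below are assumptions of this sketch, not the paper's model specification) looks like a Kalman filter whose gain plays the role of a declining learning rate:

```python
# Single-cue Bayesian associative learner: track both a weight estimate
# and its uncertainty, so the learning rate (gain) falls over trials.
w, var = 0.0, 1.0   # prior mean and variance of the association
noise = 0.5         # assumed observation-noise variance
gains = []
for r in [1.0, 1.0, 1.0, 1.0]:   # repeated cue-reward pairings
    gain = var / (var + noise)   # uncertainty-weighted learning rate
    w += gain * (r - w)          # prediction-error update
    var = (1 - gain) * var       # uncertainty shrinks with each trial
    gains.append(gain)
```

The update has the familiar error-driven form, but the step size is derived from uncertainty rather than fixed, which is one way the Bayesian and reinforcement-learning views connect.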
Samuel J Gershman, Peter I Frazier, David M Blei
Latent feature models are widely used to decompose data into a small number of components. Bayesian nonparametric variants of these models, which use the Indian buffet process (IBP) as a prior over latent features, allow the number of features to be determined from the data. We present a generalization of the IBP, the distance dependent Indian buffet process (dd-IBP), for modeling non-exchangeable data. It relies on distances defined between data points, biasing nearby data to share more features. The choice of distance measure allows for many kinds of dependencies, including temporal and spatial...
February 2015: IEEE Transactions on Pattern Analysis and Machine Intelligence
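For intuition about the exchangeable IBP that the dd-IBP generalizes, here is a hedged sketch of sampling from the standard IBP prior in its "Indian buffet" metaphor: customer n takes each existing dish with probability proportional to its popularity, then tries a Poisson(alpha/n) number of new dishes. The concentration alpha = 1 and the small Poisson sampler are choices of this example.

```python
import math
import random

random.seed(0)
alpha = 1.0  # IBP concentration parameter (example choice)

def poisson(lam):
    """Knuth's multiplicative Poisson sampler; fine for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

counts = []  # counts[k] = number of customers who took dish k
Z = []       # binary feature assignments, one (ragged) row per customer
for n in range(1, 11):                     # 10 customers
    row = [1 if random.random() < counts[k] / n else 0
           for k in range(len(counts))]    # popular dishes are re-sampled
    for _ in range(poisson(alpha / n)):    # brand-new dishes
        row.append(1)
        counts.append(0)
    for k, z in enumerate(row):
        counts[k] += z
    Z.append(row)
```

The dd-IBP modifies the "popularity" term so that feature sharing is biased by distances between data points rather than by exchangeable counts alone.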