Read by QxMD

Samuel Gershman

Mina Cikara, Samuel J Gershman
How does the brain infer social status? A new study by Kumaran et al. (2016) identifies a region of the medial prefrontal cortex that, in concert with the amygdala and hippocampus, subserves updating of probabilistic beliefs about the status of individuals in a social hierarchy.
December 7, 2016: Neuron
Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, Samuel J Gershman
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it...
November 24, 2016: Behavioral and Brain Sciences
Francisco Pereira, Samuel Gershman, Samuel Ritter, Matthew Botvinick
In this paper we carry out an extensive comparison of many off-the-shelf distributed semantic vector representations of words, for the purpose of making predictions about behavioural results or human annotations of data. In doing this comparison we also provide a guide for how vector similarity computations can be used to make such predictions, and introduce many resources available both in terms of datasets and of vector representations. Finally, we discuss the shortcomings of this approach and future research directions that might address them...
May 2016: Cognitive Neuropsychology
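The vector-similarity predictions described above typically reduce to cosine similarity between word embeddings. A minimal sketch, using toy 3-dimensional vectors (illustrative only; real embeddings come from models such as word2vec or GloVe and have hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy "embeddings", not taken from any real model.
vectors = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.2, 0.9],
}

# Predict which word people would judge as more similar to "cat".
sim_dog = cosine_similarity(vectors["cat"], vectors["dog"])
sim_car = cosine_similarity(vectors["cat"], vectors["car"])
```

In this setup, the higher cosine score for "dog" than for "car" is what would be compared against human similarity judgments.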
Samuel J Gershman, Nathaniel D Daw
We review the psychology and neuroscience of reinforcement learning (RL), which has experienced significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational: The simplicity of these tasks ignores important aspects of reinforcement learning in the real world: (a) State spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable...
September 2, 2016: Annual Review of Psychology
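The simple learning tasks reviewed above are typically modeled with temporal-difference (TD) learning. A minimal TD(0) sketch (state names and parameter values are hypothetical):

```python
def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.95):
    """One temporal-difference (TD(0)) update of a state-value table.

    delta is the reward prediction error, the quantity linked to
    dopaminergic signaling in the RL literature.
    """
    delta = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * delta
    return delta

V = {"s0": 0.0, "s1": 0.0}
# Repeated transitions s0 -> s1 with reward 1 drive V["s0"] toward 1.
for _ in range(50):
    td_update(V, "s0", 1.0, "s1")
```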
Samuel J Gershman, Tobias Gerstenberg, Chris L Baker, Fiery A Cushman
Human success and even survival depends on our ability to predict what others will do by guessing what they are thinking. If I accelerate, will he yield? If I propose, will she accept? If I confess, will they forgive? Psychologists call this capacity "theory of mind." According to current theories, we solve this problem by assuming that others are rational actors. That is, we assume that others design and execute efficient plans to achieve their goals, given their knowledge. But if this view is correct, then our theory of mind is startlingly incomplete...
2016: PloS One
Wouter Kool, Fiery A Cushman, Samuel J Gershman
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding...
August 2016: PLoS Computational Biology
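The model-free/model-based distinction in the abstract above can be made concrete in a few lines. A hedged sketch with made-up states and values: the model-free system reads a cached look-up table (cheap but possibly stale), while the model-based system computes the value by consulting a causal model of transitions and rewards (accurate but requiring more computation):

```python
# Model-free: cached action values in a look-up table, built by
# trial and error. Cheap to query, but can lag behind the true value.
q_table = {("start", "left"): 0.2, ("start", "right"): 0.7}

def model_free_value(state, action):
    return q_table[(state, action)]

# Model-based: plan through an explicit model of the environment.
transitions = {("start", "left"): "room_a", ("start", "right"): "room_b"}
rewards = {"room_a": 0.0, "room_b": 1.0}

def model_based_value(state, action):
    next_state = transitions[(state, action)]
    return rewards[next_state]
```

Here the cached value for ("start", "right") is 0.7 while planning reveals the true value of 1.0, illustrating how the fast system can be inaccurate where the slow system is not.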
Samuel J Gershman
The notion of "context" has played an important but complicated role in animal learning theory. Some studies have found that contextual stimuli (e.g., conditioning chamber) act much like punctate stimuli, entering into competition with other cues as would be predicted by standard associative learning theories. Other studies have found that contextual stimuli act more like "occasion setters," modulating the associative strength of punctate stimuli without themselves acquiring associative strength. Yet other studies have found that context is often largely ignored, resulting in transfer of performance across context changes...
July 14, 2016: Psychonomic Bulletin & Review
Samuel J Gershman
The "blessing of abstraction" refers to the observation that acquiring abstract knowledge sometimes proceeds more quickly than acquiring more specific knowledge. This observation can be formalized and reproduced by hierarchical Bayesian models. The key notion is that more abstract layers of the hierarchy have a larger "effective" sample size, because they combine information across multiple specific instances lower in the hierarchy. This notion relies on specific variables being relatively concentrated around the abstract "overhypothesis"...
March 2017: Quarterly Journal of Experimental Psychology: QJEP
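The "larger effective sample size" idea above has a simple numerical face: each specific category contributes only a few observations, but the abstract level pools observations across all categories. A toy sketch with invented data:

```python
# Hypothetical observations from three specific categories.
categories = {
    "A": [1.1, 0.9],
    "B": [1.2, 0.8],
    "C": [1.0, 1.0],
}

def mean(xs):
    return sum(xs) / len(xs)

# Each specific category sees only 2 data points...
specific_n = {c: len(xs) for c, xs in categories.items()}

# ...but the abstract level pools all 6, so its estimate of the
# shared "overhypothesis" (here, the common mean) stabilizes faster.
pooled = [x for xs in categories.values() for x in xs]
abstract_n = len(pooled)
abstract_mean = mean(pooled)
```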
D Gowanlock R Tervo, Joshua B Tenenbaum, Samuel J Gershman
Despite significant advances in neuroscience, the neural bases of intelligence remain poorly understood. Arguably the most elusive aspect of intelligence is the ability to make robust inferences that go far beyond one's experience. Animals categorize objects, learn to vocalize and may even estimate causal relationships - all in the face of data that is often ambiguous and sparse. Such inductive leaps are thought to result from the brain's ability to infer latent structure that governs the environment. However, we know little about the neural computations that underlie this ability...
April 2016: Current Opinion in Neurobiology
Samuel J Gershman
Two important ideas about associative learning have emerged in recent decades: (1) Animals are Bayesian learners, tracking their uncertainty about associations; and (2) animals acquire long-term reward predictions through reinforcement learning. Both of these ideas are normative, in the sense that they are derived from rational design principles. They are also descriptive, capturing a wide range of empirical phenomena that troubled earlier theories. This article describes a unifying framework encompassing Bayesian and reinforcement learning theories of associative learning...
November 2015: PLoS Computational Biology
Samuel J Gershman, Peter I Frazier, David M Blei
Latent feature models are widely used to decompose data into a small number of components. Bayesian nonparametric variants of these models, which use the Indian buffet process (IBP) as a prior over latent features, allow the number of features to be determined from the data. We present a generalization of the IBP, the distance dependent Indian buffet process (dd-IBP), for modeling non-exchangeable data. It relies on distances defined between data points, biasing nearby data to share more features. The choice of distance measure allows for many kinds of dependencies, including temporal and spatial...
February 2015: IEEE Transactions on Pattern Analysis and Machine Intelligence
Samuel J Gershman, Eric J Horvitz, Joshua B Tenenbaum
After growing up together, and mostly growing apart in the second half of the 20th century, the fields of artificial intelligence (AI), cognitive science, and neuroscience are reconverging on a shared view of the computational foundations of intelligence that promotes valuable cross-disciplinary exchanges on questions, methods, and results. We chart advances over the past several decades that address challenges of perception and action under uncertainty through the lens of computation. Advances include the development of representations and inferential procedures for large-scale probabilistic inference and machinery for enabling reflection and decisions about tradeoffs in effort, precision, and timeliness of computations...
July 17, 2015: Science
Samuel J Gershman, Catherine A Hartley
Using a laboratory analogue of learned fear (Pavlovian fear conditioning), we show that there is substantial heterogeneity across individuals in spontaneous recovery of fear following extinction training. We propose that this heterogeneity might stem from qualitative individual differences in the nature of extinction learning. Whereas some individuals tend to form a new memory during extinction, leaving their fear memory intact, others update the original threat association with new safety information, effectively unlearning the fear memory...
September 2015: Learning & Behavior
Yael Niv, Reka Daniel, Andra Geana, Samuel J Gershman, Yuan Chang Leong, Angela Radulescu, Robert C Wilson
In recent years, ideas from the computational field of reinforcement learning have revolutionized the study of learning in the brain, famously providing new, precise theories of how dopamine affects learning in the basal ganglia. However, reinforcement learning algorithms are notorious for not scaling well to multidimensional environments, as is required for real-world learning. We hypothesized that the brain naturally reduces the dimensionality of real-world problems to only those dimensions that are relevant to predicting reward, and conducted an experiment to assess by what algorithms and with what neural mechanisms this "representation learning" process is realized in humans...
May 27, 2015: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
Samuel J Gershman, Joshua B Tenenbaum, Frank Jäkel
Scenes filled with moving objects are often hierarchically organized: the motion of a migrating goose is nested within the flight pattern of its flock, the motion of a car is nested within the traffic pattern of other cars on the road, the motion of body parts is nested in the motion of the body. Humans perceive hierarchical structure even in stimuli with two or three moving dots. An influential theory of hierarchical motion perception holds that the visual system performs a "vector analysis" of moving objects, decomposing them into common and relative motions...
September 2016: Vision Research
Samuel J Gershman, Yael Niv
In reinforcement learning (RL), a decision maker searching for the most rewarding option is often faced with the question: What is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: How can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and we describe an equivalence between the Bayesian model and temporal difference learning algorithms that have been proposed as models of RL in humans and animals...
July 2015: Topics in Cognitive Science
Quentin J M Huys, Níall Lally, Paul Faulkner, Neir Eshel, Erich Seifritz, Samuel J Gershman, Peter Dayan, Jonathan P Roiser
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task...
March 10, 2015: Proceedings of the National Academy of Sciences of the United States of America
Samuel J Gershman
Studies of reinforcement learning have shown that humans learn differently in response to positive and negative reward prediction errors, a phenomenon that can be captured computationally by positing asymmetric learning rates. This asymmetry, motivated by neurobiological and cognitive considerations, has been invoked to explain learning differences across the lifespan as well as a range of psychiatric disorders. Recent theoretical work, motivated by normative considerations, has hypothesized that the learning rate asymmetry should be modulated by the distribution of rewards across the available options...
October 2015: Psychonomic Bulletin & Review
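The asymmetric-learning-rate idea above amounts to using different step sizes for positive and negative prediction errors. A minimal Rescorla-Wagner-style sketch (the specific rates 0.3 and 0.1 are arbitrary illustrative values):

```python
def asymmetric_update(value, reward, alpha_pos=0.3, alpha_neg=0.1):
    """Value update with separate learning rates for positive and
    negative reward prediction errors."""
    delta = reward - value          # reward prediction error
    alpha = alpha_pos if delta > 0 else alpha_neg
    return value + alpha * delta

v = 0.5
v_after_gain = asymmetric_update(v, 1.0)  # positive error: fast learning
v_after_loss = asymmetric_update(v, 0.0)  # negative error: slow learning
```

With these rates, an unexpected gain moves the value estimate three times farther than an equally surprising loss, the kind of asymmetry invoked to explain lifespan and clinical differences in learning.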
Samuel J Gershman, Angela Radulescu, Kenneth A Norman, Yael Niv
Psychophysical and neurophysiological studies have suggested that memory is not simply a carbon copy of our experience: Memories are modified or new memories are formed depending on the dynamic structure of our experience, and specifically, on how gradually or abruptly the world changes. We present a statistical theory of memory formation in a dynamic environment, based on a nonparametric generalization of the switching Kalman filter. We show that this theory can qualitatively account for several psychophysical and neural phenomena, and present results of a new visual memory experiment aimed at testing the theory directly...
November 2014: PLoS Computational Biology
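The switching Kalman filter in the abstract above generalizes the ordinary Kalman filter, whose core predict-update cycle is easy to sketch. A scalar version with hypothetical noise parameters (the full model in the paper additionally infers discrete switches between latent modes):

```python
def kalman_step(mean, var, obs, obs_var=1.0, process_var=0.1):
    """One predict-update cycle of a scalar Kalman filter tracking a
    slowly drifting latent quantity."""
    # Predict: latent state may drift, inflating uncertainty.
    var = var + process_var
    # Update: weigh the new observation by the Kalman gain.
    gain = var / (var + obs_var)
    mean = mean + gain * (obs - mean)
    var = (1.0 - gain) * var
    return mean, var

mean, var = 0.0, 1.0
for obs in [1.0, 1.0, 1.0]:
    mean, var = kalman_step(mean, var, obs)
```

Repeated consistent observations pull the posterior mean toward 1 while shrinking its variance; an abrupt change in the world would instead favor spawning a new latent mode, i.e., a new memory.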
Fabian A Soto, Samuel J Gershman, Yael Niv
How do we apply learning from one situation to a similar, but not identical, situation? The principles governing the extent to which animals and humans generalize what they have learned about certain stimuli to novel compounds containing those stimuli vary depending on a number of factors. Perhaps the best studied among these factors is the type of stimuli used to generate compounds. One prominent hypothesis is that different generalization principles apply depending on whether the stimuli in a compound are similar or dissimilar to each other...
July 2014: Psychological Review