Search results for: samuel gershman

https://www.readbyqxmd.com/read/28586634/where-do-hypotheses-come-from
#1
Ishita Dasgupta, Eric Schulz, Samuel J Gershman
Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? In particular, why do humans make near-rational inferences in some natural domains where candidate hypotheses are explicitly available, whereas tasks in similar domains that require self-generation of hypotheses produce systematic deviations from rational inference? We propose that these deviations arise from algorithmic processes approximating Bayes' rule. Specifically, in our account, hypotheses are generated stochastically from a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior...
June 3, 2017: Cognitive Psychology
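
A minimal Python sketch of the sampling account in #1 (my own toy illustration, not the authors' model): hypotheses are drawn stochastically from the prior and weighted by likelihood, forming a Monte Carlo approximation of the posterior. With few samples the approximation is systematically biased, which is the kind of deviation the paper has in mind. The coin-flip problem and all numbers are invented.

    import random

    def sampled_posterior(hypotheses, prior, likelihood, data, n_samples):
        # Draw hypotheses from the prior, then weight each draw by its
        # likelihood; normalized weights approximate the true posterior.
        draws = random.choices(hypotheses,
                               weights=[prior[h] for h in hypotheses],
                               k=n_samples)
        weights = {h: 0.0 for h in hypotheses}
        for h in draws:
            weights[h] += likelihood(h, data)
        total = sum(weights.values())
        return {h: w / total for h, w in weights.items()} if total else weights

    # Toy problem: is a coin fair or biased, given 8 heads out of 10 flips?
    def lik(h, heads):
        p = 0.5 if h == "fair" else 0.8
        return p ** heads * (1 - p) ** (10 - heads)

    print(sampled_posterior(["fair", "biased"],
                            {"fair": 0.5, "biased": 0.5}, lik, 8, n_samples=4))

With n_samples=4, a hypothesis can fail to be generated at all, so repeated runs scatter around the exact posterior; as n_samples grows, the estimate converges to it.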
https://www.readbyqxmd.com/read/28530550/correction-the-computational-nature-of-memory-modification
#2
Samuel J Gershman, Marie-H Monfils, Kenneth A Norman, Yael Niv
No abstract text is available yet for this article.
May 22, 2017: eLife
https://www.readbyqxmd.com/read/28294944/the-computational-nature-of-memory-modification
#3
Samuel J Gershman, Marie-H Monfils, Kenneth A Norman, Yael Niv
Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations...
March 15, 2017: eLife
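
The structure-learning step in #3 segments experience into latent causes. Below is a minimal sketch of one common ingredient, a Chinese restaurant process prior over cluster assignments (an assumed simplification; the full model also scores how well each cause explains the current sensory observations):

    import random

    def crp_assign(counts, alpha):
        # Sample a latent-cause index for a new observation: join an
        # existing cause in proportion to its popularity, or start a
        # new one with probability alpha / (n + alpha).
        n = sum(counts)
        probs = [c / (n + alpha) for c in counts] + [alpha / (n + alpha)]
        r, cum = random.random(), 0.0
        for k, p in enumerate(probs):
            cum += p
            if r < cum:
                return k
        return len(counts)

    counts = [3, 1]                       # two existing latent causes
    print(crp_assign(counts, alpha=1.0))  # 0, 1, or 2 (2 = new cause)

In the theory's terms, a new memory is formed exactly when a new index is inferred; otherwise the matching old trace becomes eligible for modification.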
https://www.readbyqxmd.com/read/28294384/learning-the-structure-of-social-influence
#4
Samuel J Gershman, Hillard Thomas Pouncy, Hyowon Gweon
We routinely observe others' choices and use them to guide our own. Whose choices influence us more, and why? Prior work has focused on the effect of perceived similarity between two individuals (self and others), such as the degree of overlap in past choices or explicitly recognizable group affiliations. In the real world, however, any dyadic relationship is part of a more complex social structure involving multiple social groups that are not directly observable. Here we suggest that human learners go beyond dyadic similarities in choice behaviors or explicit group memberships; they infer the structure of social influence by grouping individuals (including themselves) based on choices, and they use these groups to decide whose choices to follow...
March 13, 2017: Cognitive Science
https://www.readbyqxmd.com/read/28263301/dopamine-reward-prediction-errors-reflect-hidden-state-inference-across-time
#5
Clara Kwon Starkweather, Benedicte M Babayan, Naoshige Uchida, Samuel J Gershman
Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a 'belief state')...
April 2017: Nature Neuroscience
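
The belief-state proposal in #5 is easy to state in code. A minimal sketch (illustrative, not the paper's fitted model): value is linear in a probability distribution over hidden states rather than in a single observed state, and the TD error drives learning.

    import numpy as np

    def td_update(w, belief, next_belief, reward, gamma=0.98, lr=0.1):
        # One TD(0) step with values linear in belief-state features.
        v, v_next = w @ belief, w @ next_belief
        rpe = reward + gamma * v_next - v    # reward prediction error (RPE)
        w += lr * rpe * belief               # credit states by belief weight
        return w, rpe

    w = np.zeros(3)                          # one weight per hidden state
    b = np.array([0.7, 0.2, 0.1])            # belief before the observation
    b_next = np.array([0.1, 0.6, 0.3])       # belief after
    w, rpe = td_update(w, b, b_next, reward=1.0)
    print(round(rpe, 3), w)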
https://www.readbyqxmd.com/read/27930908/medial-prefrontal-cortex-updates-its-status
#6
Mina Cikara, Samuel J Gershman
How does the brain infer social status? A new study by Kumaran et al. (2016) identifies a region of the medial prefrontal cortex that, in concert with the amygdala and hippocampus, subserves updating of probabilistic beliefs about the status of individuals in a social hierarchy.
December 7, 2016: Neuron
https://www.readbyqxmd.com/read/27881212/building-machines-that-learn-and-think-like-people
#7
Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, Samuel J Gershman
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it...
November 24, 2016: Behavioral and Brain Sciences
https://www.readbyqxmd.com/read/27686110/a-comparative-evaluation-of-off-the-shelf-distributed-semantic-representations-for-modelling-behavioural-data
#8
COMPARATIVE STUDY
Francisco Pereira, Samuel Gershman, Samuel Ritter, Matthew Botvinick
In this paper we carry out an extensive comparison of many off-the-shelf distributed semantic vector representations of words, for the purpose of making predictions about behavioural results or human annotations of data. In doing this comparison, we also provide a guide for how vector similarity computations can be used to make such predictions, and we introduce many available resources, both datasets and vector representations. Finally, we discuss the shortcomings of this approach and future research directions that might address them...
May 2016: Cognitive Neuropsychology
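
The core computation in #8 is similarity between word vectors. A minimal sketch with invented three-dimensional vectors (real work would load pretrained embeddings such as word2vec or GloVe):

    import numpy as np

    def cosine(u, v):
        # Cosine similarity: the standard score used to predict judgments.
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    vectors = {                        # stand-ins for pretrained embeddings
        "cat": np.array([0.8, 0.1, 0.3]),
        "dog": np.array([0.7, 0.2, 0.4]),
        "car": np.array([0.1, 0.9, 0.2]),
    }
    print(cosine(vectors["cat"], vectors["dog"]))  # higher: related words
    print(cosine(vectors["cat"], vectors["car"]))  # lower: unrelated words

Predictions about behavioural data then reduce to correlating such similarity scores with human ratings or annotations.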
https://www.readbyqxmd.com/read/27618944/reinforcement-learning-and-episodic-memory-in-humans-and-animals-an-integrative-framework
#9
REVIEW
Samuel J Gershman, Nathaniel D Daw
We review the psychology and neuroscience of reinforcement learning (RL), which has experienced significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational. The simplicity of these tasks ignores important aspects of reinforcement learning in the real world: (a) state spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable...
January 3, 2017: Annual Review of Psychology
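
One way to read the episodic proposal in #9 is value estimation directly from stored episodes. A minimal sketch (my own gloss on the framework, with a made-up Gaussian similarity kernel):

    import numpy as np

    def episodic_value(state, episodes, kernel_width=1.0):
        # episodes: list of (state_vector, observed_return) pairs.
        # Value = similarity-weighted average of remembered returns.
        sims = np.array([np.exp(-np.linalg.norm(state - s) ** 2 / kernel_width)
                         for s, _ in episodes])
        returns = np.array([g for _, g in episodes])
        return float(sims @ returns / sims.sum())

    episodes = [(np.array([0.0, 0.0]), 1.0),   # a rewarded episode
                (np.array([2.0, 2.0]), 0.0)]   # an unrewarded one
    print(episodic_value(np.array([0.2, 0.1]), episodes))  # close to 1.0

This addresses the sparse-data challenge the abstract raises: a single remembered episode can inform a state that incremental learning has never visited.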
https://www.readbyqxmd.com/read/27584041/plans-habits-and-theory-of-mind
#10
Samuel J Gershman, Tobias Gerstenberg, Chris L Baker, Fiery A Cushman
Human success and even survival depends on our ability to predict what others will do by guessing what they are thinking. If I accelerate, will he yield? If I propose, will she accept? If I confess, will they forgive? Psychologists call this capacity "theory of mind." According to current theories, we solve this problem by assuming that others are rational actors. That is, we assume that others design and execute efficient plans to achieve their goals, given their knowledge. But if this view is correct, then our theory of mind is startlingly incomplete...
2016: PloS One
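
The rational-actor assumption in #10 can be inverted with Bayes' rule: infer the goal that makes the observed action look most efficient. A minimal sketch (an illustrative softmax model, not the paper's experiments; all names and utilities are invented):

    import numpy as np

    def infer_goal(action, utilities, beta=2.0):
        # utilities[g][a]: utility of action a if the agent's goal is g.
        # P(goal | action) is proportional to P(action | goal) * P(goal),
        # with a softmax (approximately rational) choice rule and a
        # uniform prior over goals.
        post = {}
        for g in utilities:
            u = np.array(list(utilities[g].values()))
            post[g] = np.exp(beta * utilities[g][action]) / np.exp(beta * u).sum()
        z = sum(post.values())
        return {g: p / z for g, p in post.items()}

    utilities = {"wants coffee": {"turn left": 1.0, "turn right": 0.0},
                 "wants tea":    {"turn left": 0.0, "turn right": 1.0}}
    print(infer_goal("turn left", utilities))   # favors "wants coffee"

The abstract's closing point is that this rational-actor picture, on its own, is startlingly incomplete.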
https://www.readbyqxmd.com/read/27564094/when-does-model-based-control-pay-off
#11
Wouter Kool, Fiery A Cushman, Samuel J Gershman
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding...
August 2016: PLoS Computational Biology
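
A minimal toy contrast of the two strategies in #11 (invented environment, not the paper's task):

    # Model-free: read a cached value from a look-up table (cheap, can be
    # stale). Model-based: simulate a known model of the world (accurate,
    # but demands computation).
    Q = {("s0", "a1"): 0.4, ("s0", "a2"): 0.6}    # learned look-up table
    T = {("s0", "a1"): "s1", ("s0", "a2"): "s2"}  # known transitions
    R = {"s1": 1.0, "s2": 0.0}                    # known rewards

    def model_free(state, actions):
        return max(actions, key=lambda a: Q[(state, a)])      # table lookup

    def model_based(state, actions):
        return max(actions, key=lambda a: R[T[(state, a)]])   # plan ahead

    print(model_free("s0", ["a1", "a2"]))   # a2: the cached values are stale
    print(model_based("s0", ["a1", "a2"]))  # a1: planning finds the reward

The paper's titular question is when that extra accuracy is worth the extra cognitive cost.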
https://www.readbyqxmd.com/read/27418259/context-dependent-learning-and-causal-structure
#12
Samuel J Gershman
The notion of "context" has played an important but complicated role in animal learning theory. Some studies have found that contextual stimuli (e.g., conditioning chamber) act much like punctate stimuli, entering into competition with other cues as would be predicted by standard associative learning theories. Other studies have found that contextual stimuli act more like "occasion setters," modulating the associative strength of punctate stimuli without themselves acquiring associative strength. Yet other studies have found that context is often largely ignored, resulting in transfer of performance across context changes...
April 2017: Psychonomic Bulletin & Review
https://www.readbyqxmd.com/read/26930189/on-the-blessing-of-abstraction
#13
Samuel J Gershman
The "blessing of abstraction" refers to the observation that acquiring abstract knowledge sometimes proceeds more quickly than acquiring more specific knowledge. This observation can be formalized and reproduced by hierarchical Bayesian models. The key notion is that more abstract layers of the hierarchy have a larger "effective" sample size, because they combine information across multiple specific instances lower in the hierarchy. This notion relies on specific variables being relatively concentrated around the abstract "overhypothesis"...
March 2017: Quarterly Journal of Experimental Psychology
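
The effective-sample-size argument in #13 fits in a few lines. A minimal sketch (an assumed toy version of the hierarchical-Bayes point, with invented numbers):

    import numpy as np

    rng = np.random.default_rng(0)
    true_overhyp = 0.7                  # abstract tendency shared by contexts
    # 10 specific contexts, only 3 binary observations in each:
    data = rng.binomial(1, true_overhyp, size=(10, 3))

    per_context = data.mean(axis=1)     # each specific estimate pools 3 samples
    abstract = data.mean()              # the abstract level pools all 30
    print(per_context.round(2))         # noisy specific estimates
    print(round(abstract, 2))           # much tighter abstract estimate

The abstract estimate converges faster because it combines information across every specific instance; as the abstract notes, this depends on the specific variables being concentrated around the overhypothesis.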
https://www.readbyqxmd.com/read/26874471/toward-the-neural-implementation-of-structure-learning
#14
REVIEW
D Gowanlock R Tervo, Joshua B Tenenbaum, Samuel J Gershman
Despite significant advances in neuroscience, the neural bases of intelligence remain poorly understood. Arguably the most elusive aspect of intelligence is the ability to make robust inferences that go far beyond one's experience. Animals categorize objects, learn to vocalize and may even estimate causal relationships - all in the face of data that is often ambiguous and sparse. Such inductive leaps are thought to result from the brain's ability to infer latent structure that governs the environment. However, we know little about the neural computations that underlie this ability...
April 2016: Current Opinion in Neurobiology
https://www.readbyqxmd.com/read/26535896/a-unifying-probabilistic-view-of-associative-learning
#15
Samuel J Gershman
Two important ideas about associative learning have emerged in recent decades: (1) Animals are Bayesian learners, tracking their uncertainty about associations; and (2) animals acquire long-term reward predictions through reinforcement learning. Both of these ideas are normative, in the sense that they are derived from rational design principles. They are also descriptive, capturing a wide range of empirical phenomena that troubled earlier theories. This article describes a unifying framework encompassing Bayesian and reinforcement learning theories of associative learning...
November 2015: PLoS Computational Biology
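
One half of the unifying framework in #15 is Bayesian tracking of associations. Below is a minimal Kalman-filter associative learner (a standard simplification; the paper's framework combines this uncertainty tracking with long-term reward prediction in the reinforcement-learning style):

    import numpy as np

    def kalman_update(w, P, x, r, noise_var=1.0, drift_var=0.01):
        # w: mean associative weights; P: covariance (the uncertainty);
        # x: stimulus features on this trial; r: observed reward.
        P = P + drift_var * np.eye(len(w))    # weights may drift over time
        k = P @ x / (x @ P @ x + noise_var)   # gain: uncertainty-scaled LR
        w = w + k * (r - w @ x)               # prediction-error update
        P = P - np.outer(k, x @ P)            # uncertainty shrinks with data
        return w, P

    w, P = np.zeros(2), np.eye(2)
    for _ in range(20):                        # tone+light compound, reward 1
        w, P = kalman_update(w, P, np.array([1.0, 1.0]), 1.0)
    print(w.round(2))                          # credit shared across both cues

Unlike a fixed learning rate, the gain falls as uncertainty shrinks, one route to the empirical phenomena that troubled earlier theories.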
https://www.readbyqxmd.com/read/26353245/distance-dependent-infinite-latent-feature-models
#16
Samuel J Gershman, Peter I Frazier, David M Blei
Latent feature models are widely used to decompose data into a small number of components. Bayesian nonparametric variants of these models, which use the Indian buffet process (IBP) as a prior over latent features, allow the number of features to be determined from the data. We present a generalization of the IBP, the distance dependent Indian buffet process (dd-IBP), for modeling non-exchangeable data. It relies on distances defined between data points, biasing nearby data to share more features. The choice of distance measure allows for many kinds of dependencies, including temporal and spatial...
February 2015: IEEE Transactions on Pattern Analysis and Machine Intelligence
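
For readers unfamiliar with the prior generalized in #16, here is a minimal draw from the standard Indian buffet process (the dd-IBP itself, which additionally biases nearby data points to share features, is not shown):

    import numpy as np

    def ibp_sample(n_customers, alpha, seed=0):
        # Customer i takes an existing dish k with probability m_k / i
        # (m_k = earlier takers), then tries Poisson(alpha / i) new dishes.
        rng = np.random.default_rng(seed)
        dishes = []                            # one boolean column per feature
        for i in range(1, n_customers + 1):
            for d in dishes:
                d.append(rng.random() < sum(d) / i)
            for _ in range(rng.poisson(alpha / i)):
                dishes.append([False] * (i - 1) + [True])
        return np.array(dishes, dtype=int).T   # rows: data, cols: features

    print(ibp_sample(5, alpha=2.0))            # a binary feature matrix

The number of feature columns is not fixed in advance but grows with the data, which is the Bayesian nonparametric property the abstract describes.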
https://www.readbyqxmd.com/read/26185246/computational-rationality-a-converging-paradigm-for-intelligence-in-brains-minds-and-machines
#17
REVIEW
Samuel J Gershman, Eric J Horvitz, Joshua B Tenenbaum
After growing up together, and mostly growing apart in the second half of the 20th century, the fields of artificial intelligence (AI), cognitive science, and neuroscience are reconverging on a shared view of the computational foundations of intelligence that promotes valuable cross-disciplinary exchanges on questions, methods, and results. We chart advances over the past several decades that address challenges of perception and action under uncertainty through the lens of computation. Advances include the development of representations and inferential procedures for large-scale probabilistic inference and machinery for enabling reflection and decisions about tradeoffs in effort, precision, and timeliness of computations...
July 17, 2015: Science
https://www.readbyqxmd.com/read/26100524/individual-differences-in-learning-predict-the-return-of-fear
#18
Samuel J Gershman, Catherine A Hartley
Using a laboratory analogue of learned fear (Pavlovian fear conditioning), we show that there is substantial heterogeneity across individuals in spontaneous recovery of fear following extinction training. We propose that this heterogeneity might stem from qualitative individual differences in the nature of extinction learning. Whereas some individuals tend to form a new memory during extinction, leaving their fear memory intact, others update the original threat association with new safety information, effectively unlearning the fear memory...
September 2015: Learning & Behavior
https://www.readbyqxmd.com/read/26019331/reinforcement-learning-in-multidimensional-environments-relies-on-attention-mechanisms
#19
RANDOMIZED CONTROLLED TRIAL
Yael Niv, Reka Daniel, Andra Geana, Samuel J Gershman, Yuan Chang Leong, Angela Radulescu, Robert C Wilson
In recent years, ideas from the computational field of reinforcement learning have revolutionized the study of learning in the brain, famously providing new, precise theories of how dopamine affects learning in the basal ganglia. However, reinforcement learning algorithms are notorious for not scaling well to multidimensional environments, as is required for real-world learning. We hypothesized that the brain naturally reduces the dimensionality of real-world problems to only those dimensions that are relevant to predicting reward, and conducted an experiment to assess by what algorithms and with what neural mechanisms this "representation learning" process is realized in humans...
May 27, 2015: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
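
A minimal sketch of the kind of attention-weighted feature learning compared in #19 (an illustrative simplification, not the specific models tested):

    import numpy as np

    def feature_rl_step(V, phi, attention, reward, lr=0.3):
        # V: learned value per feature; phi: 0/1 features of the chosen
        # stimulus; attention: per-dimension weights that focus learning.
        rpe = reward - V @ (phi * attention)   # attention-weighted prediction
        V += lr * rpe * phi * attention        # update mainly attended features
        return V, rpe

    V = np.zeros(6)                            # e.g., 3 colors and 3 shapes
    phi = np.array([1, 0, 0, 0, 1, 0])         # chosen: color 1 + shape 2
    attn = np.array([1, 1, 1, .2, .2, .2])     # color dimension attended more
    V, rpe = feature_rl_step(V, phi, attn, reward=1.0)
    print(V.round(2))                          # color feature learns faster

Attention here implements the dimensionality reduction the abstract hypothesizes: learning concentrates on the dimensions relevant to predicting reward.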
https://www.readbyqxmd.com/read/25818905/discovering-hierarchical-motion-structure
#20
Samuel J Gershman, Joshua B Tenenbaum, Frank Jäkel
Scenes filled with moving objects are often hierarchically organized: the motion of a migrating goose is nested within the flight pattern of its flock, the motion of a car is nested within the traffic pattern of other cars on the road, and the motion of body parts is nested in the motion of the body. Humans perceive hierarchical structure even in stimuli with two or three moving dots. An influential theory of hierarchical motion perception holds that the visual system performs a "vector analysis" of moving objects, decomposing them into common and relative motions...
September 2016: Vision Research
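
The "vector analysis" described in #20 has a one-line core: subtract the shared motion to expose the nested, relative motion. A minimal sketch with invented velocities:

    import numpy as np

    velocities = np.array([[2.0,  0.1],    # per-object 2D velocities,
                           [2.1, -0.1],    # e.g., three dots on a screen
                           [1.9,  1.0]])
    common = velocities.mean(axis=0)       # shared (group) motion component
    relative = velocities - common         # object-specific residual motion
    print(common)
    print(relative)

In hierarchical scenes the decomposition applies recursively: remove the flock's motion to reveal the goose's, the body's to reveal the limb's.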

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Operators can be combined:

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
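
Purely as an illustration of what these operators mean (this is a sketch, not QxMD's search engine), here is how they could map onto a simple matcher over one document's text:

    import re

    def matches(doc, required=(), any_of=(), excluded=(), phrases=(), stems=()):
        text = doc.lower()
        words = set(re.findall(r"\w+", text))
        return (all(w in words for w in required)                    # AND
                and (not any_of or any(w in words for w in any_of))  # OR
                and not any(w in words for w in excluded)            # -word
                and all(p.lower() in text for p in phrases)          # "phrase"
                and all(any(w.startswith(s) for w in words)          # stem*
                        for s in stems))

    doc = "Primary prevention of cancer in diabetic patients with foot ulcers."
    print(matches(doc, required=("diabetic", "foot")))              # True (AND)
    print(matches(doc, phrases=("primary prevention of cancer",)))  # True
    print(matches(doc, stems=("neuro",)))                           # False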