Language, Cognition and Neuroscience

Jie Zhuang, Barry J Devereux
As spoken language unfolds over time, the speech input transiently activates multiple candidates at different levels of the system - phonological, lexical, and syntactic - which in turn leads to short-lived between-candidate competition. In an fMRI study, we investigated how different kinds of linguistic competition may be modulated by the presence or absence of a prior context (Tyler 1984; Tyler et al. 2008). We found significant effects of lexico-phonological competition for isolated words, but not for words in short phrases, with high competition yielding greater activation in left inferior frontal gyrus (LIFG) and posterior temporal regions...
February 7, 2017: Language, Cognition and Neuroscience
Dominik Freunberger, Dietmar Roehm
Do people predict specific word-forms during language comprehension? In an Event-Related Potential (ERP) study participants read German sentences with predictable (The goalkeeper claims that the slick ball was easy to CATCH.) and unpredictable (The kids boasted that the young horse was easy to SADDLE.) verbs. Verbs were either consistent with the expected word-form (catch/saddle) or inconsistent and therefore led to ungrammaticality (*catches/*saddles). ERPs within the N400 time-window were modulated by predictability but not by the surface-form of the verbs, suggesting that no exact word-forms were predicted...
October 20, 2016: Language, Cognition and Neuroscience
Kevin Schluter, Stephen Politzer-Ahles, Diogo Almeida
The representational format of speech units in long-term memory is a topic of debate. We present novel event-related brain potential evidence from the Mismatch Negativity (MMN) paradigm that is compatible with abstract, non-redundant feature-based models like the Featurally Underspecified Lexicon (FUL). First, we show that the fricatives /s/ and /f/ display an asymmetric pattern of MMN responses, which is predicted if /f/ has a fully specified place of articulation ([Labial]) but /s/ does not ([Coronal], which is lexically underspecified)...
July 2, 2016: Language, Cognition and Neuroscience
Ingrid Masson-Carro, Martijn Goudbeek, Emiel Krahmer
Hand gestures are tightly coupled with speech and with action. Hence, recent accounts have emphasised the idea that simulations of spatio-motoric imagery underlie the production of co-speech gestures. In this study, we suggest that action simulations directly influence the iconic strategies used by speakers to translate aspects of their mental representations into gesture. Using a classic referential paradigm, we investigate how speakers respond gesturally to the affordances of objects, by comparing the effects of describing objects that afford action performance (such as tools) and those that do not, on gesture production...
March 15, 2016: Language, Cognition and Neuroscience
Dennis Norris, James M McQueen, Anne Cutler
Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds...
January 2, 2016: Language, Cognition and Neuroscience
Nazbanou Nozari, Michael Freund, Bonnie Breining, Brenda Rapp, Barry Gordon
Production of an intended word entails selection processes, in which first the lexical item and then its segments are selected among competitors, as well as processes that covertly or overtly repair dispreferred words. In two experiments, we studied the locus of the control processes involved in selection (selection control) and in intercepting errors (post-monitoring control). Selection control was studied by manipulating the overlap (contextual similarity) in either semantics or segments between two objects that participants repeatedly named...
2016: Language, Cognition and Neuroscience
Bob McMurray
No abstract text is available yet for this article.
2016: Language, Cognition and Neuroscience
David W Gow, Bruna B Olson
No abstract text is available yet for this article.
2016: Language, Cognition and Neuroscience
David W Gow, Bruna B Olson
Sentential context influences the way that listeners identify phonetically ambiguous or perceptually degraded speech sounds. Unfortunately, inherent inferential limitations on the interpretation of behavioral or BOLD imaging results make it unclear whether context influences perceptual processing directly, or acts at a post-perceptual decision stage. In this paper, we use Kalman-filter enabled Granger causation analysis of MR-constrained MEG/EEG data to distinguish between these possibilities. Using a retrospective probe verification task, we found that sentential context strongly affected the interpretation of words with ambiguous initial voicing (e...
2016: Language, Cognition and Neuroscience
Gina R Kuperberg
Since the early 2000s, several ERP studies have challenged the assumption that we always use syntactic contextual information to influence semantic processing of incoming words, as reflected by the N400 component. One approach for explaining these findings is to posit distinct semantic and syntactic processing mechanisms, each with distinct time courses. While this approach can explain specific datasets, it cannot account for the wider body of findings. I propose an alternative explanation: a dynamic generative framework in which our goal is to infer the underlying event that best explains the set of inputs encountered at any given time...
2016: Language, Cognition and Neuroscience
Alexa Bautista, Stephen M Wilson
Linguistic stimuli that are degraded in various ways have been used in neuroimaging studies to uncover distinct roles for different brain regions involved in processing language. In order to identify brain regions differentially involved in grammatical and lexical processing, we spectrally rotated specific morphemes and manipulated morpheme order to create speech stimuli that were degraded either grammatically or lexically, yet were matched in intelligibility. Twelve healthy participants were scanned with functional MRI as they listened to the grammatically and lexically degraded stimuli, interspersed with clear stimuli in the context of a familiar narrative...
2016: Language, Cognition and Neuroscience
Melinda Fricke, Melissa M Baese-Berk, Matthew Goldrick
During language production planning, multiple candidate representations are implicitly activated prior to articulation. Lexical representations that are phonologically related to the target (phonological neighbors) are known to influence phonetic properties of the target word. However, the question of which dimensions of phonological similarity contribute to such lexical-phonetic effects remains unanswered. In the present study, we reanalyze phonetic data from a previous study, examining the contrasting predictions of different definitions of phonological similarity...
2016: Language, Cognition and Neuroscience
Matthew A Johnson, Nicholas B Turk-Browne, Adele E Goldberg
In language, abstract phrasal patterns provide an important source of meaning, but little is known about whether or how such constructions are used to predict upcoming visual scenes. Findings from two fMRI studies indicate that initial exposure to a novel construction allows its semantics to be used for such predictions. Specifically, greater activity in the ventral striatum, a region sensitive to prediction errors, was linked to worse overall comprehension of a novel construction. Moreover, activity in occipital cortex was attenuated when a visual event could be inferred from a learned construction, which may reflect predictive coding of the event...
2016: Language, Cognition and Neuroscience
Esteban Buz, T Florian Jaeger
The number of phonological neighbors to a word (PND) can affect its lexical planning and pronunciation. Similar parallel effects on planning and articulation have been observed for other lexical variables, such as a word's contextual predictability. Such parallelism is frequently taken to indicate that effects on articulation are mediated by effects on the time course of lexical planning. We test this mediation assumption for PND and find it unsupported. In a picture naming experiment, we measure speech onset latencies (planning), word durations, and vowel dispersion (articulation)...
2016: Language, Cognition and Neuroscience
Ariane E Rhone, Kirill V Nourski, Hiroyuki Oya, Hiroto Kawasaki, Matthew A Howard, Bob McMurray
In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion)...
2016: Language, Cognition and Neuroscience
Evelyn Milburn, Tessa Warren, Michael Walsh Dickey
There has been considerable debate regarding whether linguistic knowledge and world knowledge are separable and used differently during processing (Hagoort, Hald, Bastiaansen, & Petersson, 2004; Matsuki et al., 2011; Paczynski & Kuperberg, 2012; Warren & McConnell, 2007; Warren, McConnell, & Rayner, 2008). Previous investigations into this question have provided mixed evidence as to whether violations of selectional restrictions are detected earlier than violations of world knowledge...
2016: Language, Cognition and Neuroscience
Diogo Almeida, David Poeppel, David Corina
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate...
2016: Language, Cognition and Neuroscience
Gina R Kuperberg, T Florian Jaeger
We consider several key aspects of prediction in language comprehension: its computational nature, the representational level(s) at which we predict, whether we use higher level representations to predictively pre-activate lower level representations, and whether we 'commit' in any way to our predictions, beyond pre-activation. We argue that the bulk of behavioral and neural evidence suggests that we predict probabilistically and at multiple levels and grains of representation. We also argue that we can, in principle, use higher level inferences to predictively pre-activate information at multiple lower representational levels...
2016: Language, Cognition and Neuroscience
Lori B Astheimer, Matthias Berkes, Ellen Bialystok
Attention is required during speech perception to focus processing resources on critical information. Previous research has shown that bilingualism modifies attentional processing in nonverbal domains. The current study used event-related potentials (ERPs) to determine whether bilingualism also modifies auditory attention during speech perception. We measured attention to word onsets in spoken English for monolinguals and Chinese-English bilinguals. Auditory probes were inserted at four times in a continuous narrative: concurrent with word onset, 100 ms before or after onset, and at random control times...
2016: Language, Cognition and Neuroscience
Matthew W Lowder, Fernanda Ferreira
Imagine a speaker who says "Turn left, uh I mean…" Before hearing the repair, the listener is likely to anticipate the word "right" based on the context, including the reparandum "left." Thus, even though the reparandum is not intended as part of the utterance, the listener uses it as information to predict the repair. The issue we explore in this article is how prediction operates in disfluency contexts. We begin by describing the Overlay model of disfluency comprehension, which assumes that the listener identifies a reparandum as such only after a repair is encountered, which creates a local ungrammaticality...
January 2016: Language, Cognition and Neuroscience