Read by QxMD

Listening and spoken language

Seth Wiener, Kiwako Ito, Shari R Speer
This study examines the perceptual trade-off between knowledge of a language's statistical regularities and reliance on the acoustic signal during L2 spoken word recognition. We test how early learners track and make use of segmental and suprasegmental cues and their relative frequencies during non-native word recognition. English learners of Mandarin were taught an artificial tonal language in which a tone's informativeness for word identification varied according to neighborhood density. The stimuli mimicked Mandarin's uneven distribution of syllable+tone combinations by varying syllable frequency and the probability of particular tones co-occurring with a particular syllable...
March 1, 2018: Language and Speech
Ross K Maddox, Adrian K C Lee
Speech is an ecologically essential signal, whose processing crucially involves the subcortical nuclei of the auditory brainstem, but there are few experimental options for studying these early responses in human listeners under natural conditions. While encoding of continuous natural speech has been successfully probed in the cortex with neurophysiological tools such as electroencephalography (EEG) and magnetoencephalography, the rapidity of subcortical response components combined with unfavorable signal-to-noise ratios has prevented application of those methods to the brainstem...
January 2018: ENeuro
Marcus Perlman, Gary Lupyan
The innovation of iconic gestures is essential to establishing the vocabularies of signed languages, but might iconicity also play a role in the origin of spoken words? Can people create novel vocalizations that are comprehensible to naïve listeners without prior convention? We launched a contest in which participants submitted non-linguistic vocalizations for 30 meanings spanning actions, humans, animals, inanimate objects, properties, quantifiers and demonstratives. The winner was determined by the ability of naïve listeners to infer the meanings of the vocalizations...
February 8, 2018: Scientific Reports
Linda Drijvers, Asli Özyürek
Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture)...
January 29, 2018: Brain and Language
Shukhan Ng, Brennan R Payne, Elizabeth A L Stine-Morrow, Kara D Federmeier
We investigated how struggling adult readers make use of sentence context to facilitate word processing when comprehending spoken language, conditions under which print decoding is not a barrier to comprehension. Stimuli were strongly and weakly constraining sentences (as measured by cloze probability), which ended with the most expected word based on those constraints or an unexpected but plausible word. Community-dwelling adults with varying literacy skills listened to continuous speech while their EEG was recorded...
February 2, 2018: International Journal of Psychophysiology
Sattar Khoshkhoo, Matthew K Leonard, Nima Mesgarani, Edward F Chang
Auditory speech comprehension is the result of neural computations that occur in a broad network that includes the temporal lobe auditory cortex and the left inferior frontal cortex. It remains unclear how representations in this network differentially contribute to speech comprehension. Here, we recorded high-density direct cortical activity during a sine-wave speech (SWS) listening task to examine detailed neural speech representations when the exact same acoustic input is comprehended versus not comprehended...
January 31, 2018: Brain and Language
Qingqing Qu, Zhanling Cui, Markus F Damian
Evidence from both alphabetic and nonalphabetic languages suggests a role for orthography in the processing of spoken words in individuals' native language (L1). There is less evidence for such effects in nonnative (L2) spoken-word processing. Whereas in L1 orthographic representations are learned only after phonological representations have long been established, in L2 the sound and spelling of words are often learned in conjunction; this might predict stronger orthographic effects in L2 than in L1 spoken processing...
December 28, 2017: Journal of Experimental Psychology. Learning, Memory, and Cognition
Jennifer Harte, Pauline Frizelle, Fiona Gibbon
There is substantial evidence that a speaker's accent, specifically an unfamiliar accent, can affect the listener's comprehension. In general, this effect holds true for both adults and children as well as those with typical and impaired language. Previous studies have investigated the effect of different accents on individuals with language disorders, but children with speech sound disorders (SSDs) have received little attention. The current study aims to learn more about the ability of children with SSD to process different speaker accents...
December 26, 2017: Clinical Linguistics & Phonetics
Tamala S Bradham, Christopher Fonnesbeck, Alice Toll, Barbara F Hecht
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need, highlighted in Goal 3b of the 2007 Joint Committee on Infant Hearing position statement supplement, for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss. Method: The LSL-DR is a multicenter, international data repository for recording and tracking the demographics and longitudinal outcomes achieved by children with hearing loss who are enrolled in private, specialized programs focused on supporting listening and spoken language development...
January 9, 2018: Language, Speech, and Hearing Services in Schools
Ana Filipa Teixeira Borges, Anne-Lise Giraud, Huibert D Mansvelder, Klaus Linkenkaer-Hansen
Speech comprehension is preserved up to a threefold acceleration, but deteriorates rapidly at higher speeds. Current models posit that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech into syllabic units using δ/θ oscillations. Here, we investigated whether the involvement of neuronal oscillations in processing accelerated speech also relates to their scale-free amplitude modulation as indexed by the strength of long-range temporal correlations (LRTC). We recorded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensible rates, at a mostly unintelligible rate and at this same speed interleaved with silence gaps...
January 17, 2018: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
Charlotte R Vaughn, Ann R Bradlow
While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Specifically, we examined the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), and talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3)...
December 2017: Language and Speech
Nazik Dinçtopal Deniz, Janet Dean Fodor
It is known from previous studies that in many cases (though not all) the prosodic properties of a spoken utterance reflect aspects of its syntactic structure, and also that in many cases (though not all) listeners can benefit from these prosodic cues. A novel contribution to this literature is the Rational Speaker Hypothesis (RSH), proposed by Clifton, Carlson and Frazier. The RSH maintains that listeners are sensitive to possible reasons for why a speaker might introduce a prosodic break: "listeners treat a prosodic boundary as more informative about the syntax when it flanks short constituents than when it flanks longer constituents," because in the latter case the speaker might have been motivated solely by consideration of optimal phrase lengths...
December 2017: Language and Speech
Mònica Sanz-Torrent, Llorenç Andreu, Javier Rodriguez Ferreiro, Marta Coll-Florit, John C Trueswell
Word recognition includes the activation of a range of syntactic and semantic knowledge that is relevant to language interpretation and reference. Here we explored whether the number of arguments a verb takes adversely affects verb processing time. In this study, three experiments compared the dynamics of spoken word recognition for verbs with different preferred argument structures. Listeners' eye movements were recorded as they searched an array of pictures in response to hearing a verb. Results were similar in all the experiments...
2017: PloS One
Marie Ritter, Disa A Sauter
Group membership is important for how we perceive others, but although perceivers can accurately infer group membership from facial expressions and spoken language, it is not clear whether listeners can identify in- and out-group members from non-verbal vocalizations. In the current study, we examined perceivers' ability to identify group membership from non-verbal vocalizations of laughter, testing the following predictions: (1) listeners can distinguish between laughter from different nationalities and (2) between laughter from their in-group, a close out-group, and a distant out-group, and (3) greater exposure to laughter from members of other cultural groups is associated with better performance...
2017: Frontiers in Psychology
Monica Wagner, Jungmee Lee, Francesca Mingino, Colleen O'Brien, Adam Constantine, Valerie L Shafer, Mitchell Steinschneider
Auditory evoked potentials (AEP) reflect spectro-temporal feature changes within the spoken word and are sufficiently reliable to probe deficits in auditory processing. The current research assessed whether attentional modulation would alter the morphology of these AEPs and whether native-language experience with phoneme sequences would influence the effects of attention. Native-English and native-Polish adults listened to nonsense word pairs that contained the phoneme sequence onsets /st/, /sət/, /pət/ that occur in both the Polish and English languages and the phoneme sequence onset /pt/ that occurs in the Polish language, but not the English language...
2017: Frontiers in Neuroscience
Xin Xie, Emily Myers
The speech signal is rife with variations in phonetic ambiguity. For instance, when talkers speak in a conversational register, they demonstrate less articulatory precision, leading to greater potential for confusability at the phonetic level compared with a clear speech register. Current psycholinguistic models assume that ambiguous speech sounds activate more than one phonological category and that competition at prelexical levels cascades to lexical levels of processing. Imaging studies have shown that the left inferior frontal gyrus (LIFG) is modulated by phonetic competition between simultaneously activated categories, with increases in activation for more ambiguous tokens...
November 21, 2017: Journal of Cognitive Neuroscience
Raheleh Saryazdi, Craig G Chambers
Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images...
November 15, 2017: Acta Psychologica
Sarah J Owens, Justine M Thacker, Susan A Graham
Speech disfluencies can guide the ways in which listeners interpret spoken language. Here, we examined whether three-year-olds, five-year-olds, and adults use filled pauses to anticipate that a speaker is likely to refer to a novel object. Across three experiments, participants were presented with pairs of novel and familiar objects and heard a speaker refer to one of the objects using a fluent ("Look at the ball/lep!") or disfluent ("Look at thee uh ball/lep!") expression. The salience of the speaker's unfamiliarity with the novel referents, and the way in which the speaker referred to the novel referents (i...
November 16, 2017: Journal of Child Language
Yuanyuan Wang, Tonya R Bergeson, Derek M Houston
Purpose: Both theoretical models of infant language acquisition and empirical studies posit important roles for attention to speech in early language development. However, deaf infants with cochlear implants (CIs) show reduced attention to speech as compared with their peers with normal hearing (NH; Horn, Davis, Pisoni, & Miyamoto, 2005; Houston, Pisoni, Kirk, Ying, & Miyamoto, 2003), which may affect their acquisition of spoken language. The main purpose of this study was to determine (a) whether infant-directed speech (IDS) enhances attention to speech in infants with CIs, as compared with adult-directed speech (ADS), and (b) whether the degree to which infants with CIs pay attention to IDS is associated with later language outcomes...
November 9, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
Kathryn Kreidler, Amanda Hampton Wray, Evan Usler, Christine Weber
Purpose: Maturation of neural processes for language may lag in some children who stutter (CWS), and event-related potentials (ERPs) distinguish CWS who have recovered from those who have persisted. The current study explores whether ERPs indexing semantic processing may distinguish children who will eventually persist in stuttering (CWS-ePersisted) from those who will recover from stuttering (CWS-eRecovered). Method: Fifty-six 5-year-old children with normal receptive language listened to naturally spoken sentences in a story context...
November 9, 2017: Journal of Speech, Language, and Hearing Research: JSLHR

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators for more precise searches

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
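If you prefer to script these searches rather than type them by hand, the minimal Python sketch below shows one way to assemble a query from the operators described above and URL-encode it. The search URL shown is a hypothetical placeholder, not a documented QxMD endpoint; only the operator syntax is taken from these tips.

    from urllib.parse import quote_plus

    # Build a query using the operators described above:
    # AND/OR for Boolean logic, '-' to exclude a term, parentheses for
    # grouping, '*' for word stems, and double quotes for exact phrases.
    query = '(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"'

    # URL-encode the query so it can be appended to a search URL.
    # NOTE: the base URL below is purely illustrative.
    search_url = "https://example.org/search?q=" + quote_plus(query)
    print(search_url)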