Listening and spoken language

https://www.readbyqxmd.com/read/29283604/orthographic-effects-in-second-language-spoken-word-recognition
#1
Qingqing Qu, Zhanling Cui, Markus F Damian
Evidence from both alphabetic and nonalphabetic languages has suggested a role for orthography in the processing of spoken words in individuals' native language (L1). Less evidence exists for such effects in nonnative (L2) spoken-word processing. Whereas in L1 orthographic representations are learned only after phonological representations have long been established, in L2 the sound and spelling of words are often learned in conjunction; this might predict stronger orthographic effects in L2 than in L1 spoken processing...
December 28, 2017: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/29278950/the-effect-of-different-speaker-accents-on-sentence-comprehension-in-children-with-speech-sound-disorder
#2
Jennifer Harte, Pauline Frizelle, Fiona Gibbon
There is substantial evidence that a speaker's accent, specifically an unfamiliar accent, can affect the listener's comprehension. In general, this effect holds true for both adults and children as well as those with typical and impaired language. Previous studies have investigated the effect of different accents on individuals with language disorders, but children with speech sound disorders (SSDs) have received little attention. The current study aims to learn more about the ability of children with SSD to process different speaker accents...
December 26, 2017: Clinical Linguistics & Phonetics
https://www.readbyqxmd.com/read/29222559/the-listening-and-spoken-language-data-repository-design-and-project-overview
#3
Tamala S Bradham, Christopher Fonnesbeck, Alice Toll, Barbara F Hecht
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need, highlighted in Goal 3b of the 2007 Joint Committee on Infant Hearing position statement supplement, for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss. Method: The LSL-DR is a multicenter, international data repository for recording and tracking the demographics and longitudinal outcomes achieved by children who have hearing loss who are enrolled in private, specialized programs focused on supporting listening and spoken language development...
December 8, 2017: Language, Speech, and Hearing Services in Schools
https://www.readbyqxmd.com/read/29217685/scale-free-amplitude-modulation-of-neuronal-oscillations-tracks-comprehension-of-accelerated-speech
#4
Ana Filipa Teixeira Borges, Anne-Lise Giraud, Huibert D Mansvelder, Klaus Linkenkaer-Hansen
Speech comprehension is preserved up to a three-fold acceleration but rapidly deteriorates at higher speeds. Current models posit that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech into syllabic units using delta/theta oscillations. Here, we ask whether the involvement of neuronal oscillations in processing accelerated speech also relates to their scale-free amplitude modulation as indexed by the strength of long-range temporal correlations (LRTC). We recorded magnetoencephalography (MEG) while 24 human subjects (12 females) listened to radio news uttered at different comprehensible rates, at a mostly unintelligible rate, and at this same speed interleaved with silence gaps...
December 6, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/29216813/processing-relationships-between-language-being-spoken-and-other-speech-dimensions-in-monolingual-and-bilingual-listeners
#5
Charlotte R Vaughn, Ann R Bradlow
While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Namely, we compared the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), with: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3)...
December 2017: Language and Speech
https://www.readbyqxmd.com/read/29216811/phrase-lengths-and-the-perceived-informativeness-of-prosodic-cues-in-turkish
#6
Nazik Dinçtopal Deniz, Janet Dean Fodor
It is known from previous studies that in many cases (though not all) the prosodic properties of a spoken utterance reflect aspects of its syntactic structure, and also that in many cases (though not all) listeners can benefit from these prosodic cues. A novel contribution to this literature is the Rational Speaker Hypothesis (RSH), proposed by Clifton, Carlson and Frazier. The RSH maintains that listeners are sensitive to possible reasons for why a speaker might introduce a prosodic break: "listeners treat a prosodic boundary as more informative about the syntax when it flanks short constituents than when it flanks longer constituents," because in the latter case the speaker might have been motivated solely by consideration of optimal phrase lengths...
December 2017: Language and Speech
https://www.readbyqxmd.com/read/29206841/auditory-word-recognition-of-verbs-effects-of-verb-argument-structure-on-referent-identification
#7
Mònica Sanz-Torrent, Llorenç Andreu, Javier Rodriguez Ferreiro, Marta Coll-Florit, John C Trueswell
Word recognition includes the activation of a range of syntactic and semantic knowledge that is relevant to language interpretation and reference. Here we explored whether the number of arguments a verb takes adversely affects verb processing time. In this study, three experiments compared the dynamics of spoken word recognition for verbs with different preferred argument structures. Listeners' eye movements were recorded as they searched an array of pictures in response to hearing a verb. Results were similar in all the experiments...
2017: PloS One
https://www.readbyqxmd.com/read/29201012/telling-friend-from-foe-listeners-are-unable-to-identify-in-group-and-out-group-members-from-heard-laughter
#8
Marie Ritter, Disa A Sauter
Group membership is important for how we perceive others, but although perceivers can accurately infer group membership from facial expressions and spoken language, it is not clear whether listeners can identify in- and out-group members from non-verbal vocalizations. In the current study, we examined perceivers' ability to identify group membership from non-verbal vocalizations of laughter, testing the following predictions: (1) listeners can distinguish between laughter from different nationalities and (2) between laughter from their in-group, a close out-group, and a distant out-group, and (3) greater exposure to laughter from members of other cultural groups is associated with better performance...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/29162999/language-experience-with-a-native-language-phoneme-sequence-modulates-the-effects-of-attention-on-cortical-sensory-processing
#9
Monica Wagner, Jungmee Lee, Francesca Mingino, Colleen O'Brien, Adam Constantine, Valerie L Shafer, Mitchell Steinschneider
Auditory evoked potentials (AEP) reflect spectro-temporal feature changes within the spoken word and are sufficiently reliable to probe deficits in auditory processing. The current research assessed whether attentional modulation would alter the morphology of these AEPs and whether native-language experience with phoneme sequences would influence the effects of attention. Native-English and native-Polish adults listened to nonsense word pairs that contained the phoneme sequence onsets /st/, /sət/, /pət/ that occur in both the Polish and English languages and the phoneme sequence onset /pt/ that occurs in the Polish language, but not the English language...
2017: Frontiers in Neuroscience
https://www.readbyqxmd.com/read/29160743/left-inferior-frontal-gyrus-sensitivity-to-phonetic-competition-in-receptive-language-processing-a-comparison-of-clear-and-conversational-speech
#10
Xin Xie, Emily Myers
The speech signal is rife with variations in phonetic ambiguity. For instance, when talkers speak in a conversational register, they demonstrate less articulatory precision, leading to greater potential for confusability at the phonetic level compared with a clear speech register. Current psycholinguistic models assume that ambiguous speech sounds activate more than one phonological category and that competition at prelexical levels cascades to lexical levels of processing. Imaging studies have shown that the left inferior frontal gyrus (LIFG) is modulated by phonetic competition between simultaneously activated categories, with increases in activation for more ambiguous tokens...
November 21, 2017: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/29154035/mapping-language-to-visual-referents-does-the-degree-of-image-realism-matter
#11
Raheleh Saryazdi, Craig G Chambers
Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images...
November 15, 2017: Acta Psychologica
https://www.readbyqxmd.com/read/29141698/disfluencies-signal-reference-to-novel-objects-for-adults-but-not-children
#12
Sarah J Owens, Justine M Thacker, Susan A Graham
Speech disfluencies can guide the ways in which listeners interpret spoken language. Here, we examined whether three-year-olds, five-year-olds, and adults use filled pauses to anticipate that a speaker is likely to refer to a novel object. Across three experiments, participants were presented with pairs of novel and familiar objects and heard a speaker refer to one of the objects using a fluent ("Look at the ball/lep!") or disfluent ("Look at thee uh ball/lep!") expression. The salience of the speaker's unfamiliarity with the novel referents, and the way in which the speaker referred to the novel referents (i...
November 16, 2017: Journal of Child Language
https://www.readbyqxmd.com/read/29114770/infant-directed-speech-enhances-attention-to-speech-in-deaf-infants-with-cochlear-implants
#13
Yuanyuan Wang, Tonya R Bergeson, Derek M Houston
Purpose: Both theoretical models of infant language acquisition and empirical studies posit important roles for attention to speech in early language development. However, deaf infants with cochlear implants (CIs) show reduced attention to speech as compared with their peers with normal hearing (NH; Horn, Davis, Pisoni, & Miyamoto, 2005; Houston, Pisoni, Kirk, Ying, & Miyamoto, 2003), which may affect their acquisition of spoken language. The main purpose of this study was to determine (a) whether infant-directed speech (IDS) enhances attention to speech in infants with CIs, as compared with adult-directed speech (ADS), and (b) whether the degree to which infants with CIs pay attention to IDS is associated with later language outcomes...
November 9, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29098269/neural-indices-of-semantic-processing-in-early-childhood-distinguish-eventual-stuttering-persistence-and-recovery
#14
Kathryn Kreidler, Amanda Hampton Wray, Evan Usler, Christine Weber
Purpose: Maturation of neural processes for language may lag in some children who stutter (CWS), and event-related potentials (ERPs) distinguish CWS who have recovered from those who have persisted. The current study explores whether ERPs indexing semantic processing may distinguish children who will eventually persist in stuttering (CWS-ePersisted) from those who will recover from stuttering (CWS-eRecovered). Method: Fifty-six 5-year-old children with normal receptive language listened to naturally spoken sentences in a story context...
November 9, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29094994/keep-listening-grammatical-context-reduces-but-does-not-eliminate-activation-of-unexpected-words
#15
Julia F Strand, Violet A Brown, Hunter E Brown, Jeffrey J Berg
To understand spoken language, listeners combine acoustic-phonetic input with expectations derived from context (Dahan & Magnuson, 2006). Eye-tracking studies on semantic context have demonstrated that the activation levels of competing lexical candidates depend on the relative strengths of the bottom-up input and top-down expectations (cf. Dahan & Tanenhaus, 2004). In the grammatical realm, however, graded effects of context on lexical competition have been predicted (Magnuson, Tanenhaus, & Aslin, 2008), but not demonstrated...
November 2, 2017: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/29061700/development-of-the-visual-word-form-area-requires-visual-experience-evidence-from-blind-braille-readers
#16
Judy S Kim, Shipra Kanjlia, Lotfi B Merabet, Marina Bedny
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the fronto-temporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA...
October 23, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/29049620/diagnosing-and-screening-in-a-minority-language-a-validation-study
#17
Melanie A Zokoll, Kirsten C Wagener, Birger Kollmeier
Purpose: The Turkish Digit Triplet Test for hearing self-screening and the Turkish Matrix Test (TURMatrix) for follow-up hearing diagnostics offer an automated closed-set format in which patients choose from a set of response alternatives. Their applicability for testing Turkish-speaking patients in their native language by German audiologists with varying Turkish language skills was investigated. Method: Tests were composed of spoken numbers (Turkish Digit Triplet Test) or sentences (TURMatrix)...
October 12, 2017: American Journal of Audiology
https://www.readbyqxmd.com/read/28964276/a-comparison-of-speech-intonation-production-and-perception-abilities-of-farsi-speaking-cochlear-implanted-and-normal-hearing-children
#18
COMPARATIVE STUDY
Narges Moein, Seyyedeh Maryam Khoddami, Mohammad Rahim Shahbodaghi
INTRODUCTION: A cochlear implant prosthesis facilitates spoken language development and speech comprehension in children with severe-to-profound hearing loss. However, this prosthesis is limited in encoding information about fundamental frequency and pitch, which are essential for the recognition of speech prosody. The purpose of the present study was to investigate the perception and production of intonation in children with cochlear implants and to compare them with children with normal hearing. METHOD: This study was carried out on 25 cochlear-implanted children and 50 children with normal hearing...
October 2017: International Journal of Pediatric Otorhinolaryngology
https://www.readbyqxmd.com/read/28964051/effect-of-early-dialectal-exposure-on-adult-perception-of-phonemic-vowel-length
#19
Hui Chen, Xu Rattanasone, Felicity Cox, Katherine Demuth
Attunement to native phonological categories and the specification of relevant phonological features in the lexicon occur early in development for monolingual and monodialectal speakers. However, few studies have investigated whether and how early exposure to two dialects of a language might influence the development of phonological categories, especially when a phonemic contrast exists only in one dialect. This study compared perceptual sensitivity to mispronunciations in phonemic vowel length in Australian English adult listeners with and without early exposure to another English dialect that did not have this contrast...
September 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28917133/waiting-for-lexical-access-cochlear-implants-or-severely-degraded-input-lead-listeners-to-process-speech-less-incrementally
#20
Bob McMurray, Ashley Farris-Trimble, Hannah Rigler
Spoken language unfolds over time. Consequently, there are brief periods of ambiguity, when incomplete input can match many possible words. Typical listeners solve this problem by immediately activating multiple candidates which compete for recognition. In two experiments using the visual world paradigm, we examined real-time lexical competition in prelingually deaf cochlear implant (CI) users, and normal hearing (NH) adults listening to severely degraded speech. In Experiment 1, adolescent CI users and NH controls matched spoken words to arrays of pictures including pictures of the target word and phonological competitors...
December 2017: Cognition

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

The operators above can be combined in a single query, as in the example below; a sketch of how such a query could be evaluated is given after these tips.

(heart or cardiac or cardio*) AND arrest -"American Heart Association"
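The tips above amount to a small boolean query language. The Python sketch below is purely illustrative and is not Read by QxMD's actual search implementation: it hard-codes the combined example query and evaluates it against a few sample titles using the standard re module. The helper names has_term, has_phrase, and matches, as well as the sample titles, are hypothetical.

    import re


    def has_term(text, term):
        # Whole-word, case-insensitive match. A trailing * widens the match to
        # any word beginning with the given stem (e.g. cardio* -> cardiovascular).
        if term.endswith("*"):
            pattern = r"\b" + re.escape(term[:-1]) + r"\w*"
        else:
            pattern = r"\b" + re.escape(term) + r"\b"
        return re.search(pattern, text, re.IGNORECASE) is not None


    def has_phrase(text, phrase):
        # Exact (quoted) phrase, case-insensitive.
        return re.search(re.escape(phrase), text, re.IGNORECASE) is not None


    def matches(text):
        # Hand-coded evaluation of the combined example query:
        #   (heart or cardiac or cardio*) AND arrest -"American Heart Association"
        # OR maps to Python's `or`, AND to `and`, and the minus sign to `not`.
        return (
            (has_term(text, "heart") or has_term(text, "cardiac") or has_term(text, "cardio*"))
            and has_term(text, "arrest")
            and not has_phrase(text, "American Heart Association")
        )


    if __name__ == "__main__":
        titles = [
            "Outcomes after out-of-hospital cardiac arrest",                 # matches
            "Cardiac arrest guidelines of the American Heart Association",   # excluded phrase
            "Respiratory arrest in children",                                # no heart/cardiac/cardio* term
        ]
        for title in titles:
            print(matches(title), "-", title)

Word-boundary regular expressions keep a term like "heart" from matching inside unrelated words, and the trailing asterisk is translated into an open-ended stem match; run as-is, the script prints True for the first title and False for the other two.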