Listening and spoken language

https://www.readbyqxmd.com/read/29785935/the-impact-of-dysphonic-voices-on-children-s-comprehension-of-spoken-language
#1
Johnny C-H Chui, Estella P-M Ma
BACKGROUND: This study investigated the effect of teachers' dysphonic voices on children's listening comprehension. METHODS: One hundred thirty-four grade three and grade four students were recruited from local primary schools in Hong Kong. They listened to six passages, three in Cantonese and three in English, each read in a normal, mildly dysphonic, or severely dysphonic voice. After listening to each passage, the students completed six multiple-choice comprehension questions...
May 18, 2018: Journal of Voice: Official Journal of the Voice Foundation
https://www.readbyqxmd.com/read/29778278/new-evidence-of-a-rhythmic-priming-effect-that-enhances-grammaticality-judgments-in-children
#2
Alexander Chern, Barbara Tillmann, Chloe Vaughan, Reyna L Gordon
Musical rhythm and the grammatical structure of language share a surprising number of characteristics that may be intrinsically related in child development. The current study aimed to understand the potential influence of musical rhythmic priming on subsequent spoken grammar task performance in children with typical development who were native speakers of English. Participants (ages 5-8 years) listened to rhythmically regular and irregular musical sequences (within-participants design) followed by blocks of grammatically correct and incorrect sentences upon which they were asked to perform a grammaticality judgment task...
May 16, 2018: Journal of Experimental Child Psychology
https://www.readbyqxmd.com/read/29761835/attention-to-speech-and-spoken-language-development-in-deaf-children-with-cochlear-implants-a-10-year-longitudinal-study
#3
Yuanyuan Wang, Carissa L Shafto, Derek M Houston
Early auditory/language experience plays an important role in language development. In this study, we examined the effects of severe-to-profound hearing loss and subsequent cochlear implantation on the development of attention to speech in children with cochlear implants (CIs). In addition, we investigated the extent to which attention to speech may predict spoken language development in children with CIs. We tested children with CIs and compared them to chronologically age-matched peers with normal hearing (NH) on their attention to speech at four time points post implantation; specifically, less than 1 month, 3 to 6 months, 12 months, and 18 months post implantation...
May 15, 2018: Developmental Science
https://www.readbyqxmd.com/read/29731472/the-impact-of-language-input-on-deaf-and-hard-of-hearing-preschool-children-who-use-listening-and-spoken-language
#4
Ronda Rufsvold, Ye Wang, Maria C Hartman, Sonia B Arora, Elaine R Smolen
The researchers investigated the effects of adult language input on the quantity of language, vocabulary development, and understanding of basic concepts of deaf and hard of hearing (DHH) children who used listening and spoken language. Using audio recording and Language ENvironment Analysis (LENA) software, the study involved 30 preschool DHH children who used spoken language as their communication modality and 11 typically hearing same-age peers. The children's language and the language spoken to them during all waking hours over a 2-day period (16 hours per day) were recorded and analyzed quantitatively and were compared to the children's performance on the Boehm Test of Basic Concepts and the Peabody Picture Vocabulary Test...
2018: American Annals of the Deaf
https://www.readbyqxmd.com/read/29716257/assessing-the-importance-of-several-acoustic-properties-to-the-perception-of-spontaneous-speech
#5
Ryan G Podlubny, Terrance M Nearey, Grzegorz Kondrak, Benjamin V Tucker
Spoken language manifests itself as change over time in various acoustic dimensions. While it seems clear that acoustic-phonetic information in the speech signal is key to language processing, little is currently known about which specific types of acoustic information are relatively more informative to listeners. This problem is likely compounded when considering reduced speech: Which specific acoustic information do listeners rely on when encountering spoken forms that are highly variable, and often include altered or elided segments? This work explores contributions of spectral shape, f0 contour, target duration, and time varying intensity in the perception of reduced speech...
April 2018: Journal of the Acoustical Society of America
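The four acoustic dimensions examined in #5, spectral shape, f0 contour, target duration, and time-varying intensity, can each be estimated from a recording with standard tools. The sketch below pulls rough versions of these measures from a WAV file using librosa; it is only an illustration of the dimensions named in the abstract, not the authors' analysis pipeline, and the file name is a placeholder.
```python
# Illustrative only: rough estimates of the four acoustic dimensions named in
# entry #5 (spectral shape, f0 contour, duration, time-varying intensity).
# Not the authors' pipeline; "token.wav" is a placeholder file name.
import librosa
import numpy as np

y, sr = librosa.load("token.wav", sr=None)            # waveform and sample rate

duration_s = len(y) / sr                               # target duration

f0, voiced_flag, voiced_prob = librosa.pyin(           # f0 contour (pitch track)
    y, fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"), sr=sr)

intensity = librosa.feature.rms(y=y)[0]                # time-varying intensity (RMS per frame)

mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # coarse spectral-shape summary

print(f"duration: {duration_s:.2f} s")
print(f"mean f0 (voiced frames): {np.nanmean(f0):.1f} Hz")
print(f"RMS frames: {intensity.shape[0]}, MFCC frames: {mfccs.shape[1]}")
```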
https://www.readbyqxmd.com/read/29573881/auditory-and-language-outcomes-in-children-with-unilateral-hearing-loss
#6
Elizabeth M Fitzpatrick, Isabelle Gaboury, Andrée Durieux-Smith, Doug Coyle, JoAnne Whittingham, Flora Nassrallah
OBJECTIVES: Children with unilateral hearing loss (UHL) are being diagnosed at younger ages because of newborn hearing screening. Historically, they have been considered at risk for difficulties in listening and language development. Little information is available on contemporary cohorts of children identified in the early months of life. We examined auditory and language acquisition outcomes in a contemporary cohort of early-identified children with UHL and compared their outcomes at preschool age with peers with mild bilateral loss and with normal hearing...
March 13, 2018: Hearing Research
https://www.readbyqxmd.com/read/29560782/early-l2-spoken-word-recognition-combines-input-based-and-knowledge-based-processing
#7
Seth Wiener, Kiwako Ito, Shari R Speer
This study examines the perceptual trade-off between knowledge of a language's statistical regularities and reliance on the acoustic signal during L2 spoken word recognition. We test how early learners track and make use of segmental and suprasegmental cues and their relative frequencies during non-native word recognition. English learners of Mandarin were taught an artificial tonal language in which a tone's informativeness for word identification varied according to neighborhood density. The stimuli mimicked Mandarin's uneven distribution of syllable+tone combinations by varying syllable frequency and the probability of particular tones co-occurring with a particular syllable...
March 1, 2018: Language and Speech
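The key manipulation in #7 is a skewed co-occurrence distribution: some syllables are frequent, and within a given syllable some tones are far more likely than others, so tone informativeness varies across the lexicon. Below is a minimal sketch of how such a distribution could be specified and sampled to generate training tokens; the syllables, tones, and probabilities are invented for illustration and are not the study's actual materials.
```python
# Illustrative sketch of an uneven syllable+tone co-occurrence distribution,
# loosely in the spirit of entry #7. Syllables, tones, and probabilities are
# invented for illustration; they are not the study's actual stimuli.
import numpy as np

rng = np.random.default_rng(0)

# Relative syllable frequencies (some syllables occur far more often).
syllable_freq = {"ma": 0.5, "to": 0.3, "ki": 0.2}

# P(tone | syllable): for some syllables one tone dominates (tone is less
# informative there); for others the tones are spread more evenly.
tone_given_syllable = {
    "ma": {"T1": 0.8, "T2": 0.1, "T3": 0.1},
    "to": {"T1": 0.4, "T2": 0.4, "T3": 0.2},
    "ki": {"T1": 0.34, "T2": 0.33, "T3": 0.33},
}

def sample_token():
    """Draw one syllable+tone training token from the skewed distribution."""
    syl = rng.choice(list(syllable_freq), p=list(syllable_freq.values()))
    tones = tone_given_syllable[syl]
    tone = rng.choice(list(tones), p=list(tones.values()))
    return f"{syl}+{tone}"

training_set = [sample_token() for _ in range(200)]
print(training_set[:10])
```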
https://www.readbyqxmd.com/read/29435487/auditory-brainstem-responses-to-continuous-natural-speech-in-human-listeners
#8
Ross K Maddox, Adrian K C Lee
Speech is an ecologically essential signal, whose processing crucially involves the subcortical nuclei of the auditory brainstem, but there are few experimental options for studying these early responses in human listeners under natural conditions. While encoding of continuous natural speech has been successfully probed in the cortex with neurophysiological tools such as electroencephalography (EEG) and magnetoencephalography, the rapidity of subcortical response components combined with unfavorable signal-to-noise ratios has prevented application of those methods to the brainstem...
January 2018: ENeuro
https://www.readbyqxmd.com/read/29422530/people-can-create-iconic-vocalizations-to-communicate-various-meanings-to-na%C3%A3-ve-listeners
#9
Marcus Perlman, Gary Lupyan
The innovation of iconic gestures is essential to establishing the vocabularies of signed languages, but might iconicity also play a role in the origin of spoken words? Can people create novel vocalizations that are comprehensible to naïve listeners without prior convention? We launched a contest in which participants submitted non-linguistic vocalizations for 30 meanings spanning actions, humans, animals, inanimate objects, properties, quantifiers and demonstratives. The winner was determined by the ability of naïve listeners to infer the meanings of the vocalizations...
February 8, 2018: Scientific Reports
https://www.readbyqxmd.com/read/29421272/native-language-status-of-the-listener-modulates-the-neural-integration-of-speech-and-iconic-gestures-in-clear-and-adverse-listening-conditions
#10
Linda Drijvers, Asli Özyürek
Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture)...
February 2018: Brain and Language
https://www.readbyqxmd.com/read/29408148/how-struggling-adult-readers-use-contextual-information-when-comprehending-speech-evidence-from-event-related-potentials
#11
Shukhan Ng, Brennan R Payne, Elizabeth A L Stine-Morrow, Kara D Federmeier
We investigated how struggling adult readers make use of sentence context to facilitate word processing when comprehending spoken language, conditions under which print decoding is not a barrier to comprehension. Stimuli were strongly and weakly constraining sentences (as measured by cloze probability), which ended with the most expected word based on those constraints or an unexpected but plausible word. Community-dwelling adults with varying literacy skills listened to continuous speech while their EEG was recorded...
March 2018: International Journal of Psychophysiology
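Cloze probability, the constraint measure in #11, is simply the proportion of norming participants who complete a sentence frame with a particular word. A minimal sketch of that computation follows; the sentence frame and responses are invented for illustration.
```python
# Cloze probability = proportion of norming participants who complete a
# sentence frame with a particular word (the constraint measure in entry #11).
# The frame and responses below are invented for illustration.
from collections import Counter

def cloze_probability(responses, target):
    """Share of completions matching the target word (case-insensitive)."""
    responses = [r.strip().lower() for r in responses]
    return Counter(responses)[target.lower()] / len(responses)

# "The lifeguard saved the child from ..."
completions = ["drowning"] * 18 + ["the water"] * 2
print(cloze_probability(completions, "drowning"))   # 0.9 -> strongly constraining
```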
https://www.readbyqxmd.com/read/29397190/neural-correlates-of-sine-wave-speech-intelligibility-in-human-frontal-and-temporal-cortex
#12
Sattar Khoshkhoo, Matthew K Leonard, Nima Mesgarani, Edward F Chang
Auditory speech comprehension is the result of neural computations that occur in a broad network that includes the temporal lobe auditory cortex and the left inferior frontal cortex. It remains unclear how representations in this network differentially contribute to speech comprehension. Here, we recorded high-density direct cortical activity during a sine-wave speech (SWS) listening task to examine detailed neural speech representations when the exact same acoustic input is comprehended versus not comprehended...
January 31, 2018: Brain and Language
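Sine-wave speech (SWS), the stimulus type in #12, replaces the formants of an utterance with a small number of time-varying sinusoids, discarding most natural acoustic detail while preserving the formant trajectories. The sketch below assumes formant frequency tracks have already been extracted at some frame rate and simply sums phase-accumulated sinusoids; the tracks shown are invented placeholders, not the authors' stimulus-generation code.
```python
# Minimal sine-wave speech synthesis: sum a few sinusoids whose frequencies
# follow (pre-extracted) formant tracks. The tracks below are invented
# placeholders; real SWS would use formant tracks measured from an utterance.
import numpy as np

sr = 16000                       # audio sample rate (Hz)
frame_rate = 100                 # formant tracks sampled at 100 frames/s
n_frames = 150                   # 1.5 s of "speech"

# Placeholder formant tracks (Hz): one row per frame, one column per formant.
t_frames = np.linspace(0, 1, n_frames)
formant_tracks = np.stack([
    500 + 300 * np.sin(2 * np.pi * 1.5 * t_frames),    # F1-like track
    1500 + 400 * np.sin(2 * np.pi * 1.0 * t_frames),   # F2-like track
    2500 + 200 * np.sin(2 * np.pi * 0.7 * t_frames),   # F3-like track
], axis=1)

n_samples = int(n_frames * sr / frame_rate)
signal = np.zeros(n_samples)
for k in range(formant_tracks.shape[1]):
    # Interpolate the frame-rate track up to the audio sample rate.
    f = np.interp(np.arange(n_samples) / sr,
                  np.arange(n_frames) / frame_rate,
                  formant_tracks[:, k])
    phase = 2 * np.pi * np.cumsum(f) / sr      # phase accumulation
    signal += np.sin(phase) / (k + 1)          # weaker higher "formants"

signal /= np.max(np.abs(signal))               # normalize to [-1, 1]
print(signal.shape)                            # (24000,) samples at 16 kHz
```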
https://www.readbyqxmd.com/read/29283604/orthographic-effects-in-second-language-spoken-word-recognition
#13
Qingqing Qu, Zhanling Cui, Markus F Damian
Evidence from both alphabetic and nonalphabetic languages has suggested a role for orthography in the processing of spoken words in individuals' native language (L1). There is less evidence for such effects in nonnative (L2) spoken-word processing. Whereas in L1 orthographic representations are learned only after phonological representations have long been established, in L2 the sound and spelling of words are often learned in conjunction; this might predict stronger orthographic effects in L2 than in L1 spoken processing...
December 28, 2017: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/29278950/the-effect-of-different-speaker-accents-on-sentence-comprehension-in-children-with-speech-sound-disorder
#14
Jennifer Harte, Pauline Frizelle, Fiona Gibbon
There is substantial evidence that a speaker's accent, specifically an unfamiliar accent, can affect the listener's comprehension. In general, this effect holds true for both adults and children as well as those with typical and impaired language. Previous studies have investigated the effect of different accents on individuals with language disorders, but children with speech sound disorders (SSDs) have received little attention. The current study aims to learn more about the ability of children with SSD to process different speaker accents...
December 26, 2017: Clinical Linguistics & Phonetics
https://www.readbyqxmd.com/read/29222559/the-listening-and-spoken-language-data-repository-design-and-project-overview
#15
Tamala S Bradham, Christopher Fonnesbeck, Alice Toll, Barbara F Hecht
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss, as highlighted in Goal 3b of the 2007 Joint Committee on Infant Hearing position statement supplement. Method: The LSL-DR is a multicenter, international data repository for recording and tracking the demographics and longitudinal outcomes achieved by children with hearing loss who are enrolled in private, specialized programs focused on supporting listening and spoken language development...
January 9, 2018: Language, Speech, and Hearing Services in Schools
https://www.readbyqxmd.com/read/29217685/scale-free-amplitude-modulation-of-neuronal-oscillations-tracks-comprehension-of-accelerated-speech
#16
Ana Filipa Teixeira Borges, Anne-Lise Giraud, Huibert D Mansvelder, Klaus Linkenkaer-Hansen
Speech comprehension is preserved up to a threefold acceleration, but deteriorates rapidly at higher speeds. Current models posit that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech into syllabic units using δ/θ oscillations. Here, we investigated whether the involvement of neuronal oscillations in processing accelerated speech also relates to their scale-free amplitude modulation as indexed by the strength of long-range temporal correlations (LRTC). We recorded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensible rates, at a mostly unintelligible rate and at this same speed interleaved with silence gaps...
January 17, 2018: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
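Long-range temporal correlations (LRTC) in oscillation amplitude envelopes, the index used in #16, are commonly quantified with detrended fluctuation analysis (DFA): the envelope is mean-removed and cumulatively summed, detrended within windows of increasing size, and the slope of log fluctuation against log window size gives a scaling exponent (near 0.5 for an uncorrelated signal, larger when LRTC are present). The sketch below is a generic DFA implementation for illustration, not the paper's exact pipeline.
```python
# Generic detrended fluctuation analysis (DFA), a standard way to index LRTC
# in an amplitude envelope (as in entry #16). Illustration only, not the
# paper's exact analysis pipeline.
import numpy as np

def dfa_exponent(envelope, window_sizes):
    """Return the DFA scaling exponent of a 1-D signal (~0.5 = no LRTC)."""
    x = np.asarray(envelope, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated, mean-removed signal
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        rms = []
        for w in range(n_windows):
            segment = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, segment, 1), t)   # linear detrend
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    # Slope of log F(n) against log n is the DFA exponent.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(0)
white_noise_envelope = np.abs(rng.standard_normal(10000))
windows = np.unique(np.logspace(1.3, 3, 15).astype(int))
print(dfa_exponent(white_noise_envelope, windows))   # close to 0.5
```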
https://www.readbyqxmd.com/read/29216813/processing-relationships-between-language-being-spoken-and-other-speech-dimensions-in-monolingual-and-bilingual-listeners
#17
Charlotte R Vaughn, Ann R Bradlow
While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Specifically, we examined the relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), and talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3)...
December 2017: Language and Speech
https://www.readbyqxmd.com/read/29216811/phrase-lengths-and-the-perceived-informativeness-of-prosodic-cues-in-turkish
#18
Nazik Dinçtopal Deniz, Janet Dean Fodor
It is known from previous studies that in many cases (though not all) the prosodic properties of a spoken utterance reflect aspects of its syntactic structure, and also that in many cases (though not all) listeners can benefit from these prosodic cues. A novel contribution to this literature is the Rational Speaker Hypothesis (RSH), proposed by Clifton, Carlson and Frazier. The RSH maintains that listeners are sensitive to possible reasons for why a speaker might introduce a prosodic break: "listeners treat a prosodic boundary as more informative about the syntax when it flanks short constituents than when it flanks longer constituents," because in the latter case the speaker might have been motivated solely by consideration of optimal phrase lengths...
December 2017: Language and Speech
https://www.readbyqxmd.com/read/29206841/auditory-word-recognition-of-verbs-effects-of-verb-argument-structure-on-referent-identification
#19
Mònica Sanz-Torrent, Llorenç Andreu, Javier Rodriguez Ferreiro, Marta Coll-Florit, John C Trueswell
Word recognition includes the activation of a range of syntactic and semantic knowledge that is relevant to language interpretation and reference. Here we explored whether the number of arguments a verb takes adversely affects verb processing time. In this study, three experiments compared the dynamics of spoken word recognition for verbs with different preferred argument structures. Listeners' eye movements were recorded as they searched an array of pictures in response to hearing a verb. Results were similar in all the experiments...
2017: PloS One
https://www.readbyqxmd.com/read/29201012/telling-friend-from-foe-listeners-are-unable-to-identify-in-group-and-out-group-members-from-heard-laughter
#20
Marie Ritter, Disa A Sauter
Group membership is important for how we perceive others, but although perceivers can accurately infer group membership from facial expressions and spoken language, it is not clear whether listeners can identify in- and out-group members from non-verbal vocalizations. In the current study, we examined perceivers' ability to identify group membership from non-verbal vocalizations of laughter, testing the following predictions: (1) listeners can distinguish between laughter from different nationalities and (2) between laughter from their in-group, a close out-group, and a distant out-group, and (3) greater exposure to laughter from members of other cultural groups is associated with better performance...
2017: Frontiers in Psychology