Read by QxMD

Listening and spoken language

https://www.readbyqxmd.com/read/29160743/left-inferior-frontal-gyrus-sensitivity-to-phonetic-competition-in-receptive-language-processing-a-comparison-of-clear-and-conversational-speech
#1
Xin Xie, Emily Myers
The speech signal is rife with variations in phonetic ambiguity. For instance, when talkers speak in a conversational register, they demonstrate less articulatory precision, leading to greater potential for confusability at the phonetic level compared with a clear speech register. Current psycholinguistic models assume that ambiguous speech sounds activate more than one phonological category and that competition at prelexical levels cascades to lexical levels of processing. Imaging studies have shown that the left inferior frontal gyrus (LIFG) is modulated by phonetic competition between simultaneously activated categories, with increases in activation for more ambiguous tokens...
November 21, 2017: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/29154035/mapping-language-to-visual-referents-does-the-degree-of-image-realism-matter
#2
Raheleh Saryazdi, Craig G Chambers
Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images...
November 15, 2017: Acta Psychologica
https://www.readbyqxmd.com/read/29141698/disfluencies-signal-reference-to-novel-objects-for-adults-but-not-children
#3
Sarah J Owens, Justine M Thacker, Susan A Graham
Speech disfluencies can guide the ways in which listeners interpret spoken language. Here, we examined whether three-year-olds, five-year-olds, and adults use filled pauses to anticipate that a speaker is likely to refer to a novel object. Across three experiments, participants were presented with pairs of novel and familiar objects and heard a speaker refer to one of the objects using a fluent ("Look at the ball/lep!") or disfluent ("Look at thee uh ball/lep!") expression. The salience of the speaker's unfamiliarity with the novel referents, and the way in which the speaker referred to the novel referents (i...
November 16, 2017: Journal of Child Language
https://www.readbyqxmd.com/read/29114770/infant-directed-speech-enhances-attention-to-speech-in-deaf-infants-with-cochlear-implants
#4
Yuanyuan Wang, Tonya R Bergeson, Derek M Houston
Purpose: Both theoretical models of infant language acquisition and empirical studies posit important roles for attention to speech in early language development. However, deaf infants with cochlear implants (CIs) show reduced attention to speech as compared with their peers with normal hearing (NH; Horn, Davis, Pisoni, & Miyamoto, 2005; Houston, Pisoni, Kirk, Ying, & Miyamoto, 2003), which may affect their acquisition of spoken language. The main purpose of this study was to determine (a) whether infant-directed speech (IDS) enhances attention to speech in infants with CIs, as compared with adult-directed speech (ADS), and (b) whether the degree to which infants with CIs pay attention to IDS is associated with later language outcomes...
November 9, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29098269/neural-indices-of-semantic-processing-in-early-childhood-distinguish-eventual-stuttering-persistence-and-recovery
#5
Kathryn Kreidler, Amanda Hampton Wray, Evan Usler, Christine Weber
Purpose: Maturation of neural processes for language may lag in some children who stutter (CWS), and event-related potentials (ERPs) distinguish CWS who have recovered from those who have persisted. The current study explores whether ERPs indexing semantic processing may distinguish children who will eventually persist in stuttering (CWS-ePersisted) from those who will recover from stuttering (CWS-eRecovered). Method: Fifty-six 5-year-old children with normal receptive language listened to naturally spoken sentences in a story context...
November 9, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29094994/keep-listening-grammatical-context-reduces-but-does-not-eliminate-activation-of-unexpected-words
#6
Julia F Strand, Violet A Brown, Hunter E Brown, Jeffrey J Berg
To understand spoken language, listeners combine acoustic-phonetic input with expectations derived from context (Dahan & Magnuson, 2006). Eye-tracking studies on semantic context have demonstrated that the activation levels of competing lexical candidates depend on the relative strengths of the bottom-up input and top-down expectations (cf. Dahan & Tanenhaus, 2004). In the grammatical realm, however, graded effects of context on lexical competition have been predicted (Magnuson, Tanenhaus, & Aslin, 2008), but not demonstrated...
November 2, 2017: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/29061700/development-of-the-visual-word-form-area-requires-visual-experience-evidence-from-blind-braille-readers
#7
Judy S Kim, Shipra Kanjlia, Lotfi B Merabet, Marina Bedny
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the fronto-temporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA...
October 23, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/29049620/diagnosing-and-screening-in-a-minority-language-a-validation-study
#8
Melanie A Zokoll, Kirsten C Wagener, Birger Kollmeier
Purpose: The Turkish Digit Triplet Test for hearing self-screening purposes and the Turkish Matrix Test (TURMatrix) for follow-up hearing diagnostics offer an automated closed-set response format where patients respond by choosing from response alternatives. Their applicability for testing Turkish-speaking patients in their native language by German audiologists with different Turkish language skills was investigated. Method: Tests were composed of spoken numbers (Turkish Digit Triplet Test) or sentences (TURMatrix)...
October 12, 2017: American Journal of Audiology
https://www.readbyqxmd.com/read/28964276/a-comparison-of-speech-intonation-production-and-perception-abilities-of-farsi-speaking-cochlear-implanted-and-normal-hearing-children
#9
COMPARATIVE STUDY
Narges Moein, Seyyedeh Maryam Khoddami, Mohammad Rahim Shahbodaghi
INTRODUCTION: Cochlear implant prostheses facilitate spoken language development and speech comprehension in children with severe to profound hearing loss. However, these prostheses are limited in encoding information about fundamental frequency and pitch, which are essential for the recognition of speech prosody. The purpose of the present study was to investigate the perception and production of intonation in children with cochlear implants and to compare them with children with normal hearing. METHOD: This study was carried out on 25 children with cochlear implants and 50 children with normal hearing...
October 2017: International Journal of Pediatric Otorhinolaryngology
https://www.readbyqxmd.com/read/28964051/effect-of-early-dialectal-exposure-on-adult-perception-of-phonemic-vowel-length
#10
Hui Chen, Xu Rattanasone, Felicity Cox, Katherine Demuth
Attunement to native phonological categories and the specification of relevant phonological features in the lexicon occur early in development for monolingual and monodialectal speakers. However, few studies have investigated whether and how early exposure to two dialects of a language might influence the development of phonological categories, especially when a phonemic contrast exists only in one dialect. This study compared perceptual sensitivity to mispronunciations in phonemic vowel length in Australian English adult listeners with and without early exposure to another English dialect that did not have this contrast...
September 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28917133/waiting-for-lexical-access-cochlear-implants-or-severely-degraded-input-lead-listeners-to-process-speech-less-incrementally
#11
Bob McMurray, Ashley Farris-Trimble, Hannah Rigler
Spoken language unfolds over time. Consequently, there are brief periods of ambiguity, when incomplete input can match many possible words. Typical listeners solve this problem by immediately activating multiple candidates which compete for recognition. In two experiments using the visual world paradigm, we examined real-time lexical competition in prelingually deaf cochlear implant (CI) users, and normal hearing (NH) adults listening to severely degraded speech. In Experiment 1, adolescent CI users and NH controls matched spoken words to arrays of pictures including pictures of the target word and phonological competitors...
December 2017: Cognition
https://www.readbyqxmd.com/read/28863583/speech-rate-rate-matching-and-intelligibility-in-early-implanted-cochlear-implant-users
#12
Valerie Freeman, David B Pisoni
An important speech-language outcome for deaf people with cochlear implants is speech intelligibility: how well their speech is understood by others, which also affects social functioning. Beyond simply uttering recognizable words, other speech-language skills may affect communicative competence, including rate-matching, or converging toward interlocutors' speech rates. This initial report examines speech rate-matching and its relations to intelligibility in 91 prelingually deaf cochlear implant users and 93 typically hearing peers age 3 to 27 years...
August 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28803218/do-you-hear-feather-when-listening-to-rain-lexical-tone-activation-during-unconscious-translation-evidence-from-mandarin-english-bilinguals
#13
Xin Wang, Juan Wang, Jeffrey G Malins
Although lexical tone is a highly prevalent phonetic cue in human languages, its role in bilingual spoken word recognition is not well understood. The present study investigates whether and how adult bilinguals, who use pitch contours to disambiguate lexical items in one language but not the other, access a tonal L1 when exclusively processing a non-tonal L2. Using the visual world paradigm, we show that Mandarin-English listeners automatically activated Mandarin translation equivalents of English target words such as 'rain' (Mandarin 'yu3'), and consequently were distracted by competitors whose segments and tones overlapped with the translations of English target words ('feather', also 'yu3' in Mandarin)...
December 2017: Cognition
https://www.readbyqxmd.com/read/28791625/language-driven-anticipatory-eye-movements-in-virtual-reality
#14
Nicole Eichert, David Peeters, Peter Hagoort
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects...
August 8, 2017: Behavior Research Methods
https://www.readbyqxmd.com/read/28783524/input-matters-speed-of-word-recognition-in-2-year-olds-exposed-to-multiple-accents
#15
Helen Buckler, Sara Oczak-Arsic, Nazia Siddiqui, Elizabeth K Johnson
Although studies investigating language abilities in young children exposed to more than one language have become common, there is still surprisingly little research examining language development in children exposed to more than one accent. Here, we report two looking-while-listening experiments examining the impact of routine home exposure to multiple accents on 2-year-olds' word recognition abilities. In Experiment 1, we found that monolingual English-learning 24-month-olds who routinely receive exposure to both Canadian English and a non-native variant of English are less efficient in their recognition of familiar words spoken in Canadian English than monolingual English-learning 24-month-olds who hear only Canadian English at home...
August 4, 2017: Journal of Experimental Child Psychology
https://www.readbyqxmd.com/read/28782967/the-effect-of-background-noise-on-the-word-activation-process-in-nonnative-spoken-word-recognition
#16
Odette Scharenborg, Juul M J Coumans, Roeland van Hout
This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise...
August 7, 2017: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/28727776/language-related-differences-of-the-sustained-response-evoked-by-natural-speech-sounds
#17
Christina Siu-Dschu Fan, Xingyu Zhu, Hans Günter Dosch, Christiane von Stutterheim, André Rupp
In tonal languages, such as Mandarin Chinese, the pitch contour of vowels discriminates lexical meaning, which is not the case in non-tonal languages such as German. Recent data provide evidence that pitch processing is influenced by language experience. However, there are still many open questions concerning the representation of such phonological and language-related differences at the level of the auditory cortex (AC). Using magnetoencephalography (MEG), we recorded transient and sustained auditory evoked fields (AEF) in native Chinese and German speakers to investigate language-related phonological and semantic aspects in the processing of acoustic stimuli...
2017: PloS One
https://www.readbyqxmd.com/read/28701977/foreign-languages-sound-fast-evidence-from-implicit-rate-normalization
#18
Hans Rutger Bosker, Eva Reinisch
Anecdotal evidence suggests that unfamiliar languages sound faster than one's native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages (FLs) have effects on implicit speech processing. Our measure of implicit rate perception was "normalization for speech rate": an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28685249/memory-for-conversation-and-the-development-of-common-ground
#19
Geoffrey L McKinley, Sarah Brown-Schmidt, Aaron S Benjamin
Efficient conversation is guided by the mutual knowledge, or common ground, that interlocutors form as a conversation progresses. Characterized from the perspective of commonly used measures of memory, efficient conversation should be closely associated with item memory (what was said) and context memory (who said what to whom). However, few studies have explicitly probed memory to evaluate what type of information is maintained following a communicative exchange. The current study examined how item and context memory relate to the development of common ground over the course of a conversation, and how these forms of memory vary as a function of one's role in a conversation as speaker or listener...
November 2017: Memory & Cognition
https://www.readbyqxmd.com/read/28618823/effects-of-rhythm-and-phrase-final-lengthening-on-word-spotting-in-korean
#20
Hae-Sung Jeon, Amalia Arvaniti
A word-spotting experiment was conducted to investigate whether rhythmic consistency and phrase-final lengthening facilitate performance in Korean. Listeners had to spot disyllabic and trisyllabic words in nonsense strings organized in phrases with either the same or variable syllable count; phrase-final lengthening was absent, or occurring either in all phrases or only in the phrase immediately preceding the target. The results show that, for disyllabic targets, inconsistent syllable count and lengthening before the target led to fewer errors...
June 2017: Journal of the Acoustical Society of America