Read by QxMD

Listening and spoken language

https://www.readbyqxmd.com/read/28917133/waiting-for-lexical-access-cochlear-implants-or-severely-degraded-input-lead-listeners-to-process-speech-less-incrementally
#1
Bob McMurray, Ashley Farris-Trimble, Hannah Rigler
Spoken language unfolds over time. Consequently, there are brief periods of ambiguity, when incomplete input can match many possible words. Typical listeners solve this problem by immediately activating multiple candidates which compete for recognition. In two experiments using the visual world paradigm, we examined real-time lexical competition in prelingually deaf cochlear implant (CI) users, and normal hearing (NH) adults listening to severely degraded speech. In Experiment 1, adolescent CI users and NH controls matched spoken words to arrays of pictures including pictures of the target word and phonological competitors...
December 2017: Cognition
https://www.readbyqxmd.com/read/28863583/speech-rate-rate-matching-and-intelligibility-in-early-implanted-cochlear-implant-users
#2
Valerie Freeman, David B Pisoni
An important speech-language outcome for deaf people with cochlear implants is speech intelligibility: how well their speech is understood by others, which also affects social functioning. Beyond simply uttering recognizable words, other speech-language skills may affect communicative competence, including rate-matching, or converging toward interlocutors' speech rates. This initial report examines speech rate-matching and its relations to intelligibility in 91 prelingually deaf cochlear implant users and 93 typically hearing peers aged 3 to 27 years...
August 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28803218/do-you-hear-feather-when-listening-to-rain-lexical-tone-activation-during-unconscious-translation-evidence-from-mandarin-english-bilinguals
#3
Xin Wang, Juan Wang, Jeffrey G Malins
Although lexical tone is a highly prevalent phonetic cue in human languages, its role in bilingual spoken word recognition is not well understood. The present study investigates whether and how adult bilinguals, who use pitch contours to disambiguate lexical items in one language but not the other, access a tonal L1 when exclusively processing a non-tonal L2. Using the visual world paradigm, we show that Mandarin-English listeners automatically activated Mandarin translation equivalents of English target words such as 'rain' (Mandarin 'yu3'), and consequently were distracted by competitors whose segments and tones overlapped with the translations of English target words ('feather', also 'yu3' in Mandarin)...
December 2017: Cognition
https://www.readbyqxmd.com/read/28791625/language-driven-anticipatory-eye-movements-in-virtual-reality
#4
Nicole Eichert, David Peeters, Peter Hagoort
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects...
August 8, 2017: Behavior Research Methods
https://www.readbyqxmd.com/read/28783524/input-matters-speed-of-word-recognition-in-2-year-olds-exposed-to-multiple-accents
#5
Helen Buckler, Sara Oczak-Arsic, Nazia Siddiqui, Elizabeth K Johnson
Although studies investigating language abilities in young children exposed to more than one language have become common, there is still surprisingly little research examining language development in children exposed to more than one accent. Here, we report two looking-while-listening experiments examining the impact of routine home exposure to multiple accents on 2-year-olds' word recognition abilities. In Experiment 1, we found that monolingual English-learning 24-month-olds who routinely receive exposure to both Canadian English and a non-native variant of English are less efficient in their recognition of familiar words spoken in Canadian English than monolingual English-learning 24-month-olds who hear only Canadian English at home...
August 4, 2017: Journal of Experimental Child Psychology
https://www.readbyqxmd.com/read/28782967/the-effect-of-background-noise-on-the-word-activation-process-in-nonnative-spoken-word-recognition
#6
Odette Scharenborg, Juul M J Coumans, Roeland van Hout
This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise...
August 7, 2017: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/28727776/language-related-differences-of-the-sustained-response-evoked-by-natural-speech-sounds
#7
Christina Siu-Dschu Fan, Xingyu Zhu, Hans Günter Dosch, Christiane von Stutterheim, André Rupp
In tonal languages, such as Mandarin Chinese, the pitch contour of vowels discriminates lexical meaning, which is not the case in non-tonal languages such as German. Recent data provide evidence that pitch processing is influenced by language experience. However, there are still many open questions concerning the representation of such phonological and language-related differences at the level of the auditory cortex (AC). Using magnetoencephalography (MEG), we recorded transient and sustained auditory evoked fields (AEF) in native Chinese and German speakers to investigate language-related phonological and semantic aspects in the processing of acoustic stimuli...
2017: PloS One
https://www.readbyqxmd.com/read/28701977/foreign-languages-sound-fast-evidence-from-implicit-rate-normalization
#8
Hans Rutger Bosker, Eva Reinisch
Anecdotal evidence suggests that unfamiliar languages sound faster than one's native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages (FLs) have effects on implicit speech processing. Our measure of implicit rate perception was "normalization for speech rate": an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28685249/memory-for-conversation-and-the-development-of-common-ground
#9
Geoffrey L McKinley, Sarah Brown-Schmidt, Aaron S Benjamin
Efficient conversation is guided by the mutual knowledge, or common ground, that interlocutors form as a conversation progresses. Characterized from the perspective of commonly used measures of memory, efficient conversation should be closely associated with item memory (what was said) and context memory (who said what to whom). However, few studies have explicitly probed memory to evaluate what type of information is maintained following a communicative exchange. The current study examined how item and context memory relate to the development of common ground over the course of a conversation, and how these forms of memory vary as a function of one's role in a conversation as speaker or listener...
July 6, 2017: Memory & Cognition
https://www.readbyqxmd.com/read/28618823/effects-of-rhythm-and-phrase-final-lengthening-on-word-spotting-in-korean
#10
Hae-Sung Jeon, Amalia Arvaniti
A word-spotting experiment was conducted to investigate whether rhythmic consistency and phrase-final lengthening facilitate performance in Korean. Listeners had to spot disyllabic and trisyllabic words in nonsense strings organized in phrases with either the same or variable syllable count; phrase-final lengthening was absent, or occurring either in all phrases or only in the phrase immediately preceding the target. The results show that, for disyllabic targets, inconsistent syllable count and lengthening before the target led to fewer errors...
June 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28602134/the-effects-of-early-auditory-based-intervention-on-adult-bilateral-cochlear-implant-outcomes
#11
Stacey R Lim
OBJECTIVES: The goal of this exploratory study was to determine the types of improvement that sequentially implanted auditory-verbal and auditory-oral adults with prelingual and childhood hearing loss received in bilateral listening conditions, compared to their best unilateral listening condition. METHODS: Five auditory-verbal adults and five auditory-oral adults were recruited for this study. Participants were seated in the center of a 6-loudspeaker array. BKB-SIN sentences were presented from 0° azimuth, while multi-talker babble was presented from various loudspeakers...
June 12, 2017: Cochlear Implants International
https://www.readbyqxmd.com/read/28559320/visual-cortex-entrains-to-sign-language
#12
Geoffrey Brookshire, Jenny Lu, Howard C Nusbaum, Susan Goldin-Meadow, Daniel Casasanto
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality...
June 13, 2017: Proceedings of the National Academy of Sciences of the United States of America
https://www.readbyqxmd.com/read/28554086/exposure-to-multiple-accents-supports-infants-understanding-of-novel-accents
#13
Christine E Potter, Jenny R Saffran
Accented speech poses a challenge for listeners, particularly those with limited knowledge of their language. In a series of studies, we explored the possibility that experience with variability, specifically the variability provided by multiple accents, would facilitate infants' comprehension of speech produced with an unfamiliar accent. 15- and 18-month-old American-English learning infants were exposed to brief passages of multi-talker speech and subsequently tested on their ability to distinguish between real, familiar words and nonsense words, produced in either their native accent or an unfamiliar (British) accent...
September 2017: Cognition
https://www.readbyqxmd.com/read/28536551/dissociating-effects-of-scrambling-and-topicalization-within-the-left-frontal-and-temporal-language-areas-an-fmri-study-in-kaqchikel-maya
#14
Shinri Ohta, Masatoshi Koizumi, Kuniyoshi L Sakai
Some natural languages grammatically allow different types of changing word orders, such as object scrambling and topicalization. Scrambling and topicalization are more related to syntax and semantics/phonology, respectively. Here we hypothesized that scrambling should activate the left frontal regions, while topicalization would affect the bilateral temporal regions. To examine such distinct effects in our functional magnetic resonance imaging study, we targeted the Kaqchikel Maya language, a Mayan language spoken in Guatemala...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28525641/erp-correlates-of-motivating-voices-quality-of-motivation-and-time-course-matters
#15
Konstantina Zougkou, Netta Weinstein, Silke Paulmann
Here, we conducted the first study to explore how motivations expressed through speech are processed in real-time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g., "You absolutely have to do it my way" spoken in a controlling tone of voice), or lacked motivationally biasing words (e...
May 19, 2017: Social Cognitive and Affective Neuroscience
https://www.readbyqxmd.com/read/28496393/subtlety-of-ambient-language-effects-in-babbling-a-study-of-english-and-chinese-learning-infants-at-8-10-and-12-months
#16
Chia-Cheng Lee, Yuna Jhang, Li-Mei Chen, George Relyea, D Kimbrough Oller
Prior research on ambient-language effects in babbling has often suggested infants produce language-specific phonological features within the first year. These results have been questioned in research failing to find such effects and challenging the positive findings on methodological grounds. We studied English- and Chinese-learning infants at 8, 10, and 12 months and found listeners could not detect ambient-language effects in the vast majority of infant utterances, but only in items deemed to be words or to contain canonical syllables that may have made them sound like words with language-specific shapes...
2017: Language Learning and Development
https://www.readbyqxmd.com/read/28462503/influences-of-speech-familiarity-on-immediate-perception-and-final-comprehension
#17
Lynn K Perry, Emily N Mech, Maryellen C MacDonald, Mark S Seidenberg
Unfamiliar speech (spoken in a familiar language but with an accent different from the listener's) is known to increase comprehension difficulty. However, there is evidence of listeners' rapid adaptation to unfamiliar accents (although perhaps not to the level of familiar accents). This paradox might emerge from prior focus on isolated word perception and/or use of single comprehension measures. We investigated processing of fluent connected speech spoken either in a familiar or unfamiliar accent, using participants' ability to "shadow" the speech as an immediate measure as well as a comprehension test at passage end...
May 1, 2017: Psychonomic Bulletin & Review
https://www.readbyqxmd.com/read/28418532/auditory-environment-across-the-life-span-of-cochlear-implant-users-insights-from-data-logging
#18
Tobias Busch, Filiep Vanpoucke, Astrid van Wieringen
Purpose: We describe the natural auditory environment of people with cochlear implants (CIs), how it changes across the life span, and how it varies between individuals. Method: We performed a retrospective cross-sectional analysis of Cochlear Nucleus 6 CI sound-processor data logs. The logs were obtained from 1,501 people with CIs (ages 0-96 years). They covered over 2.4 million hr of implant use and indicated how much time the CI users had spent in various acoustical environments...
May 24, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/28406683/evaluating-the-sources-and-functions-of-gradiency-in-phoneme-categorization-an-individual-differences-approach
#19
Efthymia C Kapnoula, Matthew B Winn, Eun Jong Kong, Jan Edwards, Bob McMurray
During spoken language comprehension listeners transform continuous acoustic cues into categories (e.g., /b/ and /p/). While long-standing research suggests that phonetic categories are activated in a gradient way, there are also clear individual differences in that more gradient categorization has been linked to various communication impairments such as dyslexia and specific language impairments (Joanisse, Manis, Keating, & Seidenberg, 2000; López-Zamora, Luque, Álvarez, & Cobos, 2012; Serniclaes, Van Heghe, Mousty, Carré, & Sprenger-Charolles, 2004; Werker & Tees, 1987)...
September 2017: Journal of Experimental Psychology. Human Perception and Performance
https://www.readbyqxmd.com/read/28365876/perception-of-native-english-reduced-forms-in-adverse-environments-by-chinese-undergraduate-students
#20
Simpson W L Wong, Jenny K Y Tsui, Bonnie Wing-Yin Chow, Vina W H Leung, Peggy Mok, Kevin Kien-Hoa Chung
Previous research has shown that learners of English-as-a-second-language (ESL) have difficulties in understanding connected speech spoken by native English speakers. Extending past research, which was limited to quiet listening conditions, this study examined the perception of English connected speech presented under five adverse conditions, namely multi-talker babble noise, speech-shaped noise, factory noise, whispering, and sad emotional tones. We tested a total of 64 Chinese ESL undergraduate students, using a battery of listening tasks...
April 1, 2017: Journal of Psycholinguistic Research

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the minus sign

Virchow -triad

Use parentheses to group terms

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

These can be combined in a single query:

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
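To make the query semantics above concrete, here is a minimal sketch of how such a boolean query could be evaluated against a document. This is an illustration only, not QxMD's actual search implementation; the function names (`evaluate`, `matches_term`) and the design choice of giving AND higher precedence than OR are assumptions. It supports AND/OR, parentheses, minus-sign exclusion, asterisk word stems, and quoted phrases, with adjacent terms treated as an implicit AND.

```python
import re

def tokenize(query):
    # Tokens: parens, optionally negated quoted phrases, and bare terms.
    return re.findall(r'\(|\)|-?"[^"]*"|[^\s()]+', query)

def matches_term(term, words, text):
    if term.startswith('"') and term.endswith('"'):
        return term[1:-1].lower() in text              # exact phrase: substring match
    if term.endswith('*'):
        stem = term[:-1].lower()
        return any(w.startswith(stem) for w in words)  # word-stem match
    return term.lower() in words                       # plain whole-word match

def evaluate(query, text):
    """Return True if the document `text` satisfies the boolean `query`."""
    text = text.lower()
    words = set(re.findall(r"[\w']+", text))
    tokens = tokenize(query)
    pos = 0

    def parse_or():          # lowest precedence: a OR b
        nonlocal pos
        result = parse_and()
        while pos < len(tokens) and tokens[pos].upper() == 'OR':
            pos += 1
            result = parse_and() or result
        return result

    def parse_and():         # higher precedence: a AND b (AND may be implicit)
        nonlocal pos
        result = parse_unit()
        while pos < len(tokens) and tokens[pos] != ')' and tokens[pos].upper() != 'OR':
            if tokens[pos].upper() == 'AND':
                pos += 1
            result = parse_unit() and result
        return result

    def parse_unit():        # a term, a negated term, or a parenthesized group
        nonlocal pos
        tok = tokens[pos]
        if tok == '(':
            pos += 1
            result = parse_or()
            pos += 1         # skip the closing ')'
            return result
        pos += 1
        if tok.startswith('-'):
            return not matches_term(tok[1:], words, text)
        return matches_term(tok, words, text)

    return parse_or()
```

For example, `evaluate('water AND (cup OR glass)', 'a glass of water')` is true, while `evaluate('Virchow -triad', 'Virchow triad')` is false because the minus term excludes the document.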