https://read.qxmd.com/read/37739864/the-influence-of-multisensory-input-on-voice-perception-and-production-using-immersive-virtual-reality
#21
JOURNAL ARTICLE
Ümit Daşdöğen, Shaheen N Awan, Pasquale Bottalico, Aquiles Iglesias, Nancy Getchell, Katherine Verdolini Abbott
OBJECTIVES: The purpose was to examine the influence of auditory vs visual vs combined audiovisual input on perception and production of one's own voice, using immersive virtual reality technology. METHODS: Thirty-one vocally healthy men and women were investigated under 18 sensory input conditions, using immersive virtual reality technology. Conditions included two auditory rooms with varying reverberation times, two visual rooms with varying volumes, and the combination of audiovisual conditions...
September 20, 2023: Journal of Voice
https://read.qxmd.com/read/37727958/neural-oscillations-reflect-the-individual-differences-in-the-temporal-perception-of-audiovisual-speech
#22
JOURNAL ARTICLE
Zeliang Jiang, Xingwei An, Shuang Liu, Erwei Yin, Ye Yan, Dong Ming
Multisensory integration occurs within a limited time interval between multimodal stimuli. Multisensory temporal perception varies widely among individuals and involves perceptual synchrony and temporal sensitivity processes. Previous studies have explored the neural mechanisms underlying these individual differences with beep-flash stimuli, but no study has done so for speech. In this study, 28 subjects (16 male) performed an audiovisual speech (/ba/) simultaneity judgment task while their electroencephalography was recorded. We examined the relationship between prestimulus neural oscillations (i...
September 19, 2023: Cerebral Cortex
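The simultaneity judgment (SJ) task described above is typically analyzed by fitting a Gaussian to the proportion of "simultaneous" responses across stimulus-onset asynchronies: the curve's peak location estimates perceptual synchrony (the point of subjective simultaneity, PSS) and its width indexes temporal sensitivity. A minimal Python sketch under those assumptions, with invented data rather than the paper's:

```python
# Hedged sketch: estimating PSS and temporal sensitivity from audiovisual
# simultaneity-judgment data. SOAs and response proportions are invented
# for illustration; they are not data from the study above.
import numpy as np
from scipy.optimize import curve_fit

# SOAs in ms (negative = auditory leads) and the proportion of
# "simultaneous" responses at each SOA for one hypothetical subject.
soas = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_simul = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.45, 0.15])

def gaussian(soa, amplitude, pss, sigma):
    """Gaussian SJ model: the peak location is the point of subjective
    simultaneity (PSS); sigma indexes temporal sensitivity (larger sigma
    implies a wider temporal binding window)."""
    return amplitude * np.exp(-((soa - pss) ** 2) / (2.0 * sigma ** 2))

(amplitude, pss, sigma), _ = curve_fit(gaussian, soas, p_simul,
                                       p0=[1.0, 0.0, 100.0])
print(f"PSS = {pss:.1f} ms, sigma = {sigma:.1f} ms")
```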
https://read.qxmd.com/read/37695563/word-learning-in-deaf-adults-who-use-cochlear-implants-the-role-of-talker-variability-and-attention-to-the-mouth
#23
JOURNAL ARTICLE
Jasenia Hartman, Jenny Saffran, Ruth Litovsky
OBJECTIVES: Although cochlear implants (CIs) facilitate spoken language acquisition, many CI listeners experience difficulty learning new words. Studies have shown that highly variable stimulus input and audiovisual cues improve speech perception in CI listeners. However, less is known about whether these two factors improve perception in a word-learning context. Furthermore, few studies have examined how CI listeners direct their gaze to efficiently capture visual information available on a talker's face...
September 11, 2023: Ear and Hearing
https://read.qxmd.com/read/37673794/the-audiovisual-mismatch-negativity-in-predictive-and-non-predictive-speech-stimuli-in-older-adults-with-and-without-hearing-loss
#24
JOURNAL ARTICLE
Melissa Randazzo, Paul J Smith, Ryan Priefer, Deborah R Senzer, Karen Froud
Adults with aging-related hearing loss (ARHL) experience adaptive neural changes to optimize their sensory experiences; for example, enhanced audiovisual (AV) and predictive processing during speech perception. The mismatch negativity (MMN) event-related potential is an index of central auditory processing; however, it has not been explored as an index of AV and predictive processing in adults with ARHL. In a pilot study we examined the AV MMN in two conditions of a passive oddball paradigm - one AV condition in which the visual aspect of the stimulus can predict the auditory percept and one AV control condition in which the visual aspect of the stimulus cannot predict the auditory percept...
September 6, 2023: Multisensory Research
https://read.qxmd.com/read/37666837/improvements-in-naturalistic-speech-in-noise-comprehension-in-middle-aged-and-older-adults-after-3-weeks-of-computer-based-speechreading-training
#25
JOURNAL ARTICLE
Raffael Schmitt, Martin Meyer, Nathalie Giroud
Problems understanding speech in noisy environments are characteristic of age-related hearing loss. Since hearing aids do not mitigate these communication problems in every case, potential alternatives in a clinical rehabilitation plan need to be explored. This study investigates whether computer-based speechreading training improves audiovisual speech perception in noise in a sample of middle-aged and older adults (N = 62, 47-83 years), with 32 participants completing speechreading training and 30 participants in an active control group completing foreign-language training...
September 4, 2023: NPJ Science of Learning
https://read.qxmd.com/read/37626554/investigation-of-cross-language-and-stimulus-dependent-effects-on-the-mcgurk-effect-with-finnish-and-japanese-speakers-and-listeners
#26
JOURNAL ARTICLE
Kaisa Tiippana, Yuta Ujiie, Tarja Peromaa, Kohske Takahashi
In the McGurk effect, perception of a spoken consonant is altered when an auditory (A) syllable is presented with an incongruent visual (V) syllable (e.g., A /pa/ paired with V /ka/ is often heard as /ka/ or /ta/). The McGurk effect provides a measure of visual influence on speech perception, considered stronger the lower the proportion of auditory-correct responses. Cross-language effects are studied to understand processing differences between one's own and foreign languages. The McGurk effect has sometimes been found to be stronger with foreign speakers...
August 13, 2023: Brain Sciences
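The visual-influence measure mentioned in this abstract (a stronger effect, the lower the proportion of auditory-correct responses) can be made concrete as 1 minus the proportion of auditory-correct responses on incongruent trials. A minimal sketch assuming that operationalization (the paper may compute it differently), with invented counts:

```python
# Hedged sketch: one common operationalization of McGurk-effect strength.
# Trial counts below are invented for illustration.
def mcgurk_strength(n_auditory_correct: int, n_trials: int) -> float:
    """Proportion of incongruent trials NOT reported as the auditory
    syllable, i.e. 1 - P(auditory correct); higher = stronger effect."""
    return 1.0 - n_auditory_correct / n_trials

# e.g., a listener reports auditory /pa/ on 12 of 40 A /pa/ + V /ka/ trials:
print(mcgurk_strength(12, 40))  # 0.70 -> strong visual influence
```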
https://read.qxmd.com/read/37626523/the-role-of-talking-faces-in-infant-language-learning-mind-the-gap-between-screen-based-settings-and-real-life-communicative-interactions
#27
REVIEW
Joan Birulés, Louise Goupil, Jérémie Josse, Mathilde Fort
Over the last few decades, developmental (psycho)linguists have demonstrated that perceiving talking faces audio-visually is important for early language acquisition. Using mostly well-controlled, screen-based laboratory approaches, this line of research has shown that paying attention to talking faces is likely to be one of the powerful strategies infants use to learn their native language(s). In this review, we combine evidence from these screen-based studies with another line of research that has studied how infants learn novel words and deploy their visual attention during naturalistic play...
August 5, 2023: Brain Sciences
https://read.qxmd.com/read/37626483/age-related-changes-to-multisensory-integration-and-audiovisual-speech-perception
#28
REVIEW
Jessica L Pepper, Helen E Nuttall
Multisensory integration is essential for the quick and accurate perception of our environment, particularly in everyday tasks like speech perception. Research has highlighted the importance of investigating bottom-up and top-down contributions to multisensory integration and how these change as a function of ageing. Specifically, perceptual factors like the temporal binding window and cognitive factors like attention and inhibition appear to be fundamental in the integration of visual and auditory information-integration that may become less efficient as we age...
July 25, 2023: Brain Sciences
https://read.qxmd.com/read/37611325/effects-of-noise-and-noise-reduction-on-audiovisual-speech-perception-in-cochlear-implant-users-an-erp-study
#29
JOURNAL ARTICLE
Natalie Layer, Khaled H A Abdel-Latif, Jan-Ole Radecke, Verena Müller, Anna Weglage, Ruth Lang-Roth, Martin Walger, Pascale Sandmann
OBJECTIVE: Hearing with a cochlear implant (CI) is difficult in noisy environments, but the use of noise reduction algorithms, specifically ForwardFocus, can improve speech intelligibility. The current event-related potential (ERP) study examined the electrophysiological correlates of this perceptual improvement. METHODS: Ten bimodal CI users performed a syllable-identification task in auditory and audiovisual conditions, with syllables presented from the front and stationary noise presented from the sides...
October 2023: Clinical Neurophysiology: Official Journal of the International Federation of Clinical Neurophysiology
https://read.qxmd.com/read/37604959/mouth-and-facial-informativeness-norms-for-2276-english-words
#30
JOURNAL ARTICLE
Anna Krason, Ye Zhang, Hillarie Man, Gabriella Vigliocco
Mouth and facial movements are part and parcel of face-to-face communication. The primary way of assessing their role in speech perception has been by manipulating their presence (e.g., by blurring the area of a speaker's lips) or by looking at how informative different mouth patterns are for the corresponding phonemes (or visemes; e.g., /b/ is visually more salient than /g/). However, moving beyond informativeness of single phonemes is challenging due to coarticulation and language variations (to name just a few factors)...
August 21, 2023: Behavior Research Methods
https://read.qxmd.com/read/37572645/auditory-cortex-and-beyond-deficits-in-congenital-amusia
#31
REVIEW
Barbara Tillmann, Jackson E Graves, Francesca Talamini, Yohana Lévêque, Lesly Fornoni, Caliani Hoarau, Agathe Pralus, Jérémie Ginzburg, Philippe Albouy, Anne Caclin
Congenital amusia is a neuro-developmental disorder of music perception and production, with the observed deficits contrasting with the sophisticated music processing reported for the general population. Musical deficits within amusia have been hypothesized to arise from altered pitch processing, with impairments in pitch discrimination and, notably, short-term memory. We here review research investigating its behavioral and neural correlates, in particular the impairments at encoding, retention, and recollection of pitch information, as well as how these impairments extend to the processing of pitch cues in speech and emotion...
July 23, 2023: Hearing Research
https://read.qxmd.com/read/37545307/metacognition-in-the-audiovisual-mcgurk-illusion-perceptual-and-causal-confidence
#32
JOURNAL ARTICLE
David Meijer, Uta Noppeney
Almost all decisions in everyday life rely on multiple sensory inputs that can come from common or independent causes. These situations invoke perceptual uncertainty about environmental properties and the signals' causal structure. Using the audiovisual McGurk illusion, this study investigated how observers formed perceptual and causal confidence judgements in information integration tasks under causal uncertainty. Observers were presented with spoken syllables, their corresponding articulatory lip movements or their congruent and McGurk combinations (e...
September 25, 2023: Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences
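The causal uncertainty this abstract refers to is standardly formalized with Bayesian causal inference (e.g., Körding et al., 2007): the observer computes the posterior probability that the auditory and visual signals share a common cause, which can serve as a basis for causal confidence. A minimal Python sketch of that computation, assuming Gaussian likelihoods and a zero-mean Gaussian prior (illustrative parameters, not the paper's own model):

```python
# Hedged sketch: posterior probability of a common cause for an audiovisual
# signal pair, following the standard Bayesian causal-inference model.
# All parameter values are illustrative assumptions.
import numpy as np

def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """P(C=1 | x_a, x_v) with Gaussian noise (sigma_a, sigma_v) and a
    zero-mean Gaussian prior (sigma_p) over the latent source."""
    # Likelihood under one common cause (latent source integrated out).
    var1 = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
            + sigma_v**2 * sigma_p**2)
    like1 = np.exp(-((x_a - x_v)**2 * sigma_p**2 + x_a**2 * sigma_v**2
                     + x_v**2 * sigma_a**2) / (2.0 * var1))
    like1 /= 2.0 * np.pi * np.sqrt(var1)
    # Likelihood under two independent causes.
    var_a, var_v = sigma_a**2 + sigma_p**2, sigma_v**2 + sigma_p**2
    like2 = (np.exp(-x_a**2 / (2.0 * var_a)) / np.sqrt(2.0 * np.pi * var_a)
             * np.exp(-x_v**2 / (2.0 * var_v)) / np.sqrt(2.0 * np.pi * var_v))
    return like1 * p_common / (like1 * p_common + like2 * (1.0 - p_common))

# Congruent signals -> high causal confidence; discrepant signals -> low.
print(posterior_common_cause(0.2, 0.3, 1.0, 1.0, 10.0, 0.5))   # ~0.88
print(posterior_common_cause(-3.0, 3.0, 1.0, 1.0, 10.0, 0.5))  # ~0.001
```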
https://read.qxmd.com/read/37508968/the-effect-of-cued-speech-cs-perception-on-auditory-processing-in-typically-hearing-th-individuals-who-are-either-na%C3%A3-ve-or-experienced-cs-producers
#33
JOURNAL ARTICLE
Cora Jirschik Caron, Coriandre Vilain, Jean-Luc Schwartz, Clémence Bayard, Axelle Calcus, Jacqueline Leybaert, Cécile Colin
Cued Speech (CS) is a communication system that uses manual gestures to facilitate lipreading. In this study, we investigated how CS information interacts with natural speech using Event-Related Potential (ERP) analyses in French-speaking, typically hearing adults (TH) who were either naïve or experienced CS producers. The audiovisual (AV) presentation of lipreading information elicited an amplitude attenuation of the entire N1 and P2 complex in both groups, accompanied by N1 latency facilitation in the group of CS producers...
July 7, 2023: Brain Sciences
https://read.qxmd.com/read/37508944/event-related-potentials-in-assessing-visual-speech-cues-in-the-broader-autism-phenotype-evidence-from-a-phonemic-restoration-paradigm
#34
JOURNAL ARTICLE
Vanessa Harwood, Alisa Baron, Daniel Kleinman, Luca Campanelli, Julia Irwin, Nicole Landi
Audiovisual speech perception includes the simultaneous processing of auditory and visual speech. Deficits in audiovisual speech perception are reported in autistic individuals; however, less is known regarding audiovisual speech perception within the broader autism phenotype (BAP), which includes individuals with elevated, yet subclinical, levels of autistic traits. We investigate the neural indices of audiovisual speech perception in adults exhibiting a range of autism-like traits using event-related potentials (ERPs) in a phonemic restoration paradigm...
June 30, 2023: Brain Sciences
https://read.qxmd.com/read/37442310/multivariate-fmri-responses-in-superior-temporal-cortex-predict-visual-contributions-to-and-individual-differences-in-the-intelligibility-of-noisy-speech
#35
JOURNAL ARTICLE
Yue Zhang, Johannes Rennig, John F Magnotti, Michael S Beauchamp
Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory, and audiovisual formats to thirty-seven participants who rated intelligibility...
July 11, 2023: NeuroImage
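The abstract introduces a multivariate BOLD measure without detailing it here. As a generic stand-in only (an assumption, not the authors' measure), a cross-validated multivariate regression from superior-temporal response patterns to intelligibility ratings could look like this, on simulated data:

```python
# Hedged sketch: generic multivariate pattern-to-intelligibility analysis
# on simulated data; this is NOT the specific measure from the paper.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 150  # hypothetical trials x STC voxels
patterns = rng.normal(size=(n_trials, n_voxels))
weights = rng.normal(size=n_voxels)
# Simulated ratings partly driven by the voxel patterns, plus noise.
intelligibility = patterns @ weights * 0.1 + rng.normal(size=n_trials)

# Out-of-sample prediction of rated intelligibility from patterns.
scores = cross_val_score(Ridge(alpha=1.0), patterns, intelligibility,
                         cv=5, scoring="r2")
print(f"mean cross-validated R^2 = {scores.mean():.2f}")
```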
https://read.qxmd.com/read/37415497/the-effect-of-sound-localization-on-auditory-only-and-audiovisual-speech-recognition-in-a-simulated-multitalker-environment
#36
JOURNAL ARTICLE
Sterling W Sheffield, Harley J Wheeler, Douglas S Brungart, Joshua G W Bernstein
Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at -90°, -36°, 36°, or 90° azimuths...
2023: Trends in Hearing
https://read.qxmd.com/read/37403418/children-with-developmental-dyslexia-have-equivalent-audiovisual-speech-perception-performance-but-their-perceptual-weights-differ
#37
JOURNAL ARTICLE
Liesbeth Gijbels, Adrian K C Lee, Jason D Yeatman
As reading is inherently a multisensory, audiovisual (AV) process where visual symbols (i.e., letters) are connected to speech sounds, the question has been raised whether individuals with reading difficulties, like children with developmental dyslexia (DD), have broader impairments in multisensory processing. This question has been posed before, yet it remains unanswered due to (a) the complexity and contentious etiology of DD along with (b) lack of consensus on developmentally appropriate AV processing tasks...
July 4, 2023: Developmental Science
https://read.qxmd.com/read/37390407/neural-and-behavioral-differences-in-speech-perception-for-children-with-autism-spectrum-disorders-within-an-audiovisual-context
#38
JOURNAL ARTICLE
Julia Irwin, Vanessa Harwood, Daniel Kleinman, Alisa Baron, Trey Avery, Jacqueline Turcios, Nicole Landi
PURPOSE: Reduced use of visible articulatory information on a speaker's face has been implicated as a possible contributor to language deficits in autism spectrum disorders (ASD). We employ an audiovisual (AV) phonemic restoration paradigm to measure behavioral performance (button press) and event-related potentials (ERPs) of visual speech perception in children with ASD and their neurotypical peers to assess potential neural substrates that contribute to group differences. METHOD: Two sets of speech stimuli, /ba/-"/a/" ("/a/" was created from the /ba/ token by reducing the initial consonant) and /ba/-/pa/, were presented within an auditory oddball paradigm to children aged 6-13 years with ASD (n = 17) and typical development (TD; n = 33) within two conditions...
June 30, 2023: Journal of Speech, Language, and Hearing Research: JSLHR
https://read.qxmd.com/read/37377212/visually-biased-perception-in-cochlear-implant-users-a-study-of-the-mcgurk-and-sound-induced-flash-illusions
#39
JOURNAL ARTICLE
Iliza M Butera, Ryan A Stevenson, René H Gifford, Mark T Wallace
The reduced spectral resolution of cochlear implants often requires listeners to rely on complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions...
2023: Trends in Hearing
https://read.qxmd.com/read/37371448/deficient-audiovisual-speech-perception-in-schizophrenia-an-erp-study
#40
JOURNAL ARTICLE
Erfan Ghaneirad, Ellyn Saenger, Gregor R Szycik, Anja Čuš, Laura Möde, Christopher Sinke, Daniel Wiswede, Stefan Bleich, Anna Borgolte
In everyday verbal communication, auditory speech perception is often disturbed by background noise. Especially in disadvantageous hearing conditions, additional visual articulatory information (e.g., lip movements) can contribute positively to speech comprehension. Patients with schizophrenia (SZs) demonstrate an aberrant ability to integrate visual and auditory sensory input during speech perception. Current findings about the underlying neural mechanisms of this deficit are inconsistent. In particular, and despite the importance of early sensory processing in speech perception, very few studies have addressed these processes in SZs...
June 19, 2023: Brain Sciences