Audiovisual speech perception

https://www.readbyqxmd.com/read/30303762/cross-modal-phonetic-encoding-facilitates-the-mcgurk-illusion-and-phonemic-restoration
#1
Noelle M Abbott, Antoine J Shahin
In spoken language, audiovisual (AV) perception occurs when the visual modality influences encoding of acoustic features (e.g., phonetic representations) at the auditory cortex. We examined how visual speech (lip-movements) transforms phonetic representations, indexed by changes to the N1 auditory evoked potential (AEP). EEG was acquired while human subjects watched and listened to videos of a speaker uttering consonant-vowel (CV) syllables, /ba/ and /wa/, presented in auditory-only, AV congruent or incongruent contexts, or in a context in which the consonants were replaced by white noise (noise-replaced)...
October 10, 2018: Journal of Neurophysiology
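The study above indexes visual influences on phonetic encoding by changes in the N1 auditory evoked potential. As a rough illustration of how such a component is conventionally extracted (a generic sketch on simulated data, not the authors' pipeline; the array names, sampling rate, and window boundaries are all assumptions), one can epoch the continuous EEG around stimulus onsets, baseline-correct, average, and take the most negative deflection in an 80-150 ms window:

import numpy as np

def n1_amplitude(eeg, onsets, fs=1000.0):
    """Estimate the N1 auditory evoked potential from one EEG channel (simulated data)."""
    pre, post = int(0.1 * fs), int(0.4 * fs)                  # epoch from -100 ms to +400 ms
    epochs = np.stack([eeg[t - pre:t + post] for t in onsets])
    baseline = epochs[:, :pre].mean(axis=1, keepdims=True)    # mean of the pre-stimulus interval
    erp = (epochs - baseline).mean(axis=0)                    # average evoked potential
    win = slice(pre + int(0.08 * fs), pre + int(0.15 * fs))   # canonical N1 window, 80-150 ms
    n1_idx = win.start + np.argmin(erp[win])
    return erp, (n1_idx - pre) / fs, erp[n1_idx]              # ERP, N1 latency (s), N1 amplitude

# usage with simulated data: 60 s of noise at 1 kHz and 50 random stimulus onsets
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 5.0, 60_000)
onsets = rng.integers(1_000, 59_000, 50)
erp, n1_latency, n1_amp = n1_amplitude(eeg, onsets)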
https://www.readbyqxmd.com/read/30262826/multisensory-perception-reflects-individual-differences-in-processing-temporal-correlations
#2
Aaron R Nidiffer, Adele Diederich, Ramnarayan Ramachandran, Mark T Wallace
Sensory signals originating from a single event, such as audiovisual speech, are temporally correlated. Correlated signals are known to facilitate multisensory integration and binding. We sought to further elucidate the nature of this relationship, hypothesizing that multisensory perception will vary with the strength of audiovisual correlation. Human participants detected near-threshold amplitude modulations in auditory and/or visual stimuli. During audiovisual trials, the frequency and phase of auditory modulations were varied, producing signals with a range of correlations...
September 27, 2018: Scientific Reports
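In this design the stimulus-level correlation between the auditory and visual streams is the manipulated variable: shifting the frequency or phase of the auditory amplitude modulation relative to the visual one yields signal pairs with a range of correlations. A minimal sketch of that idea with hypothetical modulation parameters (2 Hz modulation, 1 s stimuli; none of these values are taken from the paper):

import numpy as np

fs, dur = 1000, 1.0                                   # sampling rate (Hz) and duration (s), assumed
t = np.arange(0, dur, 1 / fs)

def am_envelope(freq, phase):
    """Sinusoidal amplitude-modulation envelope between 0 and 1."""
    return 0.5 * (1 + np.sin(2 * np.pi * freq * t + phase))

visual = am_envelope(freq=2.0, phase=0.0)             # reference visual modulation
for dfreq, dphase in [(0.0, 0.0), (0.0, np.pi / 2), (0.5, 0.0)]:
    auditory = am_envelope(2.0 + dfreq, dphase)       # shift frequency and/or phase
    r = np.corrcoef(auditory, visual)[0, 1]           # Pearson correlation between the envelopes
    print(f"df={dfreq} Hz, dphase={dphase:.2f} rad -> r={r:.2f}")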
https://www.readbyqxmd.com/read/30249803/the-motor-network-reduces-multisensory-illusory-perception
#3
Takenobu Murakami, Mitsunari Abe, Winnugroho Wiratman, Juri Fujiwara, Masahiro Okamoto, Tomomi Mizuochi-Endo, Toshiki Iwabuchi, Michiru Makuuchi, Akira Yamashita, Amanda Tiksnadi, Fang-Yu Chang, Hitoshi Kubo, Nozomu Matsuda, Shunsuke Kobayashi, Satoshi Eifuku, Yoshikazu Ugawa
Observing mouth movements has striking effects on the perception of speech. Any mismatch between sound and mouth movements will result in listeners perceiving illusory consonants (McGurk effect), whereas matching mouth movements assist with the correct recognition of speech sounds. Recent neuroimaging studies have yielded evidence that the motor areas are involved in speech processing, yet their contributions to multisensory illusion remain unclear. Using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in an event-related design, we aimed to identify the functional roles of the motor network in the occurrence of multisensory illusion in female and male brains...
September 24, 2018: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/30131578/single-trial-plasticity-in-evidence-accumulation-underlies-rapid-recalibration-to-asynchronous-audiovisual-speech
#4
David M Simon, Aaron R Nidiffer, Mark T Wallace
Asynchronous arrival of audiovisual information at the peripheral sensory organs is a ubiquitous property of signals in the natural environment due to differences in the propagation time of light and sound. As these cues are constantly changing their distance from the observer, rapid adaptation to asynchronies is crucial for their appropriate integration. We investigated the neural basis of rapid recalibration to asynchronous audiovisual speech in humans using a combination of psychophysics, drift diffusion modeling, and electroencephalography (EEG)...
August 21, 2018: Scientific Reports
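One of the tools named above is drift diffusion modeling, in which a binary perceptual judgment arises from noisy evidence accumulating toward one of two decision boundaries. The toy Euler-step simulation below shows only the general mechanics; the drift, boundary, and starting-point values are illustrative, not parameters fitted in the study:

import numpy as np

def simulate_ddm(drift, boundary=1.0, start=0.0, noise=1.0, dt=0.001, max_t=2.0,
                 n_trials=1000, seed=0):
    """Simulate choices and response times from a simple two-boundary drift diffusion model."""
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x = start
        for step in range(int(max_t / dt)):
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()   # noisy evidence accumulation
            if abs(x) >= boundary:                                 # a decision boundary is reached
                choices.append(1 if x > 0 else 0)
                rts.append((step + 1) * dt)
                break
    return np.array(choices), np.array(rts)

# illustrative use: shifting the starting point toward one boundary biases choices and speeds them up,
# which is one way a recalibration-like change could be expressed in this framework
choices, rts = simulate_ddm(drift=0.8, start=0.2)
print(choices.mean(), rts.mean())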
https://www.readbyqxmd.com/read/30043353/brief-report-differences-in-multisensory-integration-covary-with-sensory-responsiveness-in-children-with-and-without-autism-spectrum-disorder
#5
Jacob I Feldman, Wayne Kuang, Julie G Conrad, Alexander Tu, Pooja Santapuram, David M Simon, Jennifer H Foss-Feig, Leslie D Kwakye, Ryan A Stevenson, Mark T Wallace, Tiffany G Woynaroski
Research shows that children with autism spectrum disorder (ASD) differ in their behavioral patterns of responding to sensory stimuli (i.e., sensory responsiveness) and in various other aspects of sensory functioning relative to typical peers. This study explored relations between measures of sensory responsiveness and multisensory speech perception and integration in children with and without ASD. Participants were 8- to 17-year-old children: 18 with ASD and 18 matched typically developing controls. Participants completed a psychophysical speech perception task, and parents reported on children's sensory responsiveness...
July 24, 2018: Journal of Autism and Developmental Disorders
https://www.readbyqxmd.com/read/29990508/adult-dyslexic-readers-benefit-less-from-visual-input-during-audiovisual-speech-processing-fmri-evidence
#6
Ana A Francisco, Atsuko Takashima, James M McQueen, Mark van den Bunt, Alexandra Jesse, Margriet A Groen
The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words...
August 2018: Neuropsychologia
https://www.readbyqxmd.com/read/29888819/electrocorticography-reveals-continuous-auditory-and-visual-speech-tracking-in-temporal-and-occipital-cortex
#7
Cristiano Micheli, Inga M Schepers, Müge Ozker, Daniel Yoshor, Michael S Beauchamp, Jochem W Rieger
During natural speech perception, humans must parse temporally continuous auditory and visual speech signals into sequences of words. However, most studies of speech perception present only single words or syllables. We used electrocorticography (subdural electrodes implanted on the brains of epileptic patients) to investigate the neural mechanisms for processing continuous audiovisual speech signals consisting of individual sentences. Using partial correlation analysis, we found that posterior superior temporal gyrus (pSTG) and medial occipital cortex tracked both the auditory and the visual speech envelopes...
June 11, 2018: European Journal of Neuroscience
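The envelope-tracking result above rests on partial correlation: correlating the neural signal with one speech envelope after linearly removing the other envelope, so that variance shared between the auditory and visual envelopes is not counted twice. A minimal numpy illustration on simulated signals (the variable names and effect sizes are made up for the example):

import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing z out of both."""
    def residual(a, b):
        design = np.column_stack([b, np.ones_like(b)])       # regressor plus intercept
        beta, *_ = np.linalg.lstsq(design, a, rcond=None)
        return a - design @ beta
    return np.corrcoef(residual(x, z), residual(y, z))[0, 1]

rng = np.random.default_rng(1)
visual_env = rng.normal(size=5000)
auditory_env = 0.6 * visual_env + rng.normal(size=5000)          # the two envelopes are correlated
neural = 0.5 * auditory_env + 0.2 * visual_env + rng.normal(size=5000)

print(partial_corr(neural, auditory_env, visual_env))   # auditory tracking with vision partialled out
print(partial_corr(neural, visual_env, auditory_env))   # visual tracking with audition partialled out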
https://www.readbyqxmd.com/read/29867686/visual-speech-perception-cues-constrain-patterns-of-articulatory-variation-and-sound-change
#8
Jonathan Havenhill, Youngah Do
What are the factors that contribute to (or inhibit) diachronic sound change? While acoustically motivated sound changes are well-documented, research on the articulatory and audiovisual-perceptual aspects of sound change is limited. This paper investigates the interaction of articulatory variation and audiovisual speech perception in the Northern Cities Vowel Shift (NCVS), a pattern of sound change observed in the Great Lakes region of the United States. We focus specifically on the maintenance of the contrast between the vowels /ɑ/ and /ɔ/, both of which are fronted as a result of the NCVS...
2018: Frontiers in Psychology
https://www.readbyqxmd.com/read/29867415/audiovisual-temporal-perception-in-aging-the-role-of-multisensory-integration-and-age-related-sensory-loss
#9
REVIEW
Cassandra J Brooks, Yu Man Chan, Andrew J Anderson, Allison M McKendrick
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision...
2018: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29862161/shifts-in-audiovisual-processing-in-healthy-aging
#10
Sarah H Baum, Ryan Stevenson
Purpose of Review: The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and the newer studies of intra-individual variability during these processes...
September 2017: Current Behavioral Neuroscience Reports
https://www.readbyqxmd.com/read/29780312/combining-behavioral-and-erp-methodologies-to-investigate-the-differences-between-mcgurk-effects-demonstrated-by-cantonese-and-mandarin-speakers
#11
Juan Zhang, Yaxuan Meng, Catherine McBride, Xitao Fan, Zhen Yuan
The present study investigated the impact of Chinese dialects on the McGurk effect using behavioral and event-related potential (ERP) methodologies. Specifically, an intra-language comparison of the McGurk effect was conducted between Mandarin and Cantonese speakers. The behavioral results showed that Cantonese speakers exhibited a stronger McGurk effect in audiovisual speech perception compared to Mandarin speakers, although both groups performed equally in the auditory and visual conditions. ERP results revealed that Cantonese speakers were more sensitive to visual cues than Mandarin speakers, though this was not the case for the auditory cues...
2018: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29758189/audiovisual-perception-in-amblyopia-a-review-and-synthesis
#12
REVIEW
Michael D Richards, Herbert C Goltz, Agnes M F Wong
Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes...
May 17, 2018: Experimental Eye Research
https://www.readbyqxmd.com/read/29751619/language-experience-changes-audiovisual-perception
#13
Viorica Marian, Sayuri Hayakawa, Tuan Q Lam, Scott R Schroeder
Can experience change perception? Here, we examine whether language experience shapes the way individuals process auditory and visual information. We used the McGurk effect: the discovery that when people hear a speech sound (e.g., “ba”) and see a conflicting lip movement (e.g., “ga”), they recognize it as a completely new sound (e.g., “da”). This finding suggests that the brain fuses input across auditory and visual modalities and demonstrates that what we hear is profoundly influenced by what we see...
May 11, 2018: Brain Sciences
https://www.readbyqxmd.com/read/29740294/converging-evidence-from-electrocorticography-and-bold-fmri-for-a-sharp-functional-boundary-in-superior-temporal-gyrus-related-to-multisensory-speech-processing
#14
Muge Ozker, Daniel Yoshor, Michael S Beauchamp
Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker's mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published)...
2018: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29705718/neural-networks-supporting-audiovisual-integration-for-speech-a-large-scale-lesion-study
#15
Gregory Hickok, Corianne Rogalsky, William Matchin, Alexandra Basilakos, Julia Cai, Sara Pillay, Michelle Ferrill, Soren Mickelsen, Steven W Anderson, Tracy Love, Jeffrey Binder, Julius Fridriksson
Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration...
June 2018: Cortex; a Journal Devoted to the Study of the Nervous System and Behavior
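Lesion studies of this kind are often analyzed with voxel-based lesion-symptom mapping: at every voxel, patients with and without damage there are compared on the behavioral measure of interest. The sketch below shows that generic logic on simulated lesion masks and integration scores; it is not the authors' analysis, and the dimensions, minimum group size, and score variable are assumptions:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_patients, n_voxels = 100, 500                       # toy dimensions, not the study's
lesion = rng.random((n_patients, n_voxels)) < 0.2     # boolean lesion mask per patient and voxel
score = rng.normal(size=n_patients)                   # e.g. an AV fusion rate per patient (simulated)

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    damaged, spared = score[lesion[:, v]], score[~lesion[:, v]]
    if damaged.size >= 5 and spared.size >= 5:        # require a minimum group size at this voxel
        t_map[v] = stats.ttest_ind(damaged, spared, equal_var=False).statistic

# strongly negative t-values would flag voxels where damage goes with lower integration scores
print(np.nanmin(t_map), np.nanmax(t_map))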
https://www.readbyqxmd.com/read/29657743/rapid-recalibration-of-speech-perception-after-experiencing-the-mcgurk-illusion
#16
Claudia S Lüttke, Alexis Pérez-Bellido, Floris P de Lange
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada')...
March 2018: Royal Society Open Science
https://www.readbyqxmd.com/read/29604082/causal-inference-and-temporal-predictions-in-audiovisual-perception-of-speech-and-music
#17
REVIEW
Uta Noppeney, Hwee Ling Lee
To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses...
March 31, 2018: Annals of the New York Academy of Sciences
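A common formalization of the integrate-versus-segregate decision discussed in this review is Bayesian causal inference: the observer computes the posterior probability that the auditory and visual signals share a single cause, given (for example) their temporal disparity. The sketch below is a simplified Gaussian-versus-uniform version of that computation; the likelihood shapes and parameter values are assumptions, not a model taken from the paper:

import numpy as np

def p_common(disparity_ms, sigma=100.0, prior_common=0.5, spread_ms=300.0):
    """Posterior probability that the auditory and visual signals share one cause,
    given their temporal disparity (illustrative Gaussian-vs-uniform model)."""
    like_common = np.exp(-disparity_ms**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    like_separate = 1.0 / (2 * spread_ms)             # flat likelihood over +/- spread_ms
    num = like_common * prior_common
    return num / (num + like_separate * (1 - prior_common))

for d in [0.0, 50.0, 150.0, 300.0]:
    print(f"{d:.0f} ms -> p(common) = {p_common(d):.2f}")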
https://www.readbyqxmd.com/read/29537657/multisensory-integration-of-speech-sounds-with-letters-vs-visual-speech-only-visual-speech-induces-the-mismatch-negativity
#18
Jeroen J Stekelenburg, Mirjam Keetels, Jean Vroomen
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech...
May 2018: European Journal of Neuroscience
https://www.readbyqxmd.com/read/29536418/effects-of-stimulus-response-compatibility-on-covert-imitation-of-vowels
#19
Patti Adank, Helen Nuttall, Harold Bekkering, Gwijde Maegherman
When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns. When listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus-response compatibility paradigm (SRC). The SRC paradigm measures covert imitation by comparing articulation of a prompt following exposure to a distracter...
July 2018: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/29485404/frontal-cortex-selects-representations-of-the-talker-s-mouth-to-aid-in-speech-perception
#20
Muge Ozker, Daniel Yoshor, Michael S Beauchamp
Human faces contain multiple sources of information. During speech perception, visual information from the talker's mouth is integrated with auditory information from the talker's voice. By directly recording neural responses from small populations of neurons in patients implanted with subdural electrodes, we found enhanced visual cortex responses to speech when auditory speech was absent (rendering visual speech especially relevant). Receptive field mapping demonstrated that this enhancement was specific to regions of the visual cortex with retinotopic representations of the mouth of the talker...
February 27, 2018: ELife
