Audiovisual speech perception

https://www.readbyqxmd.com/read/30043353/brief-report-differences-in-multisensory-integration-covary-with-sensory-responsiveness-in-children-with-and-without-autism-spectrum-disorder
#1
Jacob I Feldman, Wayne Kuang, Julie G Conrad, Alexander Tu, Pooja Santapuram, David M Simon, Jennifer H Foss-Feig, Leslie D Kwakye, Ryan A Stevenson, Mark T Wallace, Tiffany G Woynaroski
Research shows that children with autism spectrum disorder (ASD) differ in their behavioral patterns of responding to sensory stimuli (i.e., sensory responsiveness) and in various other aspects of sensory functioning relative to typical peers. This study explored relations between measures of sensory responsiveness and multisensory speech perception and integration in children with and without ASD. Participants were 8- to 17-year-old children: 18 with ASD and 18 matched typically developing controls. Participants completed a psychophysical speech perception task, and parents reported on children's sensory responsiveness...
July 24, 2018: Journal of Autism and Developmental Disorders
https://www.readbyqxmd.com/read/29990508/adult-dyslexic-readers-benefit-less-from-visual-input-during-audiovisual-speech-processing-fmri-evidence
#2
Ana A Francisco, Atsuko Takashima, James M McQueen, Mark van den Bunt, Alexandra Jesse, Margriet A Groen
The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words...
July 7, 2018: Neuropsychologia
https://www.readbyqxmd.com/read/29888819/electrocorticography-reveals-continuous-auditory-and-visual-speech-tracking-in-temporal-and-occipital-cortex
#3
C Micheli, I M Schepers, M Ozker, D Yoshor, M S Beauchamp, J W Rieger
During natural speech perception, humans must parse temporally continuous auditory and visual speech signals into sequences of words. However, most studies of speech perception present only single words or syllables. We used electrocorticography (subdural electrodes implanted on the brains of epileptic patients) to investigate the neural mechanisms for processing continuous audiovisual speech signals consisting of individual sentences. Using partial correlation analysis, we found that posterior superior temporal gyrus (pSTG) and medial occipital cortex tracked both the auditory and visual speech envelopes...
June 11, 2018: European Journal of Neuroscience
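The partial correlation analysis mentioned in the entry above is what lets the authors separate auditory from visual envelope tracking even though the two envelopes are themselves correlated during natural speech. As a minimal sketch of the idea (simulated stand-in signals, not the study's data or analysis pipeline; all variable names are hypothetical):

    import numpy as np

    def partial_corr(x, y, z):
        """Correlation of x and y after regressing out z from both."""
        Z = np.column_stack([np.ones_like(z), z])          # design matrix with intercept
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x given z
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y given z
        return np.corrcoef(rx, ry)[0, 1]

    rng = np.random.default_rng(0)
    n = 10_000
    visual = rng.standard_normal(n)                        # stand-in lip-aperture envelope
    auditory = 0.6 * visual + rng.standard_normal(n)       # acoustic envelope, correlated with visual
    neural = 0.5 * auditory + 0.3 * visual + rng.standard_normal(n)  # simulated response envelope

    print(np.corrcoef(neural, auditory)[0, 1])             # simple correlation, inflated by shared visual variance
    print(partial_corr(neural, auditory, visual))          # unique auditory contribution

A region that merely tracked lip movements would show a near-zero partial correlation with the auditory envelope once the visual envelope is regressed out.
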
https://www.readbyqxmd.com/read/29867686/visual-speech-perception-cues-constrain-patterns-of-articulatory-variation-and-sound-change
#4
Jonathan Havenhill, Youngah Do
What are the factors that contribute to (or inhibit) diachronic sound change? While acoustically motivated sound changes are well-documented, research on the articulatory and audiovisual-perceptual aspects of sound change is limited. This paper investigates the interaction of articulatory variation and audiovisual speech perception in the Northern Cities Vowel Shift (NCVS), a pattern of sound change observed in the Great Lakes region of the United States. We focus specifically on the maintenance of the contrast between the vowels /ɑ/ and /ɔ/, both of which are fronted as a result of the NCVS...
2018: Frontiers in Psychology
https://www.readbyqxmd.com/read/29867415/audiovisual-temporal-perception-in-aging-the-role-of-multisensory-integration-and-age-related-sensory-loss
#5
REVIEW
Cassandra J Brooks, Yu Man Chan, Andrew J Anderson, Allison M McKendrick
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision...
2018: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29862161/shifts-in-audiovisual-processing-in-healthy-aging
#6
Sarah H Baum, Ryan Stevenson
Purpose of Review: The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and the newer studies of intra-individual variability during these processes...
September 2017: Current Behavioral Neuroscience Reports
https://www.readbyqxmd.com/read/29780312/combining-behavioral-and-erp-methodologies-to-investigate-the-differences-between-mcgurk-effects-demonstrated-by-cantonese-and-mandarin-speakers
#7
Juan Zhang, Yaxuan Meng, Catherine McBride, Xitao Fan, Zhen Yuan
The present study investigated the impact of Chinese dialects on the McGurk effect using behavioral and event-related potential (ERP) methodologies. Specifically, an intra-language comparison of the McGurk effect was conducted between Mandarin and Cantonese speakers. The behavioral results showed that Cantonese speakers exhibited a stronger McGurk effect in audiovisual speech perception compared to Mandarin speakers, although both groups performed equally well in the auditory and visual conditions. ERP results revealed that Cantonese speakers were more sensitive to visual cues than Mandarin speakers, though this was not the case for the auditory cues...
2018: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29758189/audiovisual-perception-in-amblyopia-a-review-and-synthesis
#8
REVIEW
Michael D Richards, Herbert C Goltz, Agnes M F Wong
Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes...
May 17, 2018: Experimental Eye Research
https://www.readbyqxmd.com/read/29751619/language-experience-changes-audiovisual-perception
#9
Viorica Marian, Sayuri Hayakawa, Tuan Q Lam, Scott R Schroeder
Can experience change perception? Here, we examine whether language experience shapes the way individuals process auditory and visual information. We used the McGurk effect: the discovery that when people hear a speech sound (e.g., "ba") and see a conflicting lip movement (e.g., "ga"), they recognize it as a completely new sound (e.g., "da"). This finding suggests that the brain fuses input across auditory and visual modalities and demonstrates that what we hear is profoundly influenced by what we see...
May 11, 2018: Brain Sciences
https://www.readbyqxmd.com/read/29740294/converging-evidence-from-electrocorticography-and-bold-fmri-for-a-sharp-functional-boundary-in-superior-temporal-gyrus-related-to-multisensory-speech-processing
#10
Muge Ozker, Daniel Yoshor, Michael S Beauchamp
Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker's mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published)...
2018: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29705718/neural-networks-supporting-audiovisual-integration-for-speech-a-large-scale-lesion-study
#11
Gregory Hickok, Corianne Rogalsky, William Matchin, Alexandra Basilakos, Julia Cai, Sara Pillay, Michelle Ferrill, Soren Mickelsen, Steven W Anderson, Tracy Love, Jeffrey Binder, Julius Fridriksson
Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration...
June 2018: Cortex; a Journal Devoted to the Study of the Nervous System and Behavior
https://www.readbyqxmd.com/read/29657743/rapid-recalibration-of-speech-perception-after-experiencing-the-mcgurk-illusion
#12
Claudia S Lüttke, Alexis Pérez-Bellido, Floris P de Lange
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the accompanying visual speech, and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada')...
March 2018: Royal Society Open Science
https://www.readbyqxmd.com/read/29604082/causal-inference-and-temporal-predictions-in-audiovisual-perception-of-speech-and-music
#13
REVIEW
Uta Noppeney, Hwee Ling Lee
To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses...
March 31, 2018: Annals of the New York Academy of Sciences
https://www.readbyqxmd.com/read/29537657/multisensory-integration-of-speech-sounds-with-letters-vs-visual-speech-only-visual-speech-induces-the-mismatch-negativity
#14
Jeroen J Stekelenburg, Mirjam Keetels, Jean Vroomen
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech...
May 2018: European Journal of Neuroscience
https://www.readbyqxmd.com/read/29536418/effects-of-stimulus-response-compatibility-on-covert-imitation-of-vowels
#15
Patti Adank, Helen Nuttall, Harold Bekkering, Gwijde Maegherman
When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns. When listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus-response compatibility paradigm (SRC). The SRC paradigm measures covert imitation by comparing articulation of a prompt following exposure to a distracter...
July 2018: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/29485404/frontal-cortex-selects-representations-of-the-talker-s-mouth-to-aid-in-speech-perception
#16
Muge Ozker, Daniel Yoshor, Michael S Beauchamp
Human faces contain multiple sources of information. During speech perception, visual information from the talker's mouth is integrated with auditory information from the talker's voice. By directly recording neural responses from small populations of neurons in patients implanted with subdural electrodes, we found enhanced visual cortex responses to speech when auditory speech was absent (rendering visual speech especially relevant). Receptive field mapping demonstrated that this enhancement was specific to regions of the visual cortex with retinotopic representations of the mouth of the talker...
February 27, 2018: ELife
https://www.readbyqxmd.com/read/29383400/individual-differences-and-the-effect-of-face-configuration-information-in-the-mcgurk-effect
#17
Yuta Ujiie, Tomohisa Asai, Akio Wakabayashi
The McGurk effect, which denotes the influence of visual information on audiovisual speech perception, is less frequently observed in individuals with autism spectrum disorder (ASD) compared to those without it; the reason for this remains unclear. Several studies have suggested that facial configuration context might play a role in this difference. More specifically, people with ASD show a local processing bias for faces; that is, they process global face information to a lesser extent. This study examined the role of facial configuration context in the McGurk effect in 46 healthy students...
April 2018: Experimental Brain Research. Experimentelle Hirnforschung. Expérimentation Cérébrale
https://www.readbyqxmd.com/read/29356006/the-functional-and-structural-asymmetries-of-the-superior-temporal-sulcus
#18
Karsten Specht, Philip Wigglesworth
The superior temporal sulcus (STS) is an anatomical structure that increasingly interests researchers. This structure appears to receive multisensory input and is involved in several perceptual and cognitive core functions, such as speech perception, audiovisual integration, (biological) motion processing and theory of mind capacities. In addition, the superior temporal sulcus is not only one of the longest sulci of the brain, but it also shows marked functional and structural asymmetries, some of which have only been found in humans...
February 2018: Scandinavian Journal of Psychology
https://www.readbyqxmd.com/read/29250857/theta-oscillations-reflect-conflict-processing-in-the-perception-of-the-mcgurk-illusion
#19
Luis Morís Fernández, Mireia Torralba, Salvador Soto-Faraco
The McGurk illusion is one of the most famous illustrations of cross-modal integration in human perception. It has been often used as a proxy of audiovisual (AV) integration and to infer the properties of the integration process in natural (congruent) AV conditions. Nonetheless, a blatant difference between McGurk stimuli and natural, congruent, AV speech is the conflict between the auditory and the visual information in the former. Here, we hypothesized that McGurk stimuli (and any AV incongruency) engage brain responses similar to those found in more general cases of perceptual conflict (e...
December 18, 2017: European Journal of Neuroscience
https://www.readbyqxmd.com/read/29163099/a-computational-analysis-of-neural-mechanisms-underlying-the-maturation-of-multisensory-speech-integration-in-neurotypical-children-and-those-on-the-autism-spectrum
#20
Cristiano Cuppini, Mauro Ursino, Elisa Magosso, Lars A Ross, John J Foxe, Sophie Molholm
Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD...
2017: Frontiers in Human Neuroscience
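The entry above does not give the model equations, but the general flavor of such neurocomputational MSI models can be conveyed with a toy multisensory unit. The sketch below is entirely illustrative (the sigmoid parameters and connection weights are arbitrary assumptions, not the Cuppini et al. model); it reproduces the classic inverse-effectiveness signature, where multisensory enhancement is largest for weak inputs:

    import numpy as np

    def sigmoid(x, slope=4.0, threshold=1.0):
        """Static nonlinearity of the unit (parameters chosen for illustration)."""
        return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

    def response(aud, vis, w_a=1.0, w_v=1.0):
        """Output of one multisensory unit to weighted auditory + visual drive."""
        return sigmoid(w_a * aud + w_v * vis)

    for intensity in (0.3, 0.6, 1.2):
        uni_sum = response(intensity, 0.0) + response(0.0, intensity)  # sum of unisensory responses
        multi = response(intensity, intensity)                         # audiovisual response
        print(f"input={intensity:.1f}  A+V={uni_sum:.3f}  AV={multi:.3f}  "
              f"ratio={multi / uni_sum:.2f}")

Because the nonlinearity saturates, combined input helps most when each unisensory input alone sits below threshold; the ratio column shrinks as input intensity grows, which is the inverse-effectiveness principle.
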

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators, exclusions, and wildcards

(heart or cardiac or cardio*) AND arrest -"American Heart Association"
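
To make the semantics of these tips concrete, here is a small sketch of how a matcher might evaluate them against a document's text. This is a simplified illustration only, not QxMD's actual query engine: parentheses are not handled, and an OR groups the term immediately before it with the terms after it.

    import re

    def matches(query: str, text: str) -> bool:
        text = text.lower()
        words = set(re.findall(r"\w+", text))
        # Tokenize into quoted phrases (optionally negated) and bare terms.
        tokens = re.findall(r'-?"[^"]+"|\S+', query.lower())
        required, excluded, alternatives = [], [], []
        mode = "and"
        for tok in tokens:
            if tok == "and":
                mode = "and"
                continue
            if tok == "or":
                if required:                      # 'a OR b': a joins the OR group
                    alternatives.append(required.pop())
                mode = "or"
                continue
            negated = tok.startswith("-")
            term = tok.lstrip("-").strip('"')
            if negated:
                excluded.append(term)
            elif mode == "or":
                alternatives.append(term)
            else:
                required.append(term)

        def hit(term):
            if " " in term:                       # quoted phrase: substring match
                return term in text
            if term.endswith("*"):                # wildcard: prefix match on any word
                return any(w.startswith(term[:-1]) for w in words)
            return term in words

        return (all(hit(t) for t in required)
                and (not alternatives or any(hit(t) for t in alternatives))
                and not any(hit(t) for t in excluded))

    print(matches('diabetic AND foot', "Diabetic foot ulcers respond to care"))  # True
    print(matches('diabetes OR diabetic', "type 2 diabetes outcomes"))           # True
    print(matches('neuro*', "advances in neurology and neuroscience"))           # True
    print(matches('virchow -triad', "Virchow triad in venous thrombosis"))       # False

A real engine would add an operator-precedence parser to support the parenthesized example above; this sketch covers only the flat cases.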