Read by QxMD

Vowel perception

Patti Adank, Helen Nuttall, Harold Bekkering, Gwijde Maegherman
When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns; when listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus-response compatibility (SRC) paradigm. The SRC paradigm measures covert imitation by comparing articulation of a prompt following exposure to a distracter...
March 13, 2018: Attention, Perception & Psychophysics
Wei Tang, Xi-Jian Wang, Jia-Qi Li, Chang Liu, Qi Dong, Yun Nan
Music and language are two intricately linked communication modalities in humans. A deficit in music pitch processing, as manifested in the condition of congenital amusia, has been related to difficulties in lexical tone processing for both tonal and non-tonal languages. However, it is still unclear whether amusia also affects the perception of vowel phonemes in quiet and in noise. In this study, we examined vowel-plus-tone identification in quiet and noise conditions among Mandarin-speaking amusics with and without speech tone difficulties (tone agnosics and pure amusics, respectively), and IQ- and age-matched controls...
March 6, 2018: Hearing Research
Julie G Arenberg, Wendy S Parkinson, Leonid Litvak, Chen Chen, Heather A Kreft, Andrew J Oxenham
OBJECTIVES: The standard, monopolar (MP) electrode configuration used in commercially available cochlear implants (CIs) creates a broad electrical field, which can lead to unwanted channel interactions. Use of more focused configurations, such as tripolar and phased array, has led to mixed results for improving speech understanding. The purpose of the present study was to assess the efficacy of a physiologically inspired configuration called dynamic focusing, using focused tripolar stimulation at low levels and less focused stimulation at high levels...
March 9, 2018: Ear and Hearing
Matthew Masapollo, Linda Polka, Lucie Ménard, Lauren Franklin, Mark Tiede, James Morgan
Masapollo, Polka, and Ménard (2017) recently reported a robust directional asymmetry in unimodal visual vowel perception: Adult perceivers discriminate a change from an English /u/ viseme to a French /u/ viseme significantly better than a change in the reverse direction. This asymmetry replicates a frequent pattern found in unimodal auditory vowel perception that points to a universal bias favoring more extreme vocalic articulations, which lead to acoustic signals with increased formant convergence. In the present article, the authors report 5 experiments designed to investigate whether this asymmetry in the visual realm reflects a speech-specific or general processing bias...
March 8, 2018: Journal of Experimental Psychology. Human Perception and Performance
Shunsuke Tamura, Kazuhito Ito, Nobuyuki Hirose, Shuji Mori
Purpose: The purpose of this study was to investigate the psychophysical boundary used for categorization of voiced-voiceless stop consonants in native Japanese speakers. Method: Twelve native Japanese speakers participated in the experiment. The stimuli were synthetic stop consonant-vowel stimuli varying in voice onset time (VOT) with manipulation of the amplitude of the initial noise portion and the first formant (F1) frequency of the periodic portion. There were 3 tasks, namely, speech identification to either /d/ or /t/, detection of the noise portion, and simultaneity judgment of onsets of the noise and periodic portions...
March 5, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
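Identification along a VOT continuum like the one above is typically summarized by the category boundary, the point where responses cross 50%. A minimal sketch of one common way to locate it, using linear interpolation on hypothetical identification data (not the study's stimuli or analysis):

```python
def category_boundary(vot_ms, prop_t):
    """Return the VOT (ms) where /t/ identification crosses 50%,
    by linear interpolation between adjacent continuum steps."""
    points = list(zip(vot_ms, prop_t))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= 0.5 <= y1:  # crossover lies between these two steps
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    return None  # identification never reached 50%

# Hypothetical proportions of /t/ responses at seven VOT steps
vot = [0, 10, 20, 30, 40, 50, 60]
prop_t = [0.02, 0.05, 0.20, 0.55, 0.85, 0.97, 0.99]
print(round(category_boundary(vot, prop_t), 1))  # → 28.6
```

Fitting a logistic psychometric function would be the more usual choice with noisy data; interpolation keeps the idea visible in a few lines.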
Christian E Stilp, Ashley A Assgari
Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias perception of later sounds. For example, when context sounds have more energy in low-F1 frequency regions, listeners report more high-F1 responses to a target vowel, and vice versa. SCEs have been reported using various approaches for a wide range of stimuli, but most often, large spectral peaks were added to the context to bias speech categorization...
February 28, 2018: Attention, Perception & Psychophysics
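The "spectral peak added to the context" manipulation described above can be illustrated with a simple frequency-domain boost. This is a generic sketch only: the noise carrier, band edges, and gain are arbitrary choices, not the stimuli used in the study.

```python
import numpy as np

def add_spectral_peak(signal, fs, lo_hz, hi_hz, gain_db):
    """Boost energy in [lo_hz, hi_hz] (a crude brick-edge filter) to
    construct a context sound biased toward one spectral region."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    spec[band] *= 10 ** (gain_db / 20.0)  # dB gain -> amplitude factor
    return np.fft.irfft(spec, n=len(signal))

# 1 s of noise with a +20 dB peak in a (hypothetical) low-F1 region
rng = np.random.default_rng(1)
fs = 16000
context = rng.standard_normal(fs)
biased = add_spectral_peak(context, fs, 100, 400, 20.0)
```

A listener adapted to this low-F1-biased context would, on the SCE account, report more high-F1 responses to a following target vowel.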
Paula B García, Karen Froud
Research on American-English (AE) vowel perception by Spanish-English bilinguals has focused on the vowels /i/-/ɪ/ (e.g., in sheep/ship). Other AE vowel contrasts may present perceptual challenges for this population, especially those requiring both spectral and durational discrimination. We used Event-Related Potentials (ERPs), MMN (Mismatch Negativity) and P300, to index discrimination of AE vowels /ɑ/-/ʌ/ by sequential adult Spanish-English bilingual listeners compared to AE monolinguals. Listening tasks were non-attended and attended, and vowels were presented with natural and neutralized durations...
January 2018: Bilingualism: Language and Cognition
Ja Young Choi, Elly R Hu, Tyler K Perrachione
The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping...
February 7, 2018: Attention, Perception & Psychophysics
Emmanuel Ponsot, Pablo Arias, Jean-Julien Aucouturier
Which spectral cues underlie the perceptual processing of smiles in speech? Here, the question was addressed using reverse correlation in the case of the isolated vowel [a]. Listeners were presented with hundreds of pairs of utterances with randomly manipulated spectral characteristics and were asked to indicate, in each pair, which was the most smiling. The analyses revealed that they relied on robust spectral representations that specifically encoded the vowel's formants. These findings demonstrate the causal role played by formants in the perception of smiles...
January 2018: Journal of the Acoustical Society of America
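The reverse-correlation logic above can be sketched in a few lines: on each trial the listener picks one of two stimuli carrying random spectral perturbations, and the perceptual "kernel" is estimated as the mean perturbation of chosen minus unchosen stimuli. This is a generic illustration, not the authors' analysis code; the bin count, trial count, and simulated observer are arbitrary.

```python
import numpy as np

def reverse_correlation_kernel(chosen, unchosen):
    """Estimate a perceptual kernel from 2AFC reverse-correlation data.

    chosen, unchosen: arrays of shape (n_trials, n_bins) holding the
    random spectral gain perturbations of the stimulus the listener
    selected and the one they rejected on each trial. Bins with positive
    weight pushed choices toward "more smiling" (or whatever the task asks).
    """
    chosen = np.asarray(chosen, dtype=float)
    unchosen = np.asarray(unchosen, dtype=float)
    return (chosen - unchosen).mean(axis=0)

# Simulated observer whose choices are driven by a boost at one bin
rng = np.random.default_rng(0)
true_kernel = np.zeros(16)
true_kernel[5] = 1.0                     # hypothetical formant-region cue
a = rng.normal(size=(500, 16))
b = rng.normal(size=(500, 16))
pick_a = (a @ true_kernel) > (b @ true_kernel)
chosen = np.where(pick_a[:, None], a, b)
unchosen = np.where(pick_a[:, None], b, a)
kernel = reverse_correlation_kernel(chosen, unchosen)
print(kernel.argmax())  # → 5: the informative bin is recovered
```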
Jamie L Desjardins, Francisco Fernandez
Purpose: Bilingual individuals have been shown to be more proficient on visual tasks of inhibition compared with their monolingual counterparts. However, the bilingual advantage has not been evidenced in all studies, and very little is known regarding how bilingualism influences inhibitory control in the perception of auditory information. The purpose of the current study was to examine inhibition of irrelevant information using auditory and visual tasks in English monolingual and Spanish-English bilingual adults...
January 30, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
Xiaochen Zhang, Xiaolin Li, Jingjing Chen, Qin Gong
Since sound perception takes place against a background with a certain amount of noise, both speech and non-speech processing involve extraction of target signals and suppression of background noise. Previous work on early processing of speech phonemes largely neglected how background noise is encoded and suppressed. This study aimed to fill this gap. We adopted an oddball paradigm where speech (vowels) or non-speech stimuli (complex tones) were presented with or without a background of amplitude-modulated noise and analyzed cortical responses related to foreground stimulus processing, including mismatch negativity (MMN), N2b, and P300, as well as neural representations of the background noise, i...
January 11, 2018: Neuroscience
Judith Schmitz, Eleonora Bartoli, Laura Maffongelli, Luciano Fadiga, Nuria Sebastian-Galles, Alessandro D'Ausilio
Listening to speech has been shown to activate motor regions, as measured by corticobulbar excitability. In this experiment, we explored whether motor regions are also recruited during listening to non-native speech, for which we lack both sensory and motor experience. By administering Transcranial Magnetic Stimulation (TMS) over the left motor cortex, we recorded corticobulbar excitability of the lip muscles while Italian participants listened to native-like and non-native German vowels. Results showed that lip corticobulbar excitability increased for a combination of lip use during articulation and non-nativeness of the vowels...
January 6, 2018: Neuropsychologia
Ali Abavisani, Jont B Allen
The goal of this study is to provide a metric for evaluating a given hearing-aid insertion gain using a consonant-recognition-based measure. The basic question addressed is how treatment impacts phone recognition at the token level, relative to a flat insertion gain, at the most comfortable level (MCL). These tests are directed at fine-tuning a treatment, with the ultimate goal of improving speech perception, and at identifying when a hearing-level gain-based treatment degrades phone recognition. Eight subjects with hearing loss were tested under two conditions: flat gain and a treatment insertion gain based on the subject's hearing level...
December 2017: Journal of the Acoustical Society of America
Olga Dmitrieva
The present study seeks to answer the question of whether consonant duration is perceived differently across consonants of different manners of articulation and in different contextual environments and whether such differences may be related to the typology of geminates. The results of the cross-linguistic identification experiment suggest higher perceptual acuity in labeling short and long consonants in sonorants than in obstruents. Duration categories were also more consistently and clearly labeled in the intervocalic than in the preconsonantal environment, in the word-initial than in the word-final position, and after stressed vowels than between unstressed vowels...
March 1, 2017: Language and Speech
Irina A Shport
Purpose: The goal of this study was to test whether fronting and lengthening of lax vowels influence the perception of femininity in listeners whose dialect is characterized as already having relatively fronted and long lax vowels in male and female speech. Method: Sixteen English words containing the /ɪ ɛ ʊ ɑ/ vowels were produced by a male speaker with 2 degrees of vowel fronting. Then, the vowel duration was manipulated in 3 steps. Thirty-nine listeners from the Southern United States judged how feminine each word sounded to them on an interval scale...
December 27, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
Luodi Yu, Yang Zhang
A current topic in auditory neurophysiology is how brainstem sensory coding contributes to higher-level perceptual, linguistic and cognitive skills. This cross-language study was designed to compare frequency following responses (FFRs) for lexical tones in tonal (Mandarin Chinese) and non-tonal (English) language users and test the correlational strength between FFRs and behavior as a function of language experience. The behavioral measures were obtained in the Garner paradigm to assess how lexical tones might interfere with vowel category and duration judgement...
December 12, 2017: Neuropsychologia
Deborah Moncrieff, Lauren Dubyne
Purpose: This study investigated the influence of voice onset time (VOT) on the perception of consonant-vowel (CV) signals during a dichotic listening (DL) task. Method: Sixty-two young adults with normal hearing were tested with the English language version of the Hugdahl Dichotic CV (DCV) Test. They were asked to identify 1 CV syllable during 3 DL conditions: free recall (report the syllable heard most clearly), forced right (report the syllable in the right ear), and forced left (report the syllable in the left ear)...
December 12, 2017: American Journal of Audiology
Alessandra Cecilia Rampinini, Giacomo Handjaras, Andrea Leo, Luca Cecchetti, Emiliano Ricciardi, Giovanna Marotta, Pietro Pietrini
Classical models of language localize speech perception in the left superior temporal cortex and speech production in the inferior frontal cortex. Nonetheless, neuropsychological, structural and functional studies have questioned such subdivision, suggesting an interwoven organization of the speech function within these cortices. We tested whether sub-regions within frontal and temporal speech-related areas retain specific phonological representations during both perception and production. Using functional magnetic resonance imaging and multivoxel pattern analysis, we showed functional and spatial segregation across the left fronto-temporal cortex during listening, imagery and production of vowels...
December 5, 2017: Scientific Reports
Y I Sumita, M Hattori, M Murase, M E Elbashti, H Taniguchi
Among the functional disabilities that patients face following maxillectomy, speech impairment is a major factor influencing quality of life. Proper rehabilitation of speech, which may include prosthodontic and surgical treatments and speech therapy, requires accurate evaluation of speech intelligibility (SI). A simple, less time-consuming yet accurate evaluation is desirable both for maxillectomy patients and the various clinicians providing maxillofacial treatment. This study sought to determine the utility of digital acoustic analysis of vowels for the prediction of SI in maxillectomy patients, based on a comprehensive understanding of speech production in the vocal tract of maxillectomy patients and its perception...
March 2018: Journal of Oral Rehabilitation
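Digital acoustic analyses of vowels, as in the study above, often build on formant-based metrics; one common example is the F1-F2 vowel space area. The sketch below is generic (hypothetical formant values, and not necessarily the measure used in the study): it computes the area of the polygon spanned by corner vowels via the shoelace formula.

```python
def vowel_space_area(formants):
    """Area (Hz^2) of the polygon spanned by (F1, F2) points,
    via the shoelace formula; vertices given in order around the polygon."""
    pts = list(formants)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

# Hypothetical F1/F2 values (Hz) for the corner vowels /i/, /a/, /u/
corners = [(300, 2300), (750, 1200), (350, 800)]
print(vowel_space_area(corners))  # → 310000.0
```

A reduced vowel space area is one plausible acoustic correlate of lower speech intelligibility, which is why such metrics are candidates for prediction tasks like the one described.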
Johannes Zaar, Nicola Schmitt, Ralph-Peter Derleth, Mishaela DiNino, Julie G Arenberg, Torsten Dau
This study investigated the influence of hearing-aid (HA) and cochlear-implant (CI) processing on consonant perception in normal-hearing (NH) listeners. Measured data were compared to predictions obtained with a speech perception model [Zaar and Dau (2017). J. Acoust. Soc. Am. 141, 1051-1064] that combines an auditory processing front end with a correlation-based template-matching back end. In terms of HA processing, effects of strong nonlinear frequency compression and impulse-noise suppression were measured in 10 NH listeners using consonant-vowel stimuli...
November 2017: Journal of the Acoustical Society of America