Vowel perception

https://www.readbyqxmd.com/read/28631952/the-development-of-visual-speech-perception-in-mandarin-chinese-speaking-children
#1
Liang Chen, Jianghua Lei
The present study aimed to investigate the development of visual speech perception in Mandarin Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in the accuracy of visual speech perception between ages 7 and 13, after which accuracy either plateaued or declined; and (2) a U-shaped developmental pattern in speed of perception, with peak performance in 13-year-olds. Results also showed that across all age groups, overall accuracy rose, whereas response times fell, for simplex finals, complex finals and initials...
February 14, 2017: Clinical Linguistics & Phonetics
https://www.readbyqxmd.com/read/28618807/the-effect-of-presentation-level-and-stimulation-rate-on-speech-perception-and-modulation-detection-for-cochlear-implant-users
#2
Tim Brochier, Hugh J McDermott, Colette M McKay
In order to improve speech understanding for cochlear implant users, it is important to maximize the transmission of temporal information. The combined effects of stimulation rate and presentation level on temporal information transfer and speech understanding remain unclear. The present study systematically varied presentation level (60, 50, and 40 dBA) and stimulation rate [500 and 2400 pulses per second per electrode (pps)] in order to observe how the effect of rate on speech understanding changes for different presentation levels...
June 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28601721/a-universal-bias-in-adult-vowel-perception-by-ear-or-by-eye
#3
Matthew Masapollo, Linda Polka, Lucie Ménard
Speech perceivers are universally biased toward "focal" vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). This bias is demonstrated in phonetic discrimination tasks as a directional asymmetry: a change from a relatively less to a relatively more focal vowel results in significantly better performance than a change in the reverse direction. We investigated whether the critical information for this directional effect is limited to the auditory modality, or whether visible articulatory information provided by the speaker's face also plays a role...
June 8, 2017: Cognition
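The "focal vowel" notion above (adjacent formants close in frequency, concentrating acoustic energy) can be made concrete with a toy score. This is a minimal illustrative sketch, not the authors' measure; the `focalization` helper and the formant values are assumptions:

```python
def focalization(f1, f2, f3, f4):
    """Toy focality score: the smaller the minimum gap (Hz) between
    adjacent formants, the higher the score (the more 'focal' the vowel)."""
    gaps = (f2 - f1, f3 - f2, f4 - f3)
    return -min(gaps)

# Rough textbook-style formant values in Hz (assumed for illustration):
a_vowel = (730, 1090, 2440, 3400)   # /a/: F1 and F2 lie close together
e_vowel = (530, 1840, 2480, 3400)   # /E/: formants more evenly spaced

print(focalization(*a_vowel) > focalization(*e_vowel))  # → True
```

With these values, /a/, whose first two formants nearly coincide, scores as more focal than the more evenly spaced /E/.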
https://www.readbyqxmd.com/read/28599541/prosodic-exaggeration-within-infant-directed-speech-consequences-for-vowel-learnability
#4
Frans Adriaans, Daniel Swingley
Perceptual experiments with infants show that they adapt their perception of speech sounds toward the categories of the native language. How do infants learn these categories? For the most part, acoustic analyses of natural infant-directed speech have suggested that phonetic categories are not presented to learners as separable clusters of sounds in acoustic space. As a step toward explaining how infants begin to solve this problem, the current study proposes that the exaggerated prosody characteristic of infant-directed speech may highlight for infants certain speech-sound tokens that collectively form more readily identifiable categories...
May 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28595176/individual-talker-and-token-covariation-in-the-production-of-multiple-cues-to-stop-voicing
#5
Meghan Clayards
BACKGROUND/AIMS: Previous research found that individual talkers have consistent differences in the production of segments impacting the perception of their speech by others. Speakers also produce multiple acoustic-phonetic cues to phonological contrasts. Less is known about how multiple cues covary within a phonetic category and across talkers. We examined differences in individual talkers across cues and whether token-by-token variability is a result of intrinsic factors or speaking style by examining within-category correlations...
June 9, 2017: Phonetica
https://www.readbyqxmd.com/read/28574442/electrophysiological-indices-of-audiovisual-speech-perception-in-the-broader-autism-phenotype
#6
Julia Irwin, Trey Avery, Jacqueline Turcios, Lawrence Brancazio, Barbara Cook, Nicole Landi
When a speaker talks, the consequences can be both heard (audio) and seen (visual). A novel visual phonemic restoration task was used to assess behavioral discrimination and neural signatures (event-related potentials, or ERPs) of audiovisual processing in typically developing children with a range of social and communicative skills, assessed using the Social Responsiveness Scale, a measure of traits associated with autism. An auditory oddball design presented two types of stimuli to the listener: a clear exemplar of an auditory consonant-vowel syllable /ba/ (the more frequently occurring standard stimulus), and a syllable in which the auditory cues for the consonant were substantially weakened, creating a stimulus more like /a/ (the infrequently presented deviant stimulus)...
June 2, 2017: Brain Sciences
https://www.readbyqxmd.com/read/28554824/the-perception-of-formant-tuning-in-soprano-voices
#7
Rebecca R Vos, Damian T Murphy, David M Howard, Helena Daffern
INTRODUCTION: At the upper end of the soprano range, singers adjust their vocal tract to bring one or more of its resonances (Rn) toward a source harmonic, increasing the amplitude of the sound; this process is known as resonance tuning. This study investigated the perception of R1 and R2 tuning, key strategies observed in classically trained soprano voices, which were expected to be preferred by listeners. Furthermore, different vowels were compared, whereas previous investigations have usually focused on a single vowel...
May 26, 2017: Journal of Voice: Official Journal of the Voice Foundation
https://www.readbyqxmd.com/read/28549538/processing-of-word-stress-related-acoustic-information-a-multi-feature-mmn-study
#8
Ferenc Honbolygó, Orsolya Kolozsvári, Valéria Csépe
In the present study, we investigated the processing of word stress related acoustic features in a word context. In a passive oddball multi-feature MMN experiment, we presented a disyllabic pseudo-word with two acoustically similar syllables as standard stimulus, and five contrasting deviants that differed from the standard in that they were either stressed on the first syllable or contained a vowel change. Stress was realized by an increase of f0, intensity, vowel duration or consonant duration. The vowel change was used to investigate if phonemic and prosodic changes elicit different MMN components...
May 23, 2017: International Journal of Psychophysiology
https://www.readbyqxmd.com/read/28471206/the-complementary-roles-of-auditory-and-motor-information-evaluated-in-a-bayesian-perceptuo-motor-model-of-speech-perception
#9
Raphaël Laurent, Marie-Lou Barnaud, Jean-Luc Schwartz, Pierre Bessière, Julien Diard
There is a consensus concerning the view that both auditory and motor representations intervene in the perceptual processing of speech units. However, the question of the functional role of each of these systems remains seldom addressed and poorly understood. We capitalized on the formal framework of Bayesian Programming to develop COSMO (Communicating Objects using Sensory-Motor Operations), an integrative model that allows principled comparisons of purely motor or purely auditory implementations of a speech perception task and tests the gain of efficiency provided by their Bayesian fusion...
May 4, 2017: Psychological Review
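The Bayesian fusion of auditory and motor routes described for COSMO can be sketched in miniature. This is a toy illustration under assumed likelihood values, not the published model: with a uniform prior, the fused posterior over categories is proportional to the product of the two routes' likelihoods.

```python
import numpy as np

# Assumed likelihoods P(stimulus | category) over three phoneme categories:
p_auditory = np.array([0.6, 0.3, 0.1])   # auditory route
p_motor    = np.array([0.5, 0.2, 0.3])   # motor route

# Naive Bayes fusion under a uniform prior: multiply and renormalize.
fused = p_auditory * p_motor
fused /= fused.sum()
print(fused.round(3))  # → [0.769 0.154 0.077]
```

Note how fusion sharpens the decision: the winning category's posterior (0.77) exceeds what either route assigns on its own.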
https://www.readbyqxmd.com/read/28464681/sensorimotor-adaptation-affects-perceptual-compensation-for-coarticulation
#10
William L Schuerman, Srikantan Nagarajan, James M McQueen, John Houde
A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this compensation depends on actual production experience. This study investigated whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation. Specifically, it tested whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation depends on vocalic context...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464659/modulation-of-auditory-motor-learning-in-response-to-formant-perturbation-as-a-function-of-delayed-auditory-feedback
#11
Takashi Mitsuya, Kevin G Munhall, David W Purcell
The interaction of speech production and perception has been substantiated by empirical studies in which speakers adjust their articulation in response to manipulated versions of their own voice heard in real time as auditory feedback. A recent study by Max and Maffett [(2015). Neurosci. Lett. 591, 25-29] reported an absence of compensation (i.e., auditory-motor learning) for frequency-shifted formants when auditory feedback was delayed by 100 ms. In the present study, the effect of auditory feedback delay was examined by manipulating only the first formant while systematically varying the delay...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28464636/directional-asymmetries-reveal-a-universal-bias-in-adult-vowel-perception
#12
Matthew Masapollo, Linda Polka, Monika Molnar, Lucie Ménard
Research on cross-language vowel perception in both infants and adults has shown that for many vowel contrasts, discrimination is easier when the same pair of vowels is presented in one direction compared to the reverse direction. According to one account, these directional asymmetries reflect a universal bias favoring "focal" vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). An alternative, but not mutually exclusive, account is that such effects reflect an experience-dependent bias favoring prototypical instances of native-language vowel categories...
April 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28443053/perceptual-improvement-of-lexical-tones-in-infants-effects-of-tone-language-experience
#13
Feng-Ming Tsao
To learn words in a tonal language, learners must develop better abilities to perceive not only consonants and vowels but also lexical tones. The divergent trend of enhanced sensitivity to native phonetic contrasts and reduced sensitivity to non-native contrasts is theoretically essential for evaluating the effects of exposure to an ambient language on speech perception development. The loss of sensitivity in discriminating lexical tones among infants learning non-tonal languages is apparent between 6 and 12 months of age, but only a few studies have examined trends in differentiating native lexical tones in infancy...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28441570/neural-indices-of-phonemic-discrimination-and-sentence-level-speech-intelligibility-in-quiet-and-noise-a-p3-study
#14
Tess K Koerner, Yang Zhang, Peggy B Nelson, Boxiang Wang, Hui Zou
This study examined how speech babble noise differentially affected the auditory P3 responses and the associated neural oscillatory activities for consonant and vowel discrimination in relation to segmental- and sentence-level speech perception in noise. The data were collected from 16 normal-hearing participants in a double-oddball paradigm that contained a consonant (/ba/ to /da/) and vowel (/ba/ to /bu/) change in quiet and noise (speech-babble background at a -3 dB signal-to-noise ratio) conditions. Time-frequency analysis was applied to obtain inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) measures in delta, theta, and alpha frequency bands for the P3 response...
April 18, 2017: Hearing Research
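Inter-trial phase coherence (ITPC), used in the study above, measures how consistent spectral phase is across trials: it is the magnitude of the trial-average of unit-length phase vectors (1 = perfectly phase-locked, near 0 = random phase). A minimal sketch on simulated data; all signal parameters here are assumptions for illustration, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, fs, dur = 40, 500, 1.0            # trials, sampling rate (Hz), seconds
t = np.arange(int(fs * dur)) / fs

# Simulated single-channel trials: a 4 Hz component with a consistent phase
# across trials, buried in independent Gaussian noise on every trial.
trials = np.sin(2 * np.pi * 4 * t) + rng.normal(0, 1.0, (n_trials, t.size))

spectra = np.fft.rfft(trials, axis=1)
phases = spectra / np.abs(spectra)          # unit-length phase vectors
itpc = np.abs(phases.mean(axis=0))          # ITPC per frequency bin

freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[itpc.argmax()])                 # peak coherence falls at the 4 Hz bin
```

Because only the 4 Hz component keeps the same phase from trial to trial, its bin approaches an ITPC of 1 while noise bins hover near 1/sqrt(n_trials).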
https://www.readbyqxmd.com/read/28439232/mapping-the-speech-code-cortical-responses-linking-the-perception-and-production-of-vowels
#15
William L Schuerman, Antje S Meyer, James M McQueen
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded...
2017: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/28395548/incorporating-ceiling-effects-during-analysis-of-speech-perception-data-from-a-paediatric-cochlear-implant-cohort
#16
Hanneke Bruijnzeel, Guido Cattani, Inge Stegeman, Vedat Topsakal, Wilko Grolman
OBJECTIVE: To compare speech perception among children who received cochlear implants at different ages. DESIGN: We evaluated speech perception by comparing consonant-vowel-consonant (auditory) (CVC(A)) scores at five-year follow-up in children implanted between 1997 and 2010. The proportion of children in each age-at-implantation group reaching the 95% CI of CVC(A) ceiling scores (>95%) was calculated to identify speech perception differences masked by ceiling effects...
April 10, 2017: International Journal of Audiology
https://www.readbyqxmd.com/read/28382120/outcomes-of-late-implantation-in-usher-syndrome-patients
#17
Ana Cristina H Hoshino, Agustina Echegoyen, Maria Valéria Schmidt Goffi-Gomez, Robinson Koji Tsuji, Ricardo Ferreira Bento
INTRODUCTION: Usher syndrome (US) is an autosomal recessive disorder characterized by hearing loss and progressive visual impairment. Some deaf Usher syndrome patients learn to communicate using sign language. During adolescence, as they start losing vision, they are usually referred for cochlear implantation as a salvage for their new condition. Is late implantation beneficial to these children? OBJECTIVE: To describe the outcomes of US patients who received cochlear implants at a later age...
April 2017: International Archives of Otorhinolaryngology
https://www.readbyqxmd.com/read/28372061/seeing-closing-gesture-of-articulators-affects-speech-perception-of-geminate-consonants
#18
Takayuki Arai, Eri Iwagami, Emi Yanagisawa
This study tests the perception of geminate consonants by native speakers of Japanese using audio and visual information. A previous study showed that formant transitions associated with the closing gesture of the articulators at the end of a preceding vowel are crucial for the perception of geminate stop consonants in Japanese. The present study further focuses on visual cues, testing whether seeing the closing gesture affects the perception of geminate consonants. A perceptual experiment showed that visual information can compensate for a deficiency in the auditory information for geminate consonants, such as formant transitions...
March 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28372055/assessing-the-efficacy-of-hearing-aid-amplification-using-a-phoneme-test
#19
Christoph Scheidiger, Jont B Allen, Torsten Dau
Consonant-vowel (CV) perception experiments provide valuable insights into how humans process speech. Here, two CV identification experiments were conducted in a group of hearing-impaired (HI) listeners, using 14 consonants followed by the vowel /ɑ/. The CVs were presented in quiet and with added speech-shaped noise at signal-to-noise ratios of 0, 6, and 12 dB. The HI listeners were provided with two different amplification schemes for the CVs. In the first experiment, a frequency-independent amplification (flat-gain) was provided and the CVs were presented at the most-comfortable loudness level...
March 2017: Journal of the Acoustical Society of America
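Presenting stimuli with added noise "at signal-to-noise ratios of 0, 6, and 12 dB", as in the study above, amounts to scaling the noise so the speech-to-noise power ratio hits the target before mixing. A hedged sketch; `mix_at_snr` and the stand-in signals are illustrative, not the authors' materials:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 220 * np.arange(8000) / 8000)  # stand-in for a CV token
noise = rng.normal(0, 1, speech.size)                       # stand-in for babble
mixed = mix_at_snr(speech, noise, 0)                        # 0 dB: equal powers
```

At 6 or 12 dB the same helper simply shrinks the noise by a further factor of 10**(6/20) or 10**(12/20) in amplitude.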
https://www.readbyqxmd.com/read/28362674/human-frequency-following-responses-to-vocoded-speech
#20
Saradha Ananthakrishnan, Xin Luo, Ananthanarayan Krishnan
OBJECTIVES: Vocoders offer an effective platform to simulate the effects of cochlear implant speech processing strategies in normal-hearing listeners. Several behavioral studies have examined the effects of varying spectral and temporal cues on vocoded speech perception; however, little is known about the neural indices of vocoded speech perception. Here, the scalp-recorded frequency following response (FFR) was used to study the effects of varying spectral and temporal cues on brainstem neural representation of specific acoustic cues, the temporal envelope periodicity related to fundamental frequency (F0) and temporal fine structure (TFS) related to formant and formant-related frequencies, as reflected in the phase-locked neural activity in response to vocoded speech...
March 30, 2017: Ear and Hearing