Read by QxMD

Vowel perception

#1. Effect of extreme adaptive frequency compression in bimodal listeners on sound localization and speech perception
https://www.readbyqxmd.com/read/28726592/effect-of-extreme-adaptive-frequency-compression-in-bimodal-listeners-on-sound-localization-and-speech-perception
Lidwien C E Veugen, Josef Chalupper, Lucas H M Mens, Ad F M Snik, A John van Opstal
OBJECTIVES: This study aimed to improve access to high-frequency interaural level differences (ILDs) by applying extreme frequency compression (FC) in the hearing aid (HA) of 13 bimodal listeners, who used a cochlear implant (CI) and a conventional HA in opposite ears. DESIGN: An experimental signal-adaptive frequency-lowering algorithm was tested, compressing frequencies above 160 Hz into the individual audible range of residual hearing, but only for consonants (adaptive FC), thus protecting vowel formants with the aim of preserving speech perception...
July 20, 2017: Cochlear Implants International
#2. When speaker identity is unavoidable: neural processing of speaker identity cues in natural speech
https://www.readbyqxmd.com/read/28715718/when-speaker-identity-is-unavoidable-neural-processing-of-speaker-identity-cues-in-natural-speech
Alba Tuninetti, Kateřina Chládková, Varghese Peter, Niels O Schiller, Paola Escudero
Speech sound acoustic properties vary widely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language...
July 14, 2017: Brain and Language
#3. Foreign languages sound fast: evidence from implicit rate normalization
https://www.readbyqxmd.com/read/28701977/foreign-languages-sound-fast-evidence-from-implicit-rate-normalization
Hans Rutger Bosker, Eva Reinisch
Anecdotal evidence suggests that unfamiliar languages sound faster than one's native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages (FLs) have effects on implicit speech processing. Our measure of implicit rate perception was "normalization for speech rate": an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence...
2017: Frontiers in Psychology
#4. Comparison of different hearing aid prescriptions for children
https://www.readbyqxmd.com/read/28691934/comparison-of-different-hearing-aid-prescriptions-for-children
Josephine E Marriage, Deborah A Vickers, Thomas Baer, Brian R Glasberg, Brian C J Moore
OBJECTIVES: To assess whether there are significant differences between speech scores for different hearing aid prescription methods, specifically DSL i/o, DSL V, and NAL-NL1, using age-appropriate closed- and open-set speech tests with young children, designed to avoid floor and ceiling effects. DESIGN: Participants were 44 children with moderate or severe bilateral hearing loss, 8 aged 2 to 3 years, 15 aged 4 to 5 years, and 21 aged 6 to 9 years. Children wore bilateral hearing aids fitted with each prescription method in turn in a balanced double-blind design...
July 6, 2017: Ear and Hearing
#5. Second language perception of Mandarin vowels and tones
https://www.readbyqxmd.com/read/28687065/second-language-perception-of-mandarin-vowels-and-tones
Yen-Chen Hao
This study examines the discrimination of Mandarin vowels and tones by native English speakers with varying amounts of Mandarin experience, aiming to investigate the relative difficulty of these two types of sounds for English speakers at different learning stages, and the source of their difficulty. Seventeen advanced learners of Mandarin (Ex group), eighteen beginning learners (InEx group), and eighteen English speakers naïve to Mandarin (Naïve group) participated in an AXB discrimination task. The stimuli were two Mandarin vowel contrasts, /li-ly/ and /lu-ly/, and two tonal contrasts, T1-T4 and T2-T3...
July 1, 2017: Language and Speech
#6. An investigation of the systematic use of spectral information in the determination of apparent talker height
https://www.readbyqxmd.com/read/28679275/an-investigation-of-the-systematic-use-of-spectral-information-in-the-determination-of-apparent-talker-height
Santiago Barreda
The perception of apparent-talker height is mostly determined by the fundamental frequency (f0) and spectral characteristics of a voice. Although it is traditionally thought that spectral cues affect apparent-talker height by influencing apparent vocal-tract length, a recent experiment [Barreda (2016). J. Phon. 55, 1-18] suggests that apparent-talker height can vary significantly within-talker on the basis of phonemically-determined spectral variability. In this experiment, listeners were asked to estimate the height of 10 female talkers based on manipulated natural productions of bVd words containing one of /i æ ɑ u ɝ/...
June 2017: Journal of the Acoustical Society of America
#7. Compensations to auditory feedback perturbations in congenitally blind and sighted speakers: acoustic and articulatory data
https://www.readbyqxmd.com/read/28678819/compensations-to-auditory-feedback-perturbations-in-congenitally-blind-and-sighted-speakers-acoustic-and-articulatory-data
Pamela Trudeau-Fisette, Mark Tiede, Lucie Ménard
This study investigated the effects of visual deprivation on the relationship between speech perception and production by examining compensatory responses to real-time perturbations in auditory feedback. Specifically, acoustic and articulatory data were recorded while sighted and congenitally blind French speakers produced several repetitions of the vowel /ø/. At the acoustic level, blind speakers produced larger compensatory responses to altered vowels than their sighted peers. At the articulatory level, blind speakers also produced larger displacements of the upper lip, the tongue tip, and the tongue dorsum in compensatory responses...
2017: PloS One
#8. Effects of stimulus duration and vowel quality in cross-linguistic categorical perception of pitch directions
https://www.readbyqxmd.com/read/28671991/effects-of-stimulus-duration-and-vowel-quality-in-cross-linguistic-categorical-perception-of-pitch-directions
Si Chen, Yiqing Zhu, Ratree Wayland
We investigated categorical perception of rising and falling pitch contours by tonal and non-tonal listeners. Specifically, we determined the minimum durations needed to perceive both contours and compared them to those of production, examined how stimulus duration affects perception, tested whether there is an intrinsic F0 effect, and assessed how first-language background, duration, pitch direction, and vowel quality interact. Continua of fundamental frequency on different vowels with nine duration values were created for identification and discrimination tasks...
2017: PloS One
#9. Neuromagnetic correlates of voice pitch, vowel type, and speaker size in auditory cortex
https://www.readbyqxmd.com/read/28669914/neuromagnetic-correlates-of-voice-pitch-vowel-type-and-speaker-size-in-auditory-cortex
Martin Andermann, Roy D Patterson, Carolin Vogt, Lisa Winterstetter, André Rupp
Vowel recognition is largely immune to differences in speaker size despite the waveform differences associated with variation in speaker size. This has led to the suggestion that voice pitch and mean formant frequency (MFF) are extracted early in the hierarchy of hearing/speech processing and used to normalize the internal representation of vowel sounds. This paper presents a magnetoencephalographic (MEG) experiment designed to locate and compare neuromagnetic activity associated with voice pitch, MFF and vowel type in human auditory cortex...
June 29, 2017: NeuroImage
#10. Visual cues contribute differentially to audiovisual perception of consonants and vowels in improving recognition and reducing cognitive demands in listeners with hearing impairment using hearing aids
https://www.readbyqxmd.com/read/28651255/visual-cues-contribute-differentially-to-audiovisual-perception-of-consonants-and-vowels-in-improving-recognition-and-reducing-cognitive-demands-in-listeners-with-hearing-impairment-using-hearing-aids
Shahram Moradi, Björn Lidestam, Henrik Danielsson, Elaine Hoi Ning Ng, Jerker Rönnberg
Purpose: We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels (in terms of isolation points [the shortest time required for correct identification of a speech stimulus], accuracy, and cognitive demands) in listeners with hearing impairment using hearing aids. Method: The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss...
June 23, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
#11. The development of visual speech perception in Mandarin Chinese-speaking children
https://www.readbyqxmd.com/read/28631952/the-development-of-visual-speech-perception-in-mandarin-chinese-speaking-children
Liang Chen, Jianghua Lei
The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13 after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception with peak performance in 13-year olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials...
February 14, 2017: Clinical Linguistics & Phonetics
#12. The effect of presentation level and stimulation rate on speech perception and modulation detection for cochlear implant users
https://www.readbyqxmd.com/read/28618807/the-effect-of-presentation-level-and-stimulation-rate-on-speech-perception-and-modulation-detection-for-cochlear-implant-users
Tim Brochier, Hugh J McDermott, Colette M McKay
In order to improve speech understanding for cochlear implant users, it is important to maximize the transmission of temporal information. The combined effects of stimulation rate and presentation level on temporal information transfer and speech understanding remain unclear. The present study systematically varied presentation level (60, 50, and 40 dBA) and stimulation rate [500 and 2400 pulses per second per electrode (pps)] in order to observe how the effect of rate on speech understanding changes for different presentation levels...
June 2017: Journal of the Acoustical Society of America
#13. A universal bias in adult vowel perception: by ear or by eye
https://www.readbyqxmd.com/read/28601721/a-universal-bias-in-adult-vowel-perception-by-ear-or-by-eye
Matthew Masapollo, Linda Polka, Lucie Ménard
Speech perceivers are universally biased toward "focal" vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). This bias is demonstrated in phonetic discrimination tasks as a directional asymmetry: a change from a relatively less to a relatively more focal vowel results in significantly better performance than a change in the reverse direction. We investigated whether the critical information for this directional effect is limited to the auditory modality, or whether visible articulatory information provided by the speaker's face also plays a role...
September 2017: Cognition
#14. Prosodic exaggeration within infant-directed speech: consequences for vowel learnability
https://www.readbyqxmd.com/read/28599541/prosodic-exaggeration-within-infant-directed-speech-consequences-for-vowel-learnability
Frans Adriaans, Daniel Swingley
Perceptual experiments with infants show that they adapt their perception of speech sounds toward the categories of the native language. How do infants learn these categories? For the most part, acoustic analyses of natural infant-directed speech have suggested that phonetic categories are not presented to learners as separable clusters of sounds in acoustic space. As a step toward explaining how infants begin to solve this problem, the current study proposes that the exaggerated prosody characteristic of infant-directed speech may highlight for infants certain speech-sound tokens that collectively form more readily identifiable categories...
May 2017: Journal of the Acoustical Society of America
#15. Individual talker and token covariation in the production of multiple cues to stop voicing
https://www.readbyqxmd.com/read/28595176/individual-talker-and-token-covariation-in-the-production-of-multiple-cues-to-stop-voicing
Meghan Clayards
BACKGROUND/AIMS: Previous research found that individual talkers have consistent differences in the production of segments impacting the perception of their speech by others. Speakers also produce multiple acoustic-phonetic cues to phonological contrasts. Less is known about how multiple cues covary within a phonetic category and across talkers. We examined differences in individual talkers across cues and whether token-by-token variability is a result of intrinsic factors or speaking style by examining within-category correlations...
June 9, 2017: Phonetica
#16. Electrophysiological indices of audiovisual speech perception in the broader autism phenotype
https://www.readbyqxmd.com/read/28574442/electrophysiological-indices-of-audiovisual-speech-perception-in-the-broader-autism-phenotype
Julia Irwin, Trey Avery, Jacqueline Turcios, Lawrence Brancazio, Barbara Cook, Nicole Landi
When a speaker talks, the consequences can be both heard (audio) and seen (visual). A novel visual phonemic restoration task was used to assess behavioral discrimination and neural signatures (event-related potentials, or ERP) of audiovisual processing in typically developing children with a range of social and communicative skills assessed using the social responsiveness scale, a measure of traits associated with autism. An auditory oddball design presented two types of stimuli to the listener: a clear exemplar of an auditory consonant-vowel syllable /ba/ (the more frequently occurring standard stimulus), and a syllable in which the auditory cues for the consonant were substantially weakened, creating a stimulus which is more like /a/ (the infrequently presented deviant stimulus)...
June 2, 2017: Brain Sciences
#17. The perception of formant tuning in soprano voices
https://www.readbyqxmd.com/read/28554824/the-perception-of-formant-tuning-in-soprano-voices
Rebecca R Vos, Damian T Murphy, David M Howard, Helena Daffern
INTRODUCTION: At the upper end of the soprano range, singers adjust their vocal tract to bring one or more of its resonances (Rn) toward a source harmonic, increasing the amplitude of the sound; this process is known as resonance tuning. This study investigated the perception of R1 and R2 tuning, key strategies observed in classically trained soprano voices, which were expected to be preferred by listeners. Furthermore, different vowels were compared, whereas previous investigations have usually focused on a single vowel...
May 26, 2017: Journal of Voice: Official Journal of the Voice Foundation
#18. Processing of word stress related acoustic information: a multi-feature MMN study
https://www.readbyqxmd.com/read/28549538/processing-of-word-stress-related-acoustic-information-a-multi-feature-mmn-study
Ferenc Honbolygó, Orsolya Kolozsvári, Valéria Csépe
In the present study, we investigated the processing of word stress related acoustic features in a word context. In a passive oddball multi-feature MMN experiment, we presented a disyllabic pseudo-word with two acoustically similar syllables as standard stimulus, and five contrasting deviants that differed from the standard in that they were either stressed on the first syllable or contained a vowel change. Stress was realized by an increase of f0, intensity, vowel duration or consonant duration. The vowel change was used to investigate if phonemic and prosodic changes elicit different MMN components...
May 23, 2017: International Journal of Psychophysiology
#19. The complementary roles of auditory and motor information evaluated in a Bayesian perceptuo-motor model of speech perception
https://www.readbyqxmd.com/read/28471206/the-complementary-roles-of-auditory-and-motor-information-evaluated-in-a-bayesian-perceptuo-motor-model-of-speech-perception
Raphaël Laurent, Marie-Lou Barnaud, Jean-Luc Schwartz, Pierre Bessière, Julien Diard
There is a consensus concerning the view that both auditory and motor representations intervene in the perceptual processing of speech units. However, the question of the functional role of each of these systems remains seldom addressed and poorly understood. We capitalized on the formal framework of Bayesian Programming to develop COSMO (Communicating Objects using Sensory-Motor Operations), an integrative model that allows principled comparisons of purely motor or purely auditory implementations of a speech perception task and tests the gain of efficiency provided by their Bayesian fusion...
May 4, 2017: Psychological Review
#20. Sensorimotor adaptation affects perceptual compensation for coarticulation
https://www.readbyqxmd.com/read/28464681/sensorimotor-adaptation-affects-perceptual-compensation-for-coarticulation
William L Schuerman, Srikantan Nagarajan, James M McQueen, John Houde
A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this effect depends on actual production experience. This study investigated whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation. Specifically, it tested whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation depends on vocalic context...
April 2017: Journal of the Acoustical Society of America

Search Tips

- Use Boolean operators AND/OR: diabetic AND foot; diabetes OR diabetic
- Exclude a word using the minus sign: Virchow -triad
- Use parentheses to group terms: water AND (cup OR glass)
- Add an asterisk (*) at the end of a word to include word stems: Neuro* matches Neurology, Neuroscientist, Neurological, and so on
- Use quotes to search for an exact phrase: "primary prevention of cancer"
- Operators can be combined: (heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
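
As an illustration of how these operators compose, the following Python sketch evaluates a query of this form against a piece of text. It is a toy matcher written for this page, not QxMD's actual search implementation; the `matches` function, its grammar, and its whole-word/prefix semantics are assumptions for demonstration.

```python
import re

def tokenize(query):
    """Split a query into (possibly negated) phrases, parentheses, and words."""
    return re.findall(r'-?"[^"]*"|\(|\)|-?[\w*]+', query)

class Parser:
    """Recursive-descent parser: OR binds loosest; AND (or juxtaposition) tighter."""
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def advance(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def parse_or(self):
        node = self.parse_and()
        while self.peek() == 'OR':
            self.advance()
            node = ('or', node, self.parse_and())
        return node

    def parse_and(self):
        node = self.parse_atom()
        while True:
            tok = self.peek()
            if tok == 'AND':
                self.advance()
                node = ('and', node, self.parse_atom())
            elif tok not in (None, 'OR', ')'):
                # Adjacent terms (e.g. "Virchow -triad") imply AND.
                node = ('and', node, self.parse_atom())
            else:
                return node

    def parse_atom(self):
        tok = self.advance()
        if tok == '(':
            node = self.parse_or()
            self.advance()  # consume ')'
            return node
        if tok.startswith('-'):
            return ('not', ('term', tok[1:]))
        return ('term', tok)

def term_matches(term, text):
    if term.startswith('"'):            # exact phrase, case-insensitive
        return term.strip('"').lower() in text.lower()
    if term.endswith('*'):              # asterisk: match word stems
        return re.search(r'\b' + re.escape(term[:-1]), text, re.I) is not None
    return re.search(r'\b' + re.escape(term) + r'\b', text, re.I) is not None

def evaluate(node, text):
    op = node[0]
    if op == 'term':
        return term_matches(node[1], text)
    if op == 'not':
        return not evaluate(node[1], text)
    if op == 'and':
        return evaluate(node[1], text) and evaluate(node[2], text)
    return evaluate(node[1], text) or evaluate(node[2], text)

def matches(query, text):
    """True if the text satisfies the Boolean query."""
    return evaluate(Parser(tokenize(query)).parse_or(), text)
```

For example, `matches('water AND (cup OR glass)', 'She filled a glass with water')` is true, while the same query fails against text containing neither cup nor glass; a quoted phrase preceded by a minus sign, as in the last tip above, excludes documents containing that exact phrase.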