Vowel perception

https://www.readbyqxmd.com/read/28803269/the-callosal-relay-model-of-interhemispheric-communication-new-evidence-from-effective-connectivity-analysis
#1
Saskia Steinmann, Jan Meier, Guido Nolte, Andreas K Engel, Gregor Leicht, Christoph Mulert
Interhemispheric auditory connectivity via the corpus callosum has been demonstrated to be important for normal speech processing. The callosal relay model posits directed information flow from the right to the left auditory cortex, but this has not yet been demonstrated directly. To test this, 33 healthy participants were investigated with 64-channel EEG while performing the dichotic listening task, in which two different consonant-vowel syllables were presented simultaneously to the left (LE) and right ear (RE)...
August 12, 2017: Brain Topography
https://www.readbyqxmd.com/read/28799983/discrimination-of-voice-pitch-and-vocal-tract-length-in-cochlear-implant-users
#2
Etienne Gaudrain, Deniz Başkent
OBJECTIVES: When listening to two competing speakers, normal-hearing (NH) listeners can take advantage of voice differences between the speakers. Users of cochlear implants (CIs) have difficulty with such speech-on-speech perception. Previous literature has indicated sensitivity to voice pitch (related to fundamental frequency, F0) to be poor among implant users, while sensitivity to vocal-tract length (VTL; related to the height of the speaker and formant frequencies), the other principal voice characteristic, has not been directly investigated in CIs...
August 9, 2017: Ear and Hearing
https://www.readbyqxmd.com/read/28769836/familiarity-and-voice-representation-from-acoustic-based-representation-to-voice-averages
#3
Maureen Fontaine, Scott A Love, Marianne Latinus
The ability to recognize an individual from their voice is widespread and has a long evolutionary history. Yet the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar, Experiment 1; famous voices, Experiment 2) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28764439/comparing-malleability-of-phonetic-category-between-i-and-u
#4
Reiko Kataoka, Hahn Koo
This study reports a differential category-retuning effect between [i] and [u]. Two groups of American listeners were exposed to ambiguous vowels ([i/u]) within words that index the phoneme /i/ (e.g., athl[i/u]t) (i-group) or /u/ (e.g., aftern[i/u]n) (u-group). Before and after the exposure, these listeners categorized sounds from a [bip]-[bup] continuum. The i-group significantly increased /bip/ responses after exposure, but the u-group did not change their responses significantly. These results suggest that the way mental representation handles phonetic variation may influence the malleability of each category, highlighting the complex relationship among the distribution of sounds, their mental representation, and speech perception...
July 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28726592/effect-of-extreme-adaptive-frequency-compression-in-bimodal-listeners-on-sound-localization-and-speech-perception
#5
Lidwien C E Veugen, Josef Chalupper, Lucas H M Mens, Ad F M Snik, A John van Opstal
OBJECTIVES: This study aimed to improve access to high-frequency interaural level differences (ILDs) by applying extreme frequency compression (FC) in the hearing aid (HA) of 13 bimodal listeners using a cochlear implant (CI) and a conventional HA in opposite ears. DESIGN: An experimental signal-adaptive frequency-lowering algorithm was tested, compressing frequencies above 160 Hz into the individual audible range of residual hearing, but only for consonants (adaptive FC), thus protecting vowel formants, with the aim of preserving speech perception...
July 20, 2017: Cochlear Implants International
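For readers unfamiliar with frequency lowering, the sketch below shows a generic linear frequency-compression mapping of the kind such hearing-aid algorithms build on. The cutoff and compression ratio are arbitrary illustrative values; this is not the adaptive algorithm or the parameters used in the study above.

```python
def compress_frequency(f_hz: float, cutoff_hz: float = 1500.0, ratio: float = 3.0) -> float:
    """Generic linear frequency-compression mapping (illustrative only).

    Frequencies at or below the cutoff pass through unchanged; frequencies
    above the cutoff are squeezed toward it by the given ratio, so that
    high-frequency cues land in a narrower, potentially audible range.
    """
    if f_hz <= cutoff_hz:
        return f_hz
    return cutoff_hz + (f_hz - cutoff_hz) / ratio


if __name__ == "__main__":
    for f in (500, 1500, 3000, 6000):
        print(f"{f:5d} Hz -> {compress_frequency(f):7.1f} Hz")
```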
https://www.readbyqxmd.com/read/28715718/when-speaker-identity-is-unavoidable-neural-processing-of-speaker-identity-cues-in-natural-speech
#6
Alba Tuninetti, Kateřina Chládková, Varghese Peter, Niels O Schiller, Paola Escudero
The acoustic properties of speech sounds vary greatly across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, which facilitates the recognition of linguistic information, operates at the level of neural processing, and whether it is modulated by the listeners' native language...
July 14, 2017: Brain and Language
https://www.readbyqxmd.com/read/28701977/foreign-languages-sound-fast-evidence-from-implicit-rate-normalization
#7
Hans Rutger Bosker, Eva Reinisch
Anecdotal evidence suggests that unfamiliar languages sound faster than one's native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages (FLs) have effects on implicit speech processing. Our measure of implicit rate perception was "normalization for speech rate": an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence...
2017: Frontiers in Psychology
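The rate-normalization effect described in the entry above can be caricatured with a toy decision rule: the same physical vowel duration is judged long (/a:/) after a fast carrier and short (/a/) after a slow one, because the category boundary scales with the carrier's syllable rate. The boundary factor and durations below are invented for illustration and are not the authors' stimuli or model.

```python
def categorize_vowel(vowel_ms: float, carrier_syllables_per_s: float,
                     boundary_factor: float = 0.9) -> str:
    """Toy rate-normalization rule (illustrative, not the authors' model).

    The /a/-/a:/ duration boundary is taken to be proportional to the mean
    syllable duration of the carrier sentence: a fast carrier (short
    syllables) lowers the boundary, so the same vowel token is more often
    heard as long /a:/.
    """
    mean_syllable_ms = 1000.0 / carrier_syllables_per_s
    boundary_ms = boundary_factor * mean_syllable_ms
    return "/a:/" if vowel_ms >= boundary_ms else "/a/"


if __name__ == "__main__":
    ambiguous = 140  # ms, same token in both contexts (invented value)
    print("fast carrier:", categorize_vowel(ambiguous, carrier_syllables_per_s=7))  # -> /a:/
    print("slow carrier:", categorize_vowel(ambiguous, carrier_syllables_per_s=4))  # -> /a/
```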
https://www.readbyqxmd.com/read/28691934/comparison-of-different-hearing-aid-prescriptions-for-children
#8
Josephine E Marriage, Deborah A Vickers, Thomas Baer, Brian R Glasberg, Brian C J Moore
OBJECTIVES: To assess whether there are significant differences between speech scores for different hearing aid prescription methods, specifically DSL i/o, DSL V, and NAL-NL1, using age-appropriate closed- and open-set speech tests with young children, designed to avoid floor and ceiling effects. DESIGN: Participants were 44 children with moderate or severe bilateral hearing loss, 8 aged 2 to 3 years, 15 aged 4 to 5 years, and 21 aged 6 to 9 years. Children wore bilateral hearing aids fitted with each prescription method in turn in a balanced double-blind design...
July 6, 2017: Ear and Hearing
https://www.readbyqxmd.com/read/28687065/second-language-perception-of-mandarin-vowels-and-tones
#9
Yen-Chen Hao
This study examines the discrimination of Mandarin vowels and tones by native English speakers with varying amounts of Mandarin experience, aiming to investigate the relative difficulty of these two types of sounds for English speakers at different learning stages, and the source of their difficulty. Seventeen advanced learners of Mandarin (Ex group), eighteen beginning learners (InEx group), and eighteen English speakers naïve to Mandarin (Naïve group) participated in an AXB discrimination task. The stimuli were two Mandarin vowel contrasts, /li-ly/ and /lu-ly/, and two tonal contrasts, T1-T4 and T2-T3...
July 1, 2017: Language and Speech
https://www.readbyqxmd.com/read/28679275/an-investigation-of-the-systematic-use-of-spectral-information-in-the-determination-of-apparent-talker-height
#10
Santiago Barreda
The perception of apparent-talker height is mostly determined by the fundamental frequency (f0) and spectral characteristics of a voice. Although it is traditionally thought that spectral cues affect apparent-talker height by influencing apparent vocal-tract length, a recent experiment [Barreda (2016). J. Phon. 55, 1-18] suggests that apparent-talker height can vary significantly within-talker on the basis of phonemically-determined spectral variability. In this experiment, listeners were asked to estimate the height of 10 female talkers based on manipulated natural productions of bVd words containing one of /i æ ɑ u ɝ/...
June 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28678819/compensations-to-auditory-feedback-perturbations-in-congenitally-blind-and-sighted-speakers-acoustic-and-articulatory-data
#11
Pamela Trudeau-Fisette, Mark Tiede, Lucie Ménard
This study investigated the effects of visual deprivation on the relationship between speech perception and production by examining compensatory responses to real-time perturbations in auditory feedback. Specifically, acoustic and articulatory data were recorded while sighted and congenitally blind French speakers produced several repetitions of the vowel /ø/. At the acoustic level, blind speakers produced larger compensatory responses to altered vowels than their sighted peers. At the articulatory level, blind speakers also produced larger displacements of the upper lip, the tongue tip, and the tongue dorsum in compensatory responses...
2017: PloS One
https://www.readbyqxmd.com/read/28671991/effects-of-stimulus-duration-and-vowel-quality-in-cross-linguistic-categorical-perception-of-pitch-directions
#12
Si Chen, Yiqing Zhu, Ratree Wayland
We investigated categorical perception of rising and falling pitch contours by tonal and non-tonal listeners. Specifically, we determined the minimum durations needed to perceive both contours and compared them to those of production, examined how stimulus duration affects perception, tested whether there is an intrinsic F0 effect, and assessed how first-language background, duration, pitch direction, and vowel quality interact with each other. Continua of fundamental frequency on different vowels with 9 duration values were created for identification and discrimination tasks...
2017: PloS One
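As a rough illustration of how rising and falling pitch continua of the kind described above can be parameterized, the sketch below generates linear F0 trajectories from a fixed onset to several offsets, crossed with several durations. All numeric values are placeholders, not the stimuli of the study.

```python
import numpy as np


def f0_continuum(onset_hz: float, offset_hz: float, duration_ms: float, step_ms: float = 10.0):
    """One linear F0 trajectory sampled every step_ms over duration_ms.

    Offsets above the onset give rising contours, offsets below give falling
    contours. Values here are placeholders, not the study's stimuli.
    """
    n = int(duration_ms / step_ms) + 1
    times_ms = np.linspace(0.0, duration_ms, n)
    f0_hz = np.linspace(onset_hz, offset_hz, n)
    return times_ms, f0_hz


if __name__ == "__main__":
    # A small rising/falling continuum crossed with several durations (illustrative).
    for duration in (100, 200, 300):              # ms
        for offset in (180, 200, 220):            # Hz, onset fixed at 200 Hz
            _, f0 = f0_continuum(200.0, offset, duration)
            shape = "rising" if offset > 200 else ("falling" if offset < 200 else "level")
            print(f"{duration:3d} ms, offset {offset} Hz -> {shape}, {len(f0)} samples")
```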
https://www.readbyqxmd.com/read/28669914/neuromagnetic-correlates-of-voice-pitch-vowel-type-and-speaker-size-in-auditory-cortex
#13
Martin Andermann, Roy D Patterson, Carolin Vogt, Lisa Winterstetter, André Rupp
Vowel recognition is largely immune to differences in speaker size despite the waveform differences associated with variation in speaker size. This has led to the suggestion that voice pitch and mean formant frequency (MFF) are extracted early in the hierarchy of hearing/speech processing and used to normalize the internal representation of vowel sounds. This paper presents a magnetoencephalographic (MEG) experiment designed to locate and compare neuromagnetic activity associated with voice pitch, MFF and vowel type in human auditory cortex...
June 29, 2017: NeuroImage
https://www.readbyqxmd.com/read/28651255/visual-cues-contribute-differentially-to-audiovisual-perception-of-consonants-and-vowels-in-improving-recognition-and-reducing-cognitive-demands-in-listeners-with-hearing-impairment-using-hearing-aids
#14
Shahram Moradi, Björn Lidestam, Henrik Danielsson, Elaine Hoi Ning Ng, Jerker Rönnberg
Purpose: We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. Method: The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss...
June 23, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
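The isolation point defined in parentheses above can be operationalized as the earliest gate duration from which a listener's responses become correct and stay correct. The sketch below computes that quantity for one invented sequence of gated responses; the data are illustrative, not from the study.

```python
def isolation_point(gate_ms, correct):
    """Earliest gate duration (ms) from which responses are correct and remain correct.

    gate_ms: increasing gate durations; correct: matching booleans.
    Returns None if identification never stabilizes on the correct answer.
    """
    ip = None
    for t, ok in zip(gate_ms, correct):
        if ok and ip is None:
            ip = t          # tentative isolation point
        elif not ok:
            ip = None       # a later error resets the tentative point
    return ip


if __name__ == "__main__":
    gates = [100, 150, 200, 250, 300, 350]                 # ms, illustrative
    responses = [False, False, True, False, True, True]    # invented responses
    print("isolation point:", isolation_point(gates, responses), "ms")  # -> 300 ms
```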
https://www.readbyqxmd.com/read/28631952/the-development-of-visual-speech-perception-in-mandarin-chinese-speaking-children
#15
Liang Chen, Jianghua Lei
The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13, and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in the accuracy of visual speech perception between ages 7 and 13, after which accuracy either plateaued or declined; and (2) a U-shaped developmental pattern in speed of perception, with peak performance in 13-year-olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell, for simplex finals, complex finals, and initials...
February 14, 2017: Clinical Linguistics & Phonetics
https://www.readbyqxmd.com/read/28618807/the-effect-of-presentation-level-and-stimulation-rate-on-speech-perception-and-modulation-detection-for-cochlear-implant-users
#16
Tim Brochier, Hugh J McDermott, Colette M McKay
In order to improve speech understanding for cochlear implant users, it is important to maximize the transmission of temporal information. The combined effects of stimulation rate and presentation level on temporal information transfer and speech understanding remain unclear. The present study systematically varied presentation level (60, 50, and 40 dBA) and stimulation rate [500 and 2400 pulses per second per electrode (pps)] in order to observe how the effect of rate on speech understanding changes for different presentation levels...
June 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28601721/a-universal-bias-in-adult-vowel-perception-by-ear-or-by-eye
#17
Matthew Masapollo, Linda Polka, Lucie Ménard
Speech perceivers are universally biased toward "focal" vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). This bias is demonstrated in phonetic discrimination tasks as a directional asymmetry: a change from a relatively less to a relatively more focal vowel results in significantly better performance than a change in the reverse direction. We investigated whether the critical information for this directional effect is limited to the auditory modality, or whether visible articulatory information provided by the speaker's face also plays a role...
September 2017: Cognition
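A "focal" vowel, as used in the entry above, is one whose adjacent formants lie close together in frequency. The toy function below quantifies that idea as the smallest spacing between adjacent formants; the formant values in the example are rough textbook-style approximations chosen for illustration, not measurements from the study.

```python
def min_adjacent_formant_gap(formants_hz):
    """Smallest spacing (Hz) between adjacent formants; smaller = more 'focal'."""
    sorted_f = sorted(formants_hz)
    return min(b - a for a, b in zip(sorted_f, sorted_f[1:]))


if __name__ == "__main__":
    # Rough, illustrative formant values (F1-F3, in Hz) for two vowels.
    vowels = {
        "/u/": [310, 870, 2250],    # F1 and F2 close together -> relatively focal
        "/ə/": [500, 1500, 2500],   # evenly spread formants -> less focal
    }
    for label, formants in vowels.items():
        print(label, "min adjacent formant gap:", min_adjacent_formant_gap(formants), "Hz")
```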
https://www.readbyqxmd.com/read/28599541/prosodic-exaggeration-within-infant-directed-speech-consequences-for-vowel-learnability
#18
Frans Adriaans, Daniel Swingley
Perceptual experiments with infants show that they adapt their perception of speech sounds toward the categories of the native language. How do infants learn these categories? For the most part, acoustic analyses of natural infant-directed speech have suggested that phonetic categories are not presented to learners as separable clusters of sounds in acoustic space. As a step toward explaining how infants begin to solve this problem, the current study proposes that the exaggerated prosody characteristic of infant-directed speech may highlight for infants certain speech-sound tokens that collectively form more readily identifiable categories...
May 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28595176/individual-talker-and-token-covariation-in-the-production-of-multiple-cues-to-stop-voicing
#19
Meghan Clayards
BACKGROUND/AIMS: Previous research found that individual talkers have consistent differences in the production of segments impacting the perception of their speech by others. Speakers also produce multiple acoustic-phonetic cues to phonological contrasts. Less is known about how multiple cues covary within a phonetic category and across talkers. We examined differences in individual talkers across cues and whether token-by-token variability is a result of intrinsic factors or speaking style by examining within-category correlations...
June 9, 2017: Phonetica
https://www.readbyqxmd.com/read/28574442/electrophysiological-indices-of-audiovisual-speech-perception-in-the-broader-autism-phenotype
#20
Julia Irwin, Trey Avery, Jacqueline Turcios, Lawrence Brancazio, Barbara Cook, Nicole Landi
When a speaker talks, the consequences can be both heard (audio) and seen (visual). A novel visual phonemic restoration task was used to assess behavioral discrimination and neural signatures (event-related potentials, or ERPs) of audiovisual processing in typically developing children with a range of social and communicative skills, assessed using the Social Responsiveness Scale, a measure of traits associated with autism. An auditory oddball design presented two types of stimuli to the listener: a clear exemplar of an auditory consonant-vowel syllable /ba/ (the more frequently occurring standard stimulus), and a syllable in which the auditory cues for the consonant were substantially weakened, creating a stimulus more like /a/ (the infrequently presented deviant stimulus)...
June 2, 2017: Brain Sciences