
Vowel perception

Kenneth S Henry, Kristina S Abrams, Johanna Forst, Matthew J Mender, Erikson G Neilans, Fabio Idrobo, Laurel H Carney
Vowels make a strong contribution to speech perception under natural conditions. Vowels are encoded in the auditory nerve primarily through neural synchrony to temporal fine structure and to envelope fluctuations rather than through average discharge rate. Neural synchrony is thought to contribute less to vowel coding in central auditory nuclei, consistent with more limited synchronization to fine structure and the emergence of average-rate coding of envelope fluctuations. However, this hypothesis is largely unexplored, especially in background noise...
October 20, 2016: Journal of the Association for Research in Otolaryngology: JARO
Manuj Yadav, Densil Cabrera
OBJECTIVES: This paper aims to study the effect of room acoustics and phonemes on the perception of loudness of one's own voice (autophonic loudness) for a group of trained singers. METHODS: For a set of five phonemes, 20 singers vocalized over several autophonic loudness ratios, while maintaining pitch constancy over extreme voice levels, within five simulated rooms. RESULTS: There were statistically significant differences in the slope of the autophonic loudness function (logarithm of autophonic loudness as a function of voice sound pressure level) for the five phonemes, with slopes ranging from 1...
October 11, 2016: Journal of Voice: Official Journal of the Voice Foundation
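The "autophonic loudness function" referenced in this abstract is the logarithm of autophonic loudness plotted against the voice sound pressure level actually produced, with the slope of that line compared across phonemes. Below is a minimal sketch of how such a slope can be estimated; the values, units, and normalization are made up for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical magnitude-production data for one phoneme (not from the
# study): produced voice level in dB SPL and the autophonic loudness
# ratio the singer assigned to each production.
voice_spl_db = np.array([60.0, 66.0, 72.0, 78.0, 84.0, 90.0])
loudness_ratio = np.array([0.5, 1.0, 2.1, 4.0, 7.8, 16.0])

# The autophonic loudness function plots log(autophonic loudness)
# against voice SPL; its slope is the quantity compared across
# phonemes. A least-squares line gives that slope directly.
slope, intercept = np.polyfit(voice_spl_db, np.log10(loudness_ratio), 1)
print(f"slope of the autophonic loudness function: {slope:.3f} log10 units/dB")
```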
Anna Oleszkiewicz, Katarzyna Pisanski, Kinga Lachowicz-Tabaczek, Agnieszka Sorokowska
The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tasks. The present study is the first to examine whether visual experience influences the development of social stereotypes that are formed on the basis of nonverbal vocal characteristics (i...
October 13, 2016: Psychonomic Bulletin & Review
Benjamin Munson, Sarah K Schellinger, Jan Edwards
Previous research has shown that continuous rating scales can be used to assess phonetic detail in children's productions, and could potentially be used to detect covert contrasts. Two experiments examined whether continuous rating scales have the additional benefit of being less susceptible to task-related biasing than categorical phonetic transcriptions. In both experiments, judgements of children's productions of /s/ and /θ/ were interleaved with two types of rating tasks designed to induce bias: continuous judgements of a parameter whose variation is itself relatively more continuous (gender typicality of their speech) in one biasing condition, and categorical judgements of a parameter that is relatively less continuous (the vowel they produced) in the other biasing condition...
October 13, 2016: Clinical Linguistics & Phonetics
Anna Dora Manca, Mirko Grimaldi
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves a mapping from continuous acoustic waveforms onto the discrete phonological units computed to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons that are involved in the N1m act to construct a sensory memory of the stimulus due to spatially and temporally distributed activation patterns within the auditory cortex...
2016: Frontiers in Psychology
Natasha Warner, Anne Cutler
BACKGROUND/AIMS: Evidence from spoken word recognition suggests that for English listeners, distinguishing full versus reduced vowels is important, but discerning stress differences involving the same full vowel (as in mu- from music or museum) is not. In Dutch, in contrast, the latter distinction is important. This difference arises from the relative frequency of unstressed full vowels in the two vocabularies. The goal of this paper is to determine how this difference in the lexicon influences the perception of stressed versus unstressed vowels...
October 7, 2016: Phonetica
Alexandre Lehmann, Diana Jimena Arias, Marc Schönwiesner
Neurons in the auditory cortex synchronize their responses to temporal regularities in sound input. This coupling or "entrainment" is thought to facilitate beat extraction and rhythm perception in temporally structured sounds, such as music. As a consequence of such entrainment, the auditory cortex responds to an omitted (silent) sound in a regular sequence. Although previous studies suggest that the auditory brainstem frequency-following response (FFR) exhibits some of the beat-related effects found in the cortex, it is unknown whether omissions of sounds evoke a brainstem response...
September 22, 2016: Neuroscience
Wei Hu, Lin Mi, Zhen Yang, Sha Tao, Mingshuang Li, Wenjing Wang, Qi Dong, Chang Liu
Difficulties with second-language vowel perception may be related to the significant challenges in using acoustic-phonetic cues. This study investigated the effects of perception training with duration-equalized vowels on native Chinese listeners' English vowel perception and their use of acoustic-phonetic cues. Seventeen native Chinese listeners were perceptually trained with duration-equalized English vowels, and another 17 native Chinese listeners watched English videos as a control group. Both groups were tested with English vowel identification and vowel formant discrimination before training, immediately after training, and three months later...
2016: PloS One
Nadine Lavan, Sophie K Scott, Carolyn McGettigan
In 2 behavioral experiments, we explored how the extraction of identity-related information from familiar and unfamiliar voices is affected by naturally occurring vocal flexibility and variability, introduced by different types of vocalizations and levels of volitional control during production. In a first experiment, participants performed a speaker discrimination task on vowels, volitional (acted) laughter, and spontaneous (authentic) laughter from 5 unfamiliar speakers. We found that performance was significantly impaired for spontaneous laughter, a vocalization produced under reduced volitional control...
September 15, 2016: Journal of Experimental Psychology. General
Lisa McCarthy, Kirk N Olsen
Continuous increases of acoustic intensity (up-ramps) can indicate a looming (approaching) sound source in the environment, whereas continuous decreases of intensity (down-ramps) can indicate a receding sound source. From psychoacoustic experiments, an "adaptive perceptual bias" for up-ramp looming tonal stimuli has been proposed (Neuhoff, 1998). This theory postulates that (1) up-ramps are perceptually salient because of their association with looming and potentially threatening stimuli in the environment; (2) tonal stimuli are perceptually salient because of an association with single and potentially threatening biological sound sources in the environment, relative to white noise, which is more likely to arise from dispersed signals and nonthreatening/nonbiological sources (wind/ocean)...
September 8, 2016: Attention, Perception & Psychophysics
Vincent Aubanel, Chris Davis, Jeesun Kim
A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximize processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing...
2016: Frontiers in Human Neuroscience
Christian E Stilp, Paul W Anderson, Ashley A Assgari, Gregory M Ellis, Pavel Zahorik
When perceiving speech, listeners compensate for reverberation and stable spectral peaks in the speech signal. Despite natural listening conditions usually adding both reverberation and spectral coloration, these processes have only been studied separately. Reverberation smears spectral peaks across time, which is predicted to increase listeners' compensation for these peaks. This prediction was tested using sentences presented with or without a simulated reverberant sound field. All sentences had a stable spectral peak (added by amplifying frequencies matching the second formant frequency [F2] in the target vowel) before a test vowel varying from /i/ to /u/ in F2 and spectral envelope (tilt)...
September 3, 2016: Hearing Research
Amanda H Wilson, Agnès Alsius, Martin Paré, Kevin G Munhall
PURPOSE: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. METHOD: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment...
August 1, 2016: Journal of Speech, Language, and Hearing Research: JSLHR
Andrea Deme
High-pitched sung vowels may be considered phonetically "underspecified" because of (i) the tuning of the F1 to the f0 accompanying pitch raising and (ii) the wide harmonic spacing of the voice source resulting in the undersampling of the vocal tract transfer function. Therefore, sung vowel intelligibility is expected to decrease as the f0 increases. Based on the literature of speech perception, it is often suggested that sung vowels are better perceived if uttered in consonantal (CVC) context than in isolation even at high f0...
August 11, 2016: Journal of Voice: Official Journal of the Voice Foundation
Adam P Vogel, Mayumi I Wardrop, Joanne E Folker, Matthis Synofzik, Louise A Corben, Martin B Delatycki, Shaheen N Awan
BACKGROUND: Friedreich Ataxia (FRDA) is the most common hereditary ataxia, with dysarthria as one of its key clinical signs. OBJECTIVE: To describe the voice profile of individuals with FRDA to inform outcome marker development and goals of speech therapy. METHODS: Thirty-six individuals with FRDA and 30 age-matched controls provided sustained vowel and connected speech samples. Speech and voice samples were analyzed acoustically using the Analysis of Dysphonia in Speech and Voice program and perceptually using the Consensus Auditory-Perceptual Evaluation of Voice form...
August 5, 2016: Journal of Voice: Official Journal of the Voice Foundation
Jian Zhu, Yaping Chen
Relatively little attention has been paid to the perception of the three-way contrast between unaspirated affricates, aspirated affricates and fricatives in Mandarin Chinese. This study reports two experiments that explore the acoustic cues relevant to the contrast between the Mandarin retroflex series /tʂ/, /tʂ(h)/ and /ʂ/ in continuous speech. Twenty participants performed two three-alternative forced-choice tasks, in which acoustic cues including closure, frication duration (FD), aspiration, and vocalic contexts (VCs) were systematically manipulated and presented in a carrier phrase...
July 2016: Journal of the Acoustical Society of America
Robert Allan Sharpe, Elizabeth L Camposeo, Wasef K Muzaffar, Meredith A Holcomb, Judy R Dubno, Ted A Meyer
The objective of this study was to examine how age and implanted ear contribute to functional outcomes with cochlear implantation (CI). A retrospective review was performed on 96 adults who underwent unilateral CI. Older adults with right-ear implants had higher Hearing in Noise Test (HINT) scores at 1 year by 10.3% (p = 0.06). When adjusted to rationalized arcsine units (rau), right-ear HINT scores in older adults were higher by 12.1 rau (p = 0.04). Older adults had an 8.9% advantage on the right side compared to the left in post- versus preimplant scores for consonant-vowel nucleus-consonant words (p = 0...
July 23, 2016: Audiology & Neuro-otology
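The "rationalized arcsine units (rau)" mentioned above refer to Studebaker's (1985) transform, which converts percent-correct scores onto a scale that behaves more linearly near the floor and ceiling of a test. The sketch below is illustrative only, not the analysis code used in the study, and the example score is hypothetical.

```python
import numpy as np

def rationalized_arcsine(correct, total):
    """Studebaker's (1985) rationalized arcsine transform.

    Maps a score of `correct` items out of `total` onto rationalized
    arcsine units (rau), which range from roughly -23 (0% correct)
    to about +123 (100% correct).
    """
    theta = (np.arcsin(np.sqrt(correct / (total + 1.0)))
             + np.arcsin(np.sqrt((correct + 1.0) / (total + 1.0))))
    return (146.0 / np.pi) * theta - 23.0

# Example: 45 of 50 items correct (90%) comes out at about 92 rau.
print(rationalized_arcsine(45, 50))
```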
Cynthia P Blanco, Colin Bannard, Rajka Smiljanic
Early bilinguals often show as much sensitivity to L2-specific contrasts as monolingual speakers of the L2, but most work on cross-language speech perception has focused on isolated segments, and typically only on neighboring vowels or stop contrasts. In tasks that include sounds in context, listeners' success is more variable, so segment discrimination in isolation may not adequately represent the phonetic detail in stored representations. The current study explores the relationship between language experience and sensitivity to segmental cues in context by comparing the categorization patterns of monolingual English listeners and early and late Spanish-English bilinguals...
2016: Frontiers in Psychology
Christopher J Markiewicz, Jason W Bohland
Speech repetition relies on a series of distributed cortical representations and functional pathways. A speaker must map auditory representations of incoming sounds onto learned speech items, maintain an accurate representation of those items in short-term memory, interface that representation with the motor output system, and fluently articulate the target sequence. A "dorsal stream" consisting of posterior temporal, inferior parietal and premotor regions is thought to mediate auditory-motor representations and transformations, but the nature and activation of these representations for different portions of speech repetition tasks remain unclear...
November 1, 2016: NeuroImage
Riki Taitelbaum-Swead, Leah Fostick
OBJECTIVE: Everyday life includes fluctuating noise levels, resulting in continuously changing speech intelligibility. The study aims were: (1) to quantify the age-related decrease in speech perception as noise level increases, and (2) to test the effect of age on context usage at the word level (where fewer contextual cues are available). PATIENTS AND METHODS: A total of 24 young adults (age 20-30 years) and 20 older adults (age 60-75 years) were tested...
2016: Folia Phoniatrica et Logopaedica

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use parentheses to group terms

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

Combine operators in a single query

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"