Read by QxMD

Vowel perception

Kaori Idemaru, Peipei Wei, Lucy Gubbins
This study reports an exploratory analysis of the acoustic characteristics of second language (L2) speech that give rise to the perception of a foreign accent. Japanese speech samples were collected from American English and Mandarin Chinese speakers (n = 16 in each group) studying Japanese. The L2 participants and native speakers (n = 10) provided speech samples modeled after six short sentences. Segmental (vowels and stops) and prosodic features (rhythm, tone, and fluency) were examined. Native Japanese listeners (n = 10) rated the samples with regard to degrees of foreign accent...
May 1, 2018: Language and Speech
Alan Wiinberg, Johannes Zaar, Torsten Dau
This study examined the perceptual consequences of three speech enhancement schemes based on multiband nonlinear expansion of temporal envelope fluctuations between 10 and 20 Hz: (a) "idealized" envelope expansion of the speech before the addition of stationary background noise, (b) envelope expansion of the noisy speech, and (c) envelope expansion of only those time-frequency segments of the noisy speech that exhibited signal-to-noise ratios (SNRs) above -10 dB. Linear processing was considered as a reference condition...
January 2018: Trends in Hearing
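The envelope-expansion idea in the abstract above can be illustrated for a single frequency band. The following is a hedged, illustrative sketch only, not the authors' multiband algorithm: the function name `expand_envelope` and the power-law expansion rule are invented for this example; only the 10-20 Hz modulation range is taken from the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def expand_envelope(x, fs, lo=10.0, hi=20.0, exponent=2.0):
    """Single-band sketch: nonlinearly expand temporal-envelope
    fluctuations in the lo-hi Hz modulation range (assumed scheme)."""
    env = np.abs(hilbert(x))                        # Hilbert envelope
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    mod = sosfiltfilt(sos, env)                     # keep 10-20 Hz fluctuations only
    # Power-law expansion of the fluctuations, added back onto the envelope
    boosted = env + np.sign(mod) * np.abs(mod) ** exponent
    boosted = np.maximum(boosted, 0.0)              # envelopes are non-negative
    fine = x / np.maximum(env, 1e-12)               # temporal fine structure
    return fine * boosted

# Example: a 1 kHz tone with 15 Hz amplitude modulation (inside the 10-20 Hz band)
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) * (1.0 + 0.5 * np.sin(2 * np.pi * 15 * t))
y = expand_envelope(x, fs)
```

A real multiband scheme would first split the signal into analysis bands, apply this per band, and resynthesize; the abstract's conditions (b) and (c) additionally operate on noisy speech or SNR-selected time-frequency segments.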
Arezoo Saffarian, Yunes Amiri Shavaki, Gholam Ali Shahidi, Zahra Jafari
BACKGROUND AND OBJECTIVES: Emotion perception plays a major role in proper communication with people in different social interactions. Nonverbal affect bursts can be used to evaluate vocal emotion perception. The present study was a preliminary step toward establishing the psychometric properties of the Persian version of the Montreal Affective Voices (MAV) test and investigating the effect of Parkinson disease (PD) on vocal emotion perception. METHODS: The short, emotional sound made by pronouncing the vowel "a" in Persian was recorded by 22 actors and actresses to develop the Persian version of the MAV, the Persian Affective Voices (PAV), for emotions of happiness, sadness, pleasure, pain, anger, disgust, fear, surprise, and neutrality...
May 4, 2018: Journal of Voice: Official Journal of the Voice Foundation
Mishaela DiNino, Julie G Arenberg
Children's performance on psychoacoustic tasks improves with age, but inadequate auditory input may delay this maturation. Cochlear implant (CI) users receive a degraded auditory signal with reduced frequency resolution compared with normal, acoustic hearing; thus, immature auditory abilities may contribute to the variation among pediatric CI users' speech recognition scores. This study investigated relationships between age-related variables, spectral resolution, and vowel identification scores in prelingually deafened, early-implanted children with CIs compared with normal hearing (NH) children...
January 2018: Trends in Hearing
Merel Maslowski, Antje S Meyer, Hans Rutger Bosker
Listeners are known to track statistical regularities in speech. Yet, which temporal cues are encoded is unclear. This study tested effects of talker-specific habitual speech rate and talker-independent average speech rate (heard over a longer period of time) on the perception of the temporal Dutch vowel contrast /ɑ/-/a:/. First, Experiment 1 replicated that slow local (surrounding) speech contexts induce fewer long /a:/ responses than faster contexts. Experiment 2 tested effects of long-term habitual speech rate...
April 26, 2018: Journal of Experimental Psychology. Learning, Memory, and Cognition
François Prévost, Alexandre Lehmann
Cochlear implants restore hearing in deaf individuals, but speech perception remains challenging. Poor discrimination of spectral components is thought to account for limitations of speech recognition in cochlear implant users. We investigated how combined variations of spectral components along two orthogonal dimensions can maximize neural discrimination between two vowels, as measured by mismatch negativity. Adult cochlear implant users and matched normal-hearing listeners underwent electroencephalographic event-related potentials recordings in an optimum-1 oddball paradigm...
April 1, 2018: Clinical EEG and Neuroscience: Official Journal of the EEG and Clinical Neuroscience Society (ENCS)
Drew Weatherhead, Katherine S White
How do our expectations about speakers shape speech perception? Adults' speech perception is influenced by social properties of the speaker (e.g., race). When in development do these influences begin? In the current study, 16-month-olds heard familiar words produced in their native accent (e.g., "dog") and in an unfamiliar accent involving a vowel shift (e.g., "dag"), in the context of an image of either a same-race speaker or an other-race speaker. Infants' interpretation of the words depended on the speaker's race...
April 12, 2018: Cognition
Fatemeh Hajiaghababa, Hamid R Marateb, Saeed Kermani
BACKGROUND AND OBJECTIVE: Cochlear implants (CIs) are electronic devices restoring partial hearing to deaf individuals with profound hearing loss. In this paper, a new plug-in for traditional IIR filter-banks (FBs) is presented for cochlear implants based on wavelet neural networks (WNNs). Having provided such a plug-in for commercially available CIs, it is possible not only to use available hardware in the market but also to optimize their performance compared with the state of the art...
June 2018: Computer Methods and Programs in Biomedicine
François-Xavier Brajot, Don Nguyen, Jeffrey DiGiovanni, Vincent L Gracco
The role of somatosensory feedback in speech and the perception of loudness was assessed in adults without speech or hearing disorders. Participants completed two tasks: loudness magnitude estimation of a short vowel and oral reading of a standard passage. Both tasks were carried out in each of three conditions: no-masking, auditory masking alone, and mixed auditory masking plus vibration of the perilaryngeal area. A Lombard effect was elicited in both masking conditions: speakers unconsciously increased vocal intensity...
April 5, 2018: Experimental Brain Research. Experimentelle Hirnforschung. Expérimentation Cérébrale
Arne Kirkhorn Rødvik, Janne von Koss Torkildsen, Ona Bø Wie, Marit Aarvaag Storaker, Juha Tapio Silvola
Purpose: The purpose of this systematic review and meta-analysis was to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel cochlear implants (CIs) tested with consonant-vowel-consonant and vowel-consonant-vowel nonsense syllables. Method: Six electronic databases were searched for peer-reviewed articles reporting consonant and vowel identification scores in CI users measured by nonsense words...
April 4, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
Yan H Yu, Valerie L Shafer, Elyse S Sussman
Speech perception behavioral research suggests that rates of sensory memory decay are dependent on stimulus properties at more than one level (e.g., acoustic level, phonemic level). The neurophysiology of sensory memory decay rate has rarely been examined in the context of speech processing. In a lexical tone study, we showed that long-term memory representation of lexical tone slows the decay rate of sensory memory for these tones. Here, we tested the hypothesis that long-term memory representation of vowels slows the rate of auditory sensory memory decay in a similar way to that of lexical tone...
2018: Frontiers in Psychology
Mario E Archila-Meléndez, Giancarlo Valente, Joao M Correia, Rob P W Rouhl, Vivianne H van Kranen-Mastenbroek, Bernadette M Jansma
Sensorimotor integration, the translation between acoustic signals and motoric programs, may constitute a crucial mechanism for speech. During speech perception, the acoustic-motoric translations include the recruitment of cortical areas for the representation of speech articulatory features, such as place of articulation. Selective attention can shape the processing and performance of speech perception tasks. Whether and where sensorimotor integration takes place during attentive speech perception remains to be explored...
March 2018: ENeuro
Andréa Felice Dos Santos Malerbi, Maria Valéria Schmidt Goffi-Gomez, Robinson Koji Tsuji, Marcos de Queiroz Teles Gomes, Rubens de Brito Neto, Ricardo Ferreira Bento
INTRODUCTION: An auditory brainstem implant (ABI) is an option for auditory rehabilitation in patients with totally ossified cochleae who cannot receive a conventional cochlear implant. OBJECTIVE: To evaluate the outcomes in audiometry and speech perception tests after the implantation of an ABI via the extended retrolabyrinthine approach in patients with postmeningitis hearing loss. MATERIALS AND METHODS: Ten patients, including children and adults, with postmeningitis hearing loss and bilateral totally ossified cochleae received an ABI in a tertiary center from 2009 to 2015...
April 1, 2018: Acta Oto-laryngologica
Eleanor Lawson, Jane Stuart-Smith, James M Scobbie
The cross-linguistic tendency of coda consonants to weaken, vocalize, or be deleted is shown to have a phonetic basis, resulting from gesture reduction, or variation in gesture timing. This study investigates the effects of the timing of the anterior tongue gesture for coda /r/ on acoustics and perceived strength of rhoticity, making use of two sociolects of Central Scotland (working- and middle-class) where coda /r/ is weakening and strengthening, respectively. Previous articulatory analysis revealed a strong tendency for these sociolects to use different coda /r/ tongue configurations: working- and middle-class speakers tend to use tip/front-raised and bunched variants, respectively; however, this finding does not explain working-class /r/ weakening...
March 2018: Journal of the Acoustical Society of America
Megan J Crowhurst
Lengthening and creaky voice are associated with prosodic finality in English. Listeners can use lengthening to identify both utterance-internal and final prosodic phrase boundaries and can use creak to locate utterance endings. Less is known about listeners' use of creak to locate internal prosodic boundaries and the relative importance assigned to duration and creak when both are present. Participants in two experiments segmented structurally ambiguous sentences in which duration and creak were manipulated to signal prosodic boundaries...
March 2018: Journal of the Acoustical Society of America
Jing Yang, Jinyu Qian, Xueqing Chen, Volker Kuehnel, Julia Rehmann, Andreas von Buol, Yulin Li, Cuncun Ren, Bo Liu, Li Xu
The present study examined the change in spectral properties of Mandarin vowels and fricatives caused by nonlinear frequency compression (NLFC) used in hearing instruments and how these changes affect the perception of speech sounds in normal-hearing listeners. Speech materials, including a list of Mandarin monosyllables in the form of /dV/ (12 vowels) and /Ca/ (five fricatives), were recorded from 20 normal-hearing, native Mandarin-speaking adults (ten males and ten females). NLFC was based on Phonak SoundRecover algorithms...
March 2018: Journal of the Acoustical Society of America
Maki Sakamoto, Junji Watanabe
Several studies have shown cross-modal associations between sounds and vision or gustation by asking participants to match pre-defined sound-symbolic words (SSWs), such as "bouba" or "kiki," with visual or gustatory materials. Here, we conducted an explorative study on cross-modal associations of tactile sensations using spontaneous production of Japanese SSWs and semantic ratings. The Japanese language was selected, because it has a large number of SSWs that can represent a wide range of tactile perceptual spaces with fine resolution, and it shows strong associations between sound and touch...
2018: Frontiers in Psychology
Payam Ghaffarvand Mokari, Stefan Werner
This study investigated the role of four cognitive abilities in second language (L2) vowel learning: inhibitory control, attention control, phonological short-term memory (PSTM), and acoustic short-term memory (AM). The participants were 40 Azerbaijani learners of Standard Southern British English. Their perception of L2 vowels was tested through a perceptual discrimination task before and after five sessions of high-variability phonetic training. Inhibitory control was significantly correlated with gains from training in the discrimination of L2 vowel pairs...
March 1, 2018: Language and Speech
Kevin A Peng, Mark B Lorenz, Steven R Otto, Derald E Brackmann, Eric P Wilkinson
OBJECTIVES/HYPOTHESIS: To report a series of patients with neurofibromatosis type 2 (NF2), where each patient underwent both cochlear implantation and auditory brainstem implantation for hearing rehabilitation, and to discuss factors influencing respective implant success. STUDY DESIGN: Retrospective case series. METHODS: Ten NF2 patients with both cochlear implantations and auditory brainstem implantations were retrospectively reviewed. Speech testing for auditory brainstem implants (ABIs) and cochlear implants (CIs) was performed separately...
March 24, 2018: Laryngoscope
Ariel Tankus, Itzhak Fried
BACKGROUND: Most patients with Parkinson's disease suffer from speech disorders characterized mainly by dysarthria and hypophonia. OBJECTIVE: To understand the deterioration of speech over the course of Parkinson's disease. METHODS: We intraoperatively recorded single-neuron activity in the subthalamic nucleus of 18 neurosurgical patients with Parkinson's disease undergoing implantation of a deep brain stimulator while the patients articulated five vowel sounds...
March 15, 2018: Neurosurgery

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
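The search semantics above can be sketched as a small matcher. This is an illustrative sketch only, not QxMD's actual search implementation; the helper names (`matches_term`, `matches_all`, `matches_any`) are invented for this example, and a real engine would also parse nested parentheses.

```python
import re

def matches_term(term: str, text: str) -> bool:
    """Evaluate one search term against a piece of text:
    "quoted phrase" -> exact phrase; stem* -> word-stem prefix;
    -word -> exclusion; plain word -> whole-word, case-insensitive."""
    text = text.lower()
    term = term.lower()
    if term.startswith('"') and term.endswith('"') and len(term) > 1:
        return term.strip('"') in text            # exact phrase
    if term.startswith('-'):
        return not matches_term(term[1:], text)   # exclusion ('minus' sign)
    if term.endswith('*'):                        # word-stem prefix match
        return re.search(r'\b' + re.escape(term[:-1]), text) is not None
    return re.search(r'\b' + re.escape(term) + r'\b', text) is not None

def matches_all(terms, text):   # AND: every term must match
    return all(matches_term(t, text) for t in terms)

def matches_any(terms, text):   # OR: at least one term must match
    return any(matches_term(t, text) for t in terms)

# water AND (cup OR glass)
title = "Drinking water from a glass or a bottle"
hit = matches_all(["water"], title) and matches_any(["cup", "glass"], title)
```

Note that whole-word matching is what makes `diabetic` and `diabetes` distinct terms (hence the OR example above), while `neuro*` deliberately drops the word boundary at the end to catch Neurology, Neuroscientist, and so on.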