Read by QxMD

Speech acoustics

Ewa Jacewicz, Robert Allen Fox
We examined whether the fundamental frequency (f0) of vowels is influenced by regional variation, aiming to (1) establish how the relationship between vowel height and f0 ("intrinsic f0") is utilized in regional vowel systems and (2) determine whether regional varieties differ in their implementation of the effects of phonetic context on f0 variations. An extended set of acoustic measures explored f0 in vowels in isolated tokens (experiment 1) and in connected speech (experiment 2) from 36 women representing 3 different varieties of American English...
April 11, 2018: Phonetica
Shawn M Stevens, Andrew Redmann, Kayla Whitaker, Alyson Ruotanen, Lisa Houston, Theresa Hammer, Ravi N Samy
OBJECTIVE: Report on the safety/efficacy of a novel, carbon dioxide (CO2) laser-assisted protocol for hearing-preservation cochlear implantation (HPCI) and electric-acoustic stimulation (EAS). STUDY DESIGN: Retrospective case review. SETTING: Tertiary referral center. PATIENTS: Adult patients meeting established criteria for HPCI and EAS. INTERVENTION: Therapeutic/rehabilitative. A standardized protocol using a CO2 laser to achieve meticulous hemostasis and perform the cochleostomy was evaluated...
April 11, 2018: Otology & Neurotology
Matthew T Carlson
Language-specific restrictions on sound sequences in words can lead to automatic perceptual repair of illicit sound sequences. As an example, no Spanish words begin with /s/-consonant sequences ([#sC]), and where necessary, as in foreign loanwords, [#sC] is repaired by inserting an initial [e] (cf. esnob, from English snob). As a result, Spanish speakers tend to perceive an illusory [e] before [#sC] sequences. Interestingly, this perceptual illusion is weaker in early Spanish-English bilinguals, whose other language, English, allows [#sC]...
April 1, 2018: Language and Speech
Florian Kattner, Wolfgang Ellermeier
Task-irrelevant speech and other temporally changing sounds are known to interfere with the short-term memorization of ordered verbal materials, as compared to silence or stationary sounds. It has been argued that this disruption of short-term memory (STM) may be due to (a) interference of automatically encoded acoustical fluctuations with the process of serial rehearsal or (b) attentional capture by salient task-irrelevant information. To disentangle the contributions of these 2 processes, the authors investigated whether the disruption of serial recall is due to the semantic or acoustical properties of task-irrelevant speech (Experiment 1)...
April 9, 2018: Journal of Experimental Psychology. Human Perception and Performance
Lauren T Meaux, Kyle R Mitchell, Alex S Cohen
INTRODUCTION: Patients with schizophrenia are consistently rated by clinicians as having high levels of blunted vocal affect and alogia. However, objective technologies have often failed to substantiate these abnormalities. It could be the case that negative symptoms are context-dependent. OBJECTIVES: The present study examined speech elicited under conditions demonstrated to exacerbate thought disorder. METHODS: The Rorschach Test was administered to 36 outpatients with schizophrenia and 25 nonpatient controls...
March 27, 2018: Comprehensive Psychiatry
Aimee E Stahl, Lisa Feigenson
Although the capacity of infants' working memory is highly constrained, infants can overcome this limit via chunking; for example, they can use spatial cues to group individual objects into sets, thereby increasing memory efficiency. Here we investigated the use of abstract social knowledge as a basis for chunking. In four experiments, we asked whether 16-month-olds can use their sensitivity to distinctions between languages to efficiently chunk an array. Infants saw four identical dolls hidden in a box. Without chunking cues, infants in previous experiments fail to remember this number of items in such arrays...
April 4, 2018: Journal of Experimental Child Psychology
Joshua G W Bernstein, Olga A Stakhovskaya, Gerald I Schuchman, Kenneth K Jensen, Matthew J Goupell
Current clinical practice in programming a cochlear implant (CI) for individuals with single-sided deafness (SSD) is to maximize the transmission of speech information via the implant, with the implicit assumption that this will also result in improved spatial-hearing abilities. However, binaural sensitivity is reduced by interaural place-of-stimulation mismatch, a likely occurrence with a standard CI frequency-to-electrode allocation table (FAT). As a step toward reducing interaural mismatch, this study investigated whether a test of interaural-time-difference (ITD) discrimination could be used to estimate the acoustic frequency yielding the best place match for a given CI electrode...
January 2018: Trends in Hearing
Arne Kirkhorn Rødvik, Janne von Koss Torkildsen, Ona Bø Wie, Marit Aarvaag Storaker, Juha Tapio Silvola
Purpose: The purpose of this systematic review and meta-analysis was to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel cochlear implants (CIs) tested with consonant-vowel-consonant and vowel-consonant-vowel nonsense syllables. Method: Six electronic databases were searched for peer-reviewed articles reporting consonant and vowel identification scores in CI users measured by nonsense words...
April 4, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
Yan H Yu, Valerie L Shafer, Elyse S Sussman
Speech perception behavioral research suggests that rates of sensory memory decay are dependent on stimulus properties at more than one level (e.g., acoustic level, phonemic level). The neurophysiology of sensory memory decay rate has rarely been examined in the context of speech processing. In a lexical tone study, we showed that long-term memory representation of lexical tone slows the decay rate of sensory memory for these tones. Here, we tested the hypothesis that long-term memory representation of vowels slows the rate of auditory sensory memory decay in a similar way to that of lexical tone...
2018: Frontiers in Psychology
Jiří Přibil, Anna Přibilová, Ivan Frollo
This article compares open-air and whole-body magnetic resonance imaging (MRI) equipment working with a weak magnetic field as regards the methods of its generation, spectral properties of mechanical vibration and acoustic noise produced by gradient coils during the scanning process, and the measured noise intensity. These devices are used for non-invasive MRI reconstruction of the human vocal tract during phonation with simultaneous speech recording. In this case, the vibration and noise have a negative influence on the quality of the speech signal...
April 5, 2018: Sensors
Dimitar Spirrov, Maaike Van Eeckhoutte, Lieselot Van Deun, Tom Francart
BACKGROUND: People who use a cochlear implant together with a contralateral hearing aid-so-called bimodal listeners-have poor localisation abilities and sounds are often not balanced in loudness across ears. In order to address the latter, a loudness balancing algorithm was created, which equalises the loudness growth functions for the two ears. The algorithm uses loudness models in order to continuously adjust the two signals to loudness targets. Previous tests demonstrated improved binaural balance, improved localisation, and better speech intelligibility in quiet for soft phonemes...
2018: PloS One
Liquan Liu, Jia Hoong Ong, Alba Tuninetti, Paola Escudero
Research investigating listeners' neural sensitivity to speech sounds has largely focused on segmental features. We examined Australian English listeners' perception and learning of a supra-segmental feature, pitch direction in a non-native tonal contrast, using a passive oddball paradigm and electroencephalography. The stimuli were two contours generated from naturally produced high-level and high-falling tones in Mandarin Chinese, differing only in pitch direction (Liu and Kager, 2014). While both contours had similar pitch onsets, the pitch offset of the falling contour was lower than that of the level one...
2018: Frontiers in Psychology
Mario E Archila-Meléndez, Giancarlo Valente, Joao M Correia, Rob P W Rouhl, Vivianne H van Kranen-Mastenbroek, Bernadette M Jansma
Sensorimotor integration, the translation between acoustic signals and motoric programs, may constitute a crucial mechanism for speech. During speech perception, the acoustic-motoric translations include the recruitment of cortical areas for the representation of speech articulatory features, such as place of articulation. Selective attention can shape the processing and performance of speech perception tasks. Whether and where sensorimotor integration takes place during attentive speech perception remains to be explored...
March 2018: ENeuro
Sarah E Yoho, Eric W Healy, Carla L Youngdahl, Tyson S Barrett, Frédéric Apoux
Band-importance functions created using the compound method [Apoux and Healy (2012). J. Acoust. Soc. Am. 132, 1078-1087] provide more detail than those generated using the ANSI technique, necessitating and allowing a re-examination of the influences of speech material and talker on the shape of the band-importance function. More specifically, the detailed functions may reflect, to a larger extent, acoustic idiosyncrasies of the individual talker's voice. Twenty-one band functions were created using standard speech materials and recordings by different talkers...
March 2018: Journal of the Acoustical Society of America
Sheila Flanagan, Usha Goswami
Recent models of the neural encoding of speech suggest a core role for amplitude modulation (AM) structure, particularly regarding AM phase alignment. Accordingly, speech tasks that measure linguistic development in children may exhibit systematic properties regarding AM structure. Here, the acoustic structure of spoken items in child phonological and morphological tasks, phoneme deletion and plural elicitation, was investigated. The phase synchronisation index (PSI), reflecting the degree of phase alignment between pairs of AMs, was computed for 3 AM bands (delta, theta, beta/low gamma; 0...
March 2018: Journal of the Acoustical Society of America
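The phase synchronisation index described in the abstract above is typically computed as the magnitude of the mean complex n:m phase difference between two amplitude modulations. Below is a minimal numeric sketch, assuming sinusoidal envelopes and an FFT-based Hilbert transform; the function names and the 1:2 delta/theta example are illustrative, not taken from the paper.

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via an FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.angle(np.fft.ifft(X * h))

def phase_sync_index(env_slow, env_fast, n=1, m=1):
    """n:m phase synchronisation index between two amplitude envelopes.
    1.0 = perfectly phase-locked, 0.0 = no consistent phase relation."""
    phi_slow = analytic_phase(env_slow)
    phi_fast = analytic_phase(env_fast)
    return np.abs(np.mean(np.exp(1j * (n * phi_fast - m * phi_slow))))

# Toy "envelopes": a 2 Hz delta-rate AM, a 1:2-locked 4 Hz theta-rate AM,
# and an unlocked 4.7 Hz AM (frequencies chosen to fit exact DFT bins).
fs = 100
t = np.arange(0, 10, 1 / fs)
delta = np.cos(2 * np.pi * 2 * t)
psi = phase_sync_index(delta, np.cos(2 * np.pi * 4 * t), n=1, m=2)
psi_unlocked = phase_sync_index(delta, np.cos(2 * np.pi * 4.7 * t), n=1, m=2)
```

The locked pair yields a PSI near 1, while the incommensurate pair drifts through all phase relations and averages out to near 0.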
Guangting Mai, Jyrki Tuomainen, Peter Howell
Speech-in-noise (SPIN) perception involves neural encoding of temporal acoustic cues. Cues include temporal fine structure (TFS) and envelopes that modulate at syllable (Slow-rate ENV) and fundamental frequency (F0-rate ENV) rates. Here the relationship between speech-evoked neural responses to these cues and SPIN perception was investigated in older adults. Theta-band phase-locking values (PLVs) that reflect cortical sensitivity to Slow-rate ENV and peripheral/brainstem frequency-following responses phase-locked to F0-rate ENV (FFR_ENV_F0) and TFS (FFR_TFS) were measured from scalp-electroencephalography responses to a repeated speech syllable in steady-state speech-shaped noise (SpN) and 16-speaker babble noise (BbN)...
March 2018: Journal of the Acoustical Society of America
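Phase-locking values of the kind measured in the study above are commonly defined as the magnitude of the trial-averaged unit phasor of instantaneous phase: identical phase across trials gives 1, random phase gives values near 0. A toy sketch with simulated trial phases follows; the trial counts and jitter level are assumptions for illustration only.

```python
import numpy as np

def plv(phases):
    """Phase-locking value across trials.
    phases: array of shape (n_trials, n_samples), instantaneous phase in radians.
    Returns the PLV per sample, each in [0, 1]."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

rng = np.random.default_rng(1)
n_trials, n_samples = 50, 200
t = np.linspace(0, 1, n_samples)

# Evoked-like response: the same 5 Hz phase trajectory on every trial,
# plus a small per-trial phase jitter (0.2 rad standard deviation).
locked = 2 * np.pi * 5 * t + rng.normal(0, 0.2, (n_trials, 1))
# Non-locked control: uniformly random phase on every trial and sample.
random_phase = rng.uniform(-np.pi, np.pi, (n_trials, n_samples))

plv_locked = plv(locked).mean()
plv_random = plv(random_phase).mean()
```

With 50 trials the locked condition stays close to 1 (roughly exp(-sigma^2/2) for Gaussian jitter), while the random condition hovers near the 1/sqrt(n_trials) noise floor.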
Toros Ufuk Senan, Sam Jelfs, Armin Kohlrausch
The effect of irrelevant sounds on short-term memory was investigated in two experiments using noise-vocoded speech stimuli (NVSS). Speech samples were systematically modified by a noise-vocoder and a set of stimuli varying from amplitude-modulated white noise to intelligible speech was created. Eight NVSS conditions, composed of 1-, 2-, 4-, 6-, 9-, 12-, 15-, and 18-bands, were used as the distracting stimuli in a digit-recall task next to the speech and silence conditions. The results showed that performance decreased with the number of frequency bands up to the 6-bands condition, but there was no influence of number of bands on performance beyond six bands...
March 2018: Journal of the Acoustical Society of America
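A noise vocoder of the kind used to generate such stimuli splits speech into frequency bands, extracts each band's amplitude envelope, and uses the envelope to modulate band-limited noise; fewer bands means less spectral detail and lower intelligibility. The sketch below is a simplified FFT-masking implementation with assumed log-spaced band edges and a 20 ms envelope smoother; the original NVSS stimuli may use different filters and parameters.

```python
import numpy as np

def noise_vocode(signal, fs, n_bands, fmin=100.0, fmax=8000.0, seed=0):
    """Noise-vocode a signal: split it into n_bands log-spaced bands,
    extract each band's amplitude envelope (rectify + 20 ms moving average),
    and use it to modulate band-limited white noise."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # log-spaced band edges
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    noise = rng.standard_normal(len(signal))
    win_len = int(0.02 * fs)                        # 20 ms smoothing window
    win = np.ones(win_len) / win_len
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        # Band-pass both signal and noise via FFT masking.
        band = np.fft.irfft(np.fft.rfft(signal) * mask, len(signal))
        nband = np.fft.irfft(np.fft.rfft(noise) * mask, len(signal))
        env = np.convolve(np.abs(band), win, mode="same")
        out += env * nband
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)   # stand-in for a recorded speech sample
voc = noise_vocode(tone, fs, n_bands=6)
```

Varying `n_bands` from 1 to 18 reproduces the kind of stimulus continuum the experiment describes, from amplitude-modulated noise toward intelligible speech.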
Colm O'Reilly, Kangkuso Analuddin, David J Kelly, Naomi Harte
Over time, a bird population's acoustic and morphological features can diverge from the parent species. A quantitative measure of difference between two populations of species/subspecies is extremely useful to zoologists. Work in this paper takes a dialect difference system first developed for speech and refines it to automatically measure vocalisation difference between bird populations by extracting pitch contours. The pitch contours are transposed into pitch codes. A variety of codebook schemes are proposed to represent the contour structure, including a vector quantization approach...
March 2018: Journal of the Acoustical Society of America
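The pipeline described above (pitch contours transposed into pitch codes whose distributions are compared across populations) can be caricatured with a toy direction-coding scheme; the paper's actual codebook schemes, including the vector-quantization approach, are richer than this sketch, and all names here are illustrative.

```python
import numpy as np

def pitch_codes(contour, tol=1.0):
    """Transpose a pitch contour (Hz) into direction codes:
    'U' (up), 'D' (down), 'S' (steady within tol Hz)."""
    diffs = np.diff(np.asarray(contour, dtype=float))
    return "".join("U" if d > tol else "D" if d < -tol else "S" for d in diffs)

def code_histogram(codes_list, symbols="UDS"):
    """Normalised frequency of each code symbol across a population's contours."""
    joined = "".join(codes_list)
    counts = np.array([joined.count(s) for s in symbols], dtype=float)
    return counts / counts.sum()

def population_distance(pop_a, pop_b):
    """A simple dialect-difference score: L1 distance between the two
    populations' code histograms (0 = identical usage, 2 = disjoint)."""
    return np.abs(code_histogram(pop_a) - code_histogram(pop_b)).sum()

# Two toy populations: mostly rising contours vs. mostly falling contours.
rising = [pitch_codes(np.linspace(200, 300, 20)) for _ in range(5)]
falling = [pitch_codes(np.linspace(300, 200, 20)) for _ in range(5)]
d_diff = population_distance(rising, falling)
d_same = population_distance(rising, rising)
```

A real codebook scheme would cluster multi-step contour shapes rather than single-step directions, but the comparison logic (summarise each population's code usage, then measure the distance between summaries) is the same.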
Jing Yang, Jinyu Qian, Xueqing Chen, Volker Kuehnel, Julia Rehmann, Andreas von Buol, Yulin Li, Cuncun Ren, Bo Liu, Li Xu
The present study examined the change in spectral properties of Mandarin vowels and fricatives caused by nonlinear frequency compression (NLFC) used in hearing instruments and how these changes affect the perception of speech sounds in normal-hearing listeners. Speech materials, including a list of Mandarin monosyllables in the form of /dV/ (12 vowels) and /Ca/ (five fricatives), were recorded from 20 normal-hearing, native Mandarin-speaking adults (ten males and ten females). NLFC was based on Phonak SoundRecover algorithms...
March 2018: Journal of the Acoustical Society of America
Jing Xia, Buye Xu, Shareka Pentony, Jingjing Xu, Jayaganesh Swaminathan
Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than that of their normal-hearing counterparts...
March 2018: Journal of the Acoustical Society of America

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators for complex queries

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"