Read by QxMD icon Read

Speech acoustics

Josef Seebacher, Franz Muigg, Natalie Fischer, Viktor Weichbold, Kurt Stephan, Patrick Zorowka, Harald R Bliem, Joachim Schmutzhard
OBJECTIVE: To study the long-term evolution of speech and intelligence in a child with partial deafness and normal hearing in the low frequencies after sequentially receiving cochlear implants in both ears. DESIGN: Retrospective chart review. STUDY SAMPLE: A male child aged 6 years was followed over a period of four years. RESULTS: The paediatric patient had normal hearing up to 1 kHz and profound hearing loss at all higher frequencies, symmetrical in both ears...
October 12, 2017: International Journal of Audiology
Antonio Elia Forte, Octave Etard, Tobias Reichenbach
Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation already occurs in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation...
October 10, 2017: eLife
M Hey, T Hocke, P Ambrosch
BACKGROUND: As part of postoperative cochlear implant (CI) diagnostics, speech comprehension tests are performed to monitor audiological outcome. In recent years, a trend toward improved suprathreshold speech intelligibility in quiet and an extension of intelligibility to softer sounds has been observed. Parallel to audiometric data, analysis of the patients' acoustic environment can take place by means of data logging in modern CI systems. OBJECTIVES: Which test levels reflect the individual listening environment in a relevant manner, and how can these be represented in a clinical audiometric setting? PATIENTS AND METHODS: In a retrospective analysis, data logs of 263 adult CI patients were evaluated for sound level and listening situation (quiet, speech in quiet, noise, speech in noise, music, and wind)...
October 6, 2017: HNO
Evaldas Vaiciukynas, Antanas Verikas, Adas Gelzinis, Marija Bacauskiene
This study investigates signals from sustained phonation and text-dependent speech modalities for Parkinson's disease screening. Phonation corresponds to the vowel /a/ voicing task and speech to the pronunciation of a short sentence in the Lithuanian language. Signals were recorded through two channels simultaneously, namely, acoustic cardioid (AC) and smartphone (SP) microphones. Additional modalities were obtained by splitting the speech recording into voiced and unvoiced parts. Information in each modality is summarized by 18 well-known audio feature sets...
2017: PLoS ONE
Yong Tae Hong, Min Ju Park, Ki Hwan Hong
PURPOSE: Laser cordectomy (LC) or radiotherapy (RT) is often recommended in the early stage of laryngeal cancer. We conducted perceptual and acoustic analyses to compare sustained vowels and stop consonants, since no previous article has evaluated both. Ultimately, we aimed to determine which management is superior in terms of speech production. SUBJECTS AND METHODS: A total of 28 patients who underwent LC or RT for early T1 glottic cancer were selected...
October 4, 2017: Logopedics, Phoniatrics, Vocology
Bastien Intartaglia, Travis White-Schwoch, Nina Kraus, Daniele Schön
Growing evidence shows that music and language experience affect the neural processing of speech sounds throughout the auditory system. Recent work has mainly focused on the benefits of musical practice for the processing of a native language or a tonal foreign language, both of which rely on pitch processing. The aim of the present study was to take this research a step further by investigating the effect of music training on the processing of English sounds by foreign listeners. We recorded subcortical electrophysiological responses to an English syllable in three groups of participants: native speakers, non-native nonmusicians, and non-native musicians...
October 3, 2017: Scientific Reports
Teresa Y C Ching, Vicky W Zhang, Earl E Johnson, Patricia Van Buynder, Sanna Hou, Lauren Burns, Laura Button, Christopher Flynn, Karen McGhie
OBJECTIVE: This study examined the influence of prescription on hearing aid (HA) fitting characteristics and 5-year developmental outcomes of children. DESIGN: A randomised controlled trial implemented as part of a population-based study on Longitudinal Outcomes of Children with Hearing Impairment (LOCHI). STUDY SAMPLE: Two hundred and thirty-two children who were fitted according to either the National Acoustic Laboratories (NAL) or the Desired Sensation Level (DSL) prescription...
October 3, 2017: International Journal of Audiology
Qian-Jie Fu, John J Galvin, Xiaosong Wang
Advances in cochlear implant (CI) technology allow for acoustic and electric hearing to be combined within the same ear (electric-acoustic stimulation, or EAS) and/or across ears (bimodal listening). Integration efficiency (IE; the ratio between observed and predicted performance for acoustic-electric hearing) can be used to estimate how well acoustic and electric hearing are combined. The goal of this study was to evaluate factors that affect IE in EAS and bimodal listening. Vowel recognition was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal listening...
October 2, 2017: Scientific Reports
Gangyi Feng, Zhenzhong Gan, Suiping Wang, Patrick C M Wong, Bharath Chandrasekaran
A significant neural challenge in speech perception includes extracting discrete phonetic categories from continuous and multidimensional signals despite varying task demands and surface-acoustic variability. While neural representations of speech categories have been previously identified in frontal and posterior temporal-parietal regions, the task dependency and dimensional specificity of these neural representations are still unclear. Here, we asked native Mandarin participants to listen to speech syllables carrying 4 distinct lexical tone categories across passive listening, repetition, and categorization tasks while they underwent functional magnetic resonance imaging (fMRI)...
August 28, 2017: Cerebral Cortex
Antoine J Shahin, Stanley Shen, Jess R Kerlin
We examined the relationship between tolerance for audiovisual onset asynchrony (AVOA) and the spectrotemporal fidelity of the spoken words and the speaker's mouth movements. In two experiments that only varied in the temporal order of sensory modality, visual speech leading (exp1) or lagging (exp2) acoustic speech, participants watched intact and blurred videos of a speaker uttering trisyllabic words and nonwords that were noise vocoded with 4-, 8-, 16-, and 32-channels. They judged whether the speaker's mouth movements and the speech sounds were in-sync or out-of-sync...
2017: Language, Cognition and Neuroscience
Erin Conwell
Many approaches to early word learning posit that children assume a one-to-one mapping of form and meaning. However, children's early vocabularies contain homophones, words that violate that assumption. Children might learn such words by exploiting prosodic differences between homophone meanings that are associated with lemma frequency (Gahl, 2008). Such differences have not yet been documented in children's natural language experience and the exaggerated prosody of child-directed speech could either mask the subtle distinctions reported in adult-directed speech or enhance them...
2017: Language Learning and Development
Karine Schwarz, Anna Martha Vaitses Fontanari, Angelo Brandelli Costa, Bianca Machado Borba Soll, Dhiordan Cardoso da Silva, Anna Paula de Sá Villas-Bôas, Carla Aparecida Cielo, Gabriele Rodrigues Bastilha, Vanessa Veis Ribeiro, Maria Elza Kazumi Yamaguti Dorfmann, Maria Inês Rodrigues Lobato
Voice is an important gender marker in the transition process as a transgender individual accepts a new gender identity. The objectives of this study were to describe and relate aspects of a perceptual-auditory analysis and the fundamental frequency (F0) of male-to-female (MtF) transsexual individuals. A case-control study was carried out with individuals aged 19-52 years who attended the Gender Identity Program of the Hospital de Clínicas of Porto Alegre. Vocal recordings from the MtF transgender and cisgender individuals (vowel /a:/ and six phrases of Consensus Auditory Perceptual Evaluation Voice [CAPE-V]) were edited and randomly coded before storage in a Dropbox folder...
September 28, 2017: Journal of Voice: Official Journal of the Voice Foundation
Christine Turgeon, Pamela Trudeau-Fisette, Elizabeth Fitzpatrick, Lucie Ménard
In child cochlear implant (CI) users, early implantation generally results in highly intelligible speech. However, for some children, developing a high level of speech intelligibility may be problematic. Studies of speech production in CI users have principally been based on perceptual judgments and acoustic measures. Articulatory measures, such as those collected using ultrasound, provide the opportunity to more precisely evaluate what makes child CI users more intelligible. This study investigates speech production and intelligibility in children with CIs using acoustic and articulatory measures...
October 2017: International Journal of Pediatric Otorhinolaryngology
Letizia Guerzoni, Domenico Cuda
OBJECTIVE: To analyse the value of listening data logged in the speech processor for predicting early auditory and linguistic skills in children who received a cochlear implant in their first 2 years of life. STUDY DESIGN: Prospective, observational, non-randomized study. METHODS: Ten children with profound congenital sensorineural hearing loss were included in the study. The mean age at CI activation was 16.9 months (SD ± 7.2; range 10-24)...
October 2017: International Journal of Pediatric Otorhinolaryngology
Joshua J Green, Inge-Marie Eigsti
Emotional states can be conveyed by vocal cues such as pitch and intensity. Despite the ubiquity of cellular telephones, there is limited information on how vocal emotional states are perceived during cell-phone transmissions. Emotional utterances (neutral, happy, angry) were elicited from two female talkers and simultaneously recorded via microphone and cell-phone. Ten-step continua (neutral to happy, neutral to angry) were generated using the STRAIGHT algorithm. Analyses compared reaction time (RT) and emotion judgment as a function of recording type (microphone vs. cell-phone)...
September 2017: Journal of the Acoustical Society of America
Michael R Wirtzfeld, Nazanin Pourmand, Vijay Parsa, Ian C Bruce
Objective measures are commonly used in the development of speech coding algorithms as an adjunct to human subjective evaluation. Predictors of speech quality based on models of physiological or perceptual processing tend to perform better than measures based on simple acoustical properties. Here, a modeling method based on a detailed physiological model and a neurogram similarity measure is developed and optimized to predict the quality of an enhanced wideband speech dataset. A model capturing temporal modulations in neural activity up to 267 Hz was found to perform as well as or better than several existing objective quality measures...
September 2017: Journal of the Acoustical Society of America
Michelle A Parker, Stephanie A Borrie
OBJECTIVE: Vocal fry is a prevalent speech feature among college-aged women living in the United States. However, there is currently little consensus about how its use influences listener judgments of the speaker. This study investigated how vocal fry influences judgments of the intelligence and likability of young adult female speakers of American English while taking into account the surrounding acoustic-prosodic context, specifically voice pitch and speech rate. METHOD: Speech samples were obtained from eight American English-speaking females who presented with different combinations of voice pitch (low or high), speech rate (slow or fast), and vocal fry (presence or absence)...
September 25, 2017: Journal of Voice: Official Journal of the Voice Foundation
Katrin Neumann, Harald A Euler, Malte Kob, Alexander Wolff von Gudenberg, Anne-Lise Giraud, Tobias Weissgerber, Christian A Kell
PURPOSE: Speech in persons who stutter (PWS) is associated with disturbed prosody (speech melody and intonation), which may impact communication. The neural correlates of PWS' altered prosody during speaking are not known, neither is how a speech-restructuring therapy affects prosody at both a behavioral and a cerebral level. METHODS: In this fMRI study, we explored group differences in brain activation associated with the production of different kinds of prosody in 13 male adults who stutter (AWS) before, directly after, and at least 1 year after an effective intensive fluency-shaping treatment, in 13 typically fluent-speaking control participants (CP), and in 13 males who had spontaneously recovered from stuttering during adulthood (RAWS), while sentences were read aloud with 'neutral', instructed emotional (happy), and linguistically driven (questioning) prosody...
September 9, 2017: Journal of Fluency Disorders
Maria Hakonen, Patrick J C May, Iiro P Jääskeläinen, Emma Jokinen, Mikko Sams, Hannu Tiitinen
INTRODUCTION: We examined which brain areas are involved in the comprehension of acoustically distorted speech using an experimental paradigm where the same distorted sentence can be perceived at different levels of intelligibility. This change in intelligibility occurs via a single intervening presentation of the intact version of the sentence, and the effect lasts at least on the order of minutes. Since the acoustic structure of the distorted stimulus is kept fixed and only intelligibility is varied, this allows one to study brain activity related to speech comprehension specifically...
September 2017: Brain and Behavior
Luciano Mastronardi, Guglielmo Cacciotti, Raffaellino Roperto, Ettore Di Scipio
BACKGROUND: The goals of vestibular schwannoma (VS) microsurgery are maximal resection, facial nerve (FN) preservation and, in selected cases, hearing preservation (HP). Postoperative HP rates are related to clinical and radiographic factors: tumor size, preoperative hearing, hypertension, diabetes, and the presence or absence of preoperative tinnitus. In this retrospective review, we evaluated the influence of preoperative tinnitus on HP after VS surgery in patients with preoperative socially useful hearing (SUH)...
September 22, 2017: Journal of Neurosurgical Sciences

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Operators can be combined:

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
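As an illustration only (not QxMD's actual implementation), the semantics of these operators can be sketched in Python. The function names here are hypothetical; the logic mirrors the rules above: quoted phrases match exactly, a trailing asterisk matches word stems, a leading minus excludes, and multiple terms combine with AND.

```python
import re

def has_term(text, term):
    """Match one query term against text (case-insensitive).

    Supported syntax, as described in the tips above:
    - "quoted phrase" -> exact phrase match
    - stem*           -> word-stem (prefix) match
    - -term           -> exclusion (term must NOT appear)
    - plain word      -> whole-word match
    """
    text, term = text.lower(), term.lower()
    if term.startswith('-'):
        return not has_term(text, term[1:])
    if term.startswith('"') and term.endswith('"'):
        return term.strip('"') in text
    if term.endswith('*'):
        return re.search(r'\b' + re.escape(term[:-1]), text) is not None
    return re.search(r'\b' + re.escape(term) + r'\b', text) is not None

def matches_all(text, terms):
    """AND semantics: every term must match."""
    return all(has_term(text, t) for t in terms)

title = "Neurological outcomes in primary prevention of cancer"
print(matches_all(title, ['neuro*', '"primary prevention of cancer"']))  # True
print(matches_all(title, ['cardiac', '-cancer']))                        # False
```

OR semantics would be the same check with `any()` instead of `all()` across the parenthesized alternatives.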