Read by QxMD

Speech acoustics

Jonathan L McJunkin, Nedim Durakovic, Jacques Herzog, Craig A Buchman
OBJECTIVE: To describe outcomes from cochlear implantation with a new, slim modiolar electrode array. STUDY DESIGN: Retrospective cohort study. SETTING: Tertiary referral centers. PATIENTS: Adult cochlear implant candidates. INTERVENTIONS: Cochlear implantation with CI532 (Cochlear Corp). MAIN OUTCOME MEASURES: Pre- and postoperative speech perception scores, operative details, and postoperative computed tomography (CT) reconstructions of array location...
January 2018: Otology & Neurotology
Edward J Golob, Jörg Lewald, Stephan Getzmann, Jeffrey R Mock
Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9)...
December 8, 2017: Scientific Reports
Hyunsoon Kim, Shinji Maeda, Kiyoshi Honda, Lise Crevier-Buchman
This paper aims to refine our understanding of the speech mechanism and laryngeal features involved in the Korean lenis (/p t k/), aspirated (/ph th kh/), and fortis (/p' t' k'/) plosives. For this purpose we made measurements using a new noninvasive technique called external lighting and sensing photoglottography (ePGG) as well as intra-oral air pressure (Pio) above the glottis, airflow, and acoustic data. From simultaneous recordings of the experimental data, we were able to quantify the laryngeal-oral coordination of glottal opening and a consonant release, and the covariance of airflow peak and duration of aspiration with glottal opening...
December 5, 2017: Phonetica
Daniel R Lametti, Harriet J Smith, Phoebe Freidin, Kate E Watkins
The motor cortex and cerebellum are thought to be critical for learning and maintaining motor behaviors. Here we use transcranial direct current stimulation (tDCS) to test the role of the motor cortex and cerebellum in sensorimotor learning in speech. During productions of "head," "bed," and "dead," the first formant of the vowel sound was altered in real time toward the first formant of the vowel sound in "had," "bad," and "dad." Compensatory changes in first and second formant production were used as a measure of motor adaptation...
December 6, 2017: Journal of Cognitive Neuroscience
Dirk Oetting, Volker Hohmann, Jens-E Appell, Birger Kollmeier, Stephan D Ewert
OBJECTIVES: Normalizing perceived loudness is an important rationale for gain adjustments in hearing aids. It has been demonstrated that gains required for restoring normal loudness perception for monaural narrowband signals can lead to higher-than-normal loudness in listeners with hearing loss, particularly for binaural broadband presentation. The present study presents a binaural bandwidth-adaptive dynamic compressor (BBDC) that can apply different gains for narrow- and broadband signals...
November 27, 2017: Ear and Hearing
Mahmoud E Elbashti, Yuka I Sumita, Mariko Hattori, Amel M Aswehlee, Hisashi Taniguchi
PURPOSE: Accurate evaluation of speech characteristics through formant frequency measurement is important for proper speech rehabilitation in patients after maxillectomy. This study aimed to evaluate the utility of digital acoustic analysis and vowel pentagon space for the prediction of speech ability after maxillectomy, by comparing the acoustic characteristics of vowel articulation in three classes of maxillectomy defects. MATERIALS AND METHODS: Aramany's classifications I, II, and IV were used to group 27 male patients after maxillectomy...
December 6, 2017: Journal of Prosthodontics: Official Journal of the American College of Prosthodontists
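The "vowel pentagon space" used in studies like the one above is typically quantified as the area of the polygon formed by each vowel's mean (F1, F2) point. A minimal sketch of that computation via the shoelace formula — the formant values below are hypothetical, purely for illustration:

```python
def polygon_area(points):
    """Area of a polygon from ordered (x, y) vertices (shoelace formula)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical mean (F1, F2) values in Hz for /i e a o u/, ordered
# around the vowel space so the polygon does not self-intersect.
pentagon = [(300, 2300), (450, 2000), (750, 1300), (450, 800), (320, 700)]
vowel_space_area = polygon_area(pentagon)  # in Hz^2
```

On this view, a smaller polygon area after maxillectomy would indicate reduced articulatory distinctiveness among the vowels.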
Yuka I Sumita, Mariko Hattori, Mai Murase, Mahmoud E Elbashti, Hisashi Taniguchi
BACKGROUND: Among the functional disabilities that patients face following maxillectomy, speech impairment is a major factor influencing quality of life. Proper rehabilitation of speech, which may include prosthodontic and surgical treatment and speech therapy, requires accurate evaluation of speech intelligibility (SI). A simple, less time-consuming yet accurate evaluation is desirable both for maxillectomy patients and the various clinicians providing maxillofacial treatment. OBJECTIVE: This study sought to determine the utility of digital acoustic analysis of vowels for the prediction of SI in maxillectomy patients, based on a comprehensive understanding of speech production in the vocal tract of maxillectomy patients and its perception...
December 4, 2017: Journal of Oral Rehabilitation
Anna Rzepakowska, Ewelina Sielska-Badurek, Raul Cruz, Maria Sobol, Ewa Osuch-Wójcikiewicz, Kazimierz Niemczyk
The aim of the study was to compare voice quality and quality of life after microdirect laryngoscopy in three histopathological patient categories: benign, precancerous, and malignant glottic lesions. A total of 137 patients treated with microdirect laryngoscopy were included in the study. Each patient was evaluated with a multidimensional protocol before and 3, 6, and 12 months after treatment. Final 1-year evaluations were achieved in 74.5% (102). The assessment included laryngovideostroboscopy (LVS), perceptual (GRBAS) grading, aerodynamic measures including maximum phonation time and phonation quotient, acoustic measurements (Kay Elemetrics Multi-Speech program), Voice Handicap Index (VHI), the Voice-Related Quality of Life questionnaire, and the World Health Organization Quality of Life Scale-Brief Version (WHOQoL-BREF)...
November 30, 2017: Journal of Voice: Official Journal of the Voice Foundation
Johannes Zaar, Nicola Schmitt, Ralph-Peter Derleth, Mishaela DiNino, Julie G Arenberg, Torsten Dau
This study investigated the influence of hearing-aid (HA) and cochlear-implant (CI) processing on consonant perception in normal-hearing (NH) listeners. Measured data were compared to predictions obtained with a speech perception model [Zaar and Dau (2017). J. Acoust. Soc. Am. 141, 1051-1064] that combines an auditory processing front end with a correlation-based template-matching back end. In terms of HA processing, effects of strong nonlinear frequency compression and impulse-noise suppression were measured in 10 NH listeners using consonant-vowel stimuli...
November 2017: Journal of the Acoustical Society of America
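The correlation-based template-matching back end referenced above can be sketched in miniature: each response candidate has a stored internal representation (template), and a presented token is assigned to the label whose template correlates best with it. The feature vectors and labels below are hypothetical; the actual model of Zaar and Dau operates on auditory front-end outputs, not raw vectors:

```python
import math

def normalized_correlation(a, b):
    """Cosine-style correlation between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_template(observation, templates):
    """Return the label whose template correlates best with the observation."""
    return max(templates,
               key=lambda lab: normalized_correlation(observation, templates[lab]))

# Hypothetical internal representations for two consonant-vowel tokens
templates = {"ba": [1.0, 0.2, 0.0], "da": [0.1, 1.0, 0.3]}
predicted = match_template([0.9, 0.3, 0.1], templates)  # -> "ba"
```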
Ivy Hauser
Dispersion Theory [DT; Liljencrants and Lindblom (1972). Language 48(4), 839-862] claims that acoustically dispersed vowel inventories should be typologically common. Dispersion is often quantified using the triangle area between three mean vowel formant points. This approach is problematic; it ignores distributions, which affect speech perception [Clayards, Tanenhaus, Aslin, and Jacobs (2008). Cognition 108, 804-809]. This letter proposes a revised metric for calculating dispersion which incorporates covariance...
November 2017: Journal of the Acoustical Society of America
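The letter's exact metric is not given in the abstract, but the role of covariance can be illustrated. A mean-only measure treats two inventories with identical category means as equally dispersed; a distance scaled by within-category variance (here a diagonal-covariance Mahalanobis distance, with hypothetical formant statistics) penalizes broad, overlapping distributions:

```python
import math

def mahalanobis_diag(mean_a, mean_b, variances):
    """Distance between two category means, scaled per dimension by
    within-category variance (diagonal-covariance Mahalanobis distance)."""
    return math.sqrt(sum((a - b) ** 2 / v
                         for a, b, v in zip(mean_a, mean_b, variances)))

mean_i, mean_e = (300.0, 2300.0), (450.0, 2000.0)  # hypothetical (F1, F2) means
tight = (50.0 ** 2, 150.0 ** 2)   # narrow within-category distributions
wide = (100.0 ** 2, 300.0 ** 2)   # broad, overlapping distributions

# Identical mean separation, but broader distributions yield a lower
# effective dispersion score:
more_dispersed = mahalanobis_diag(mean_i, mean_e, tight) > mahalanobis_diag(mean_i, mean_e, wide)
```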
Ping Tang, Nan Xu Rattanasone, Ivan Yuen, Katherine Demuth
Mandarin lexical tones are modified in both infant-directed speech (IDS) and Lombard speech, resulting in tone hyperarticulation. However, it is unclear if these registers also alter contextual tones (neutral tone and tone sandhi) and if such phonetic modification might affect acquisition of these tones. This study therefore examined how neutral tone and tone sandhi are realized in IDS, and how their acoustic manifestations compare with those in Lombard speech, where the communicative needs of listeners differ...
November 2017: Journal of the Acoustical Society of America
Hao Huang, Haihua Xu, Ying Hu, Gang Zhou
Goodness of pronunciation (GOP) is the most widely used method for automatic mispronunciation detection. This paper proposes a transfer learning approach to GOP-based mispronunciation detection that applies maximum F1-score criterion (MFC) training to deep neural network (DNN)-hidden Markov model acoustic models. Rather than train the whole network using MFC, a DNN is used whose hidden layers are borrowed from native speech recognition, with only the softmax layer trained according to the MFC objective function...
November 2017: Journal of the Acoustical Society of America
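The standard GOP score (not the MFC-trained variant proposed in the paper) is commonly computed from frame-level phone posteriors as the average log posterior of the intended phone minus that of the best competing phone over the segment. A minimal sketch, with hypothetical posterior values:

```python
def gop(frame_log_posteriors, target_phone):
    """Goodness of Pronunciation for one phone segment.

    frame_log_posteriors: list of {phone: log posterior} dicts, one per
    frame, e.g. taken from a DNN softmax layer."""
    n = len(frame_log_posteriors)
    target = sum(f[target_phone] for f in frame_log_posteriors) / n
    best = sum(max(f.values()) for f in frame_log_posteriors) / n
    return target - best  # 0 = canonical; strongly negative = likely mispronounced

# Hypothetical two-frame segment where /a/ was intended but /e/ dominates
frames = [{"a": -2.5, "e": -0.2}, {"a": -1.8, "e": -0.4}]
score = gop(frames, "a")  # negative, flagging a possible mispronunciation
```

Thresholding this score per phone is what turns it into a mispronunciation detector.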
Keiko Ishikawa, Joel MacAuslan, Suzanne Boyce
The goal of clinical speech analysis is to describe abnormalities in speech production that affect a speaker's intelligibility. Landmark analysis identifies abrupt changes in a speech signal and classifies them according to their acoustic profiles. These acoustic markers, called landmarks, may help describe intelligibility deficits in disordered speech. As a first step toward clinical application of landmark analysis, the present study describes the expression of landmarks in normal speech. Results of the study revealed that syllabic, glottal, and burst landmarks account for 94% of all landmarks, and suggest that the effect of gender needs to be considered in the analysis...
November 2017: Journal of the Acoustical Society of America
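Landmark detection in this tradition flags abrupt changes in band energies. A heavily simplified single-band sketch is below; the threshold and framing are hypothetical, and the real analysis classifies landmarks by acoustic profile rather than by energy jumps alone:

```python
import math

def abrupt_energy_changes(frames, threshold_db=6.0):
    """Indices of frames whose short-time log energy jumps by at least
    threshold_db relative to the previous frame (candidate landmarks)."""
    energies = [10.0 * math.log10(sum(s * s for s in f) + 1e-12) for f in frames]
    return [i for i in range(1, len(energies))
            if abs(energies[i] - energies[i - 1]) >= threshold_db]

# Silence followed by a sudden onset is flagged as a candidate landmark
frames = [[0.0] * 160, [0.0] * 160, [0.5] * 160]
candidates = abrupt_energy_changes(frames)  # -> [2]
```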
Hui Hong, Ting Lu, Xiaoyu Wang, Yuan Wang, Jason Tait Sanchez
Auditory brainstem neurons are functionally primed to fire action potentials (APs) at markedly high rates in order to rapidly encode acoustic information. This specialization is critical for survival and the comprehension of behaviourally relevant communication functions, including sound localization and understanding speech in noise. Here, we investigated underlying ion channel mechanisms essential for high-rate AP firing in neurons of the chicken nucleus magnocellularis (NM) - the avian analog of bushy cells of the mammalian anteroventral cochlear nucleus...
November 28, 2017: Journal of Physiology
Yaneri A Ayala, Alexandre Lehmann, Hugo Merchant
The extraction and encoding of acoustical temporal regularities are fundamental for human cognitive auditory abilities such as speech or beat entrainment. Because the comparison of the neural sensitivity to temporal regularities between human and animals is fundamental to relate non-invasive measures of auditory processing to their neuronal basis, here we compared the neural representation of auditory periodicities between human and non-human primates by measuring scalp-recorded frequency-following response (FFR)...
November 30, 2017: Scientific Reports
Minu George Thoppil, C Santhosh Kumar, Anand Kumar, John Amose
Background: Dysarthria refers to a group of disorders resulting from disturbances in muscular control over the speech mechanism due to damage to the central or peripheral nervous system. There is wide subjective variability in the assessment of dysarthria between clinicians. In our study, we tried to identify patterns among types of dysarthria by acoustic analysis and to avoid intersubject variability. Objectives: (1) Pattern recognition among types of dysarthria with a software tool and comparison with normal subjects...
October 2017: Annals of Indian Academy of Neurology
Alexandra Basilakos, Grigori Yourganov, Dirk-Bart den Ouden, Daniel Fogerty, Chris Rorden, Lynda Feenaughty, Julius Fridriksson
Purpose: Apraxia of speech (AOS) is a consequence of stroke that frequently co-occurs with aphasia. Its study is limited by difficulties with its perceptual evaluation and dissociation from co-occurring impairments. This study examined the classification accuracy of several acoustic measures for the differential diagnosis of AOS in a sample of stroke survivors. Method: Fifty-seven individuals were included (mean age = 60.8 ± 10.4 years; 21 women, 36 men; mean months poststroke = 54...
November 27, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
Yue Dong, Kaan E Raif, Sarah C Determan, Yan Gai
Decoding spatial attention based on brain signals has wide applications in brain-computer interface (BCI). Previous BCI systems mostly relied on visual patterns or auditory stimulation (e.g., loudspeakers) to evoke synchronous brain signals. There would be difficulties to cover a large range of spatial locations with such a stimulation protocol. The present study explored the possibility of using virtual acoustic space and a visual-auditory matching paradigm to overcome this issue. The technique has the flexibility of generating sound stimulation from virtually any spatial location...
November 2017: Physiological Reports
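Virtual acoustic space of the kind described above is typically built by filtering sounds with head-related transfer functions. The dominant horizontal-plane cue these encode, the interaural time difference, can be approximated with the classical Woodworth spherical-head formula; the head radius below is a nominal assumption, not a value from the study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth approximation of the interaural time difference (seconds)
    for a source at the given azimuth on a spherical head."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source at 90 degrees azimuth yields roughly 0.66 ms of delay
itd_90 = woodworth_itd(90.0)
```

Rendering such cues over headphones is what lets a BCI paradigm place virtual sources at arbitrary locations without an array of loudspeakers.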
(no author information available yet)
Biomechanical models of the oropharynx are beneficial to treatment planning of speech impediments by providing valuable insight into the speech function such as motor control. In this paper, we develop a subject-specific model of the oropharynx and investigate its utility in speech production. Our approach adapts a generic tongue-jaw-hyoid model (Stavness et al. 2011) to fit and track dynamic volumetric MRI data of a normal speaker, subsequently coupled to a source-filter based acoustic synthesizer. We demonstrate our model's ability to track tongue tissue motion, simulate plausible muscle activation patterns, as well as generate acoustic results that have comparable spectral features to the associated recorded audio...
2017: Computer Methods in Biomechanics and Biomedical Engineering. Imaging & Visualization
Yi Lin, Ruolin Fan, Lei Mo
The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers' speech-specific capabilities, rather than to their psychoacoustic abilities. However, we suspect that the selection of participants and the parameters of the sound stimuli might not have been appropriate...
2017: PloS One