Read by QxMD

Speech acoustics

https://www.readbyqxmd.com/read/28722648/perspectives-on-the-pure-tone-audiogram
#1
REVIEW
Frank E Musiek, Jennifer Shinn, Gail D Chermak, Doris-Eva Bamiou
BACKGROUND: The pure-tone audiogram, though fundamental to audiology, presents limitations, especially in the case of central auditory involvement. Advances in auditory neuroscience underscore the considerably larger role of the central auditory nervous system (CANS) in hearing and related disorders. Given the availability of behavioral audiological tests and electrophysiological procedures that can provide better insights as to the function of the various components of the auditory system, this perspective piece reviews the limitations of the pure-tone audiogram and notes some of the advantages of other tests and procedures used in tandem with the pure-tone threshold measurement...
July 2017: Journal of the American Academy of Audiology
https://www.readbyqxmd.com/read/28717151/what-drives-sound-symbolism-different-acoustic-cues-underlie-sound-size-and-sound-shape-mappings
#2
Klemens Knoeferle, Jixing Li, Emanuela Maggioni, Charles Spence
Sound symbolism refers to the non-arbitrary mappings that exist between phonetic properties of speech sounds and their meaning. Despite there being an extensive literature on the topic, the acoustic features and psychological mechanisms that give rise to sound symbolism are not, as yet, altogether clear. The present study was designed to investigate whether different sets of acoustic cues predict size and shape symbolism, respectively. In two experiments, participants judged whether a given consonant-vowel speech sound was large or small, round or angular, using a size or shape scale...
July 17, 2017: Scientific Reports
https://www.readbyqxmd.com/read/28716965/neural-tuning-to-low-level-features-of-speech-throughout-the-perisylvian-cortex
#3
Julia Berezutskaya, Zachary V Freudenburg, Umut Güçlü, Marcel A J van Gerven, Nick F Ramsey
Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus towards anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study we investigate what happens to these neural representations past the superior temporal gyrus, and how they engage higher-level language processing areas, such as inferior frontal gyrus...
July 17, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28715718/when-speaker-identity-is-unavoidable-neural-processing-of-speaker-identity-cues-in-natural-speech
#4
Alba Tuninetti, Kateřina Chládková, Varghese Peter, Niels O Schiller, Paola Escudero
The acoustic properties of speech sounds vary widely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language...
July 14, 2017: Brain and Language
https://www.readbyqxmd.com/read/28714530/acoustic-changes-in-the-speech-of-children-with-cerebral-palsy-following-an-intensive-program-of-dysarthria-therapy
#5
Lindsay Pennington, Eftychia Lombardo, Nick Steen, Nick Miller
BACKGROUND: The speech intelligibility of children with dysarthria and cerebral palsy has been observed to increase following therapy focusing on respiration and phonation. AIMS: To determine if speech intelligibility change following intervention is associated with change in acoustic measures of voice. METHODS & PROCEDURES: We recorded 16 young people with cerebral palsy and dysarthria (nine girls; mean age 14 years, SD = 2; nine spastic type, two dyskinetic, four mixed; one Worster-Drought) producing speech in two conditions (single words, connected speech) twice before and twice after therapy focusing on respiration, phonation and rate...
July 17, 2017: International Journal of Language & Communication Disorders
https://www.readbyqxmd.com/read/28712601/the-effects-of-uvulopalatal-flap-operation-on-speech-nasalance-and-the-acoustic-parameters-of-the-final-nasal-consonants
#6
Soo Kyoung Park, Yong Soo Lee, Young Ae Kang, Jun Xu, Ki Sang Rha, Yong Min Kim
OBJECTIVE: The acoustic characteristics of voice are determined by the source of the sound and shape of the vocal tract. Various anatomical changes after uvulopalatal flap (UPF) operation can change nasalance and/or other voice characteristics. Our aim was to explore the possible effects of UPF creation on speech nasalance and the resonatory features of the final nasal consonants, and thus voice characteristics. METHODS: A total of 30 patients (26 males, 4 females) with obstructive sleep apnea who underwent UPF operation were recruited...
July 13, 2017: Auris, Nasus, Larynx
https://www.readbyqxmd.com/read/28712469/subcortical-contributions-to-motor-speech-phylogenetic-developmental-clinical
#7
REVIEW
W Ziegler, H Ackermann
Vocal learning is an exclusively human trait among primates. However, songbirds demonstrate behavioral features resembling human speech learning. Two circuits have a preeminent role in this human behavior; namely, the corticostriatal and the cerebrocerebellar motor loops. While the striatal contribution can be traced back to the avian anterior forebrain pathway (AFP), the sensorimotor adaptation functions of the cerebellum appear to be human specific in acoustic communication. This review contributes to an ongoing discussion on how birdsong translates into human speech...
July 13, 2017: Trends in Neurosciences
https://www.readbyqxmd.com/read/28706081/brief-stimulus-exposure-fully-remediates-temporal-processing-deficits-induced-by-early-hearing-loss
#8
David B Green, Michelle M Mattingly, Yi Ye, Jennifer D Gay, Merri J Rosen
In childhood, partial hearing loss can produce prolonged deficits in speech perception and temporal processing. However, early therapeutic interventions targeting temporal processing may improve later speech-related outcomes. Gap detection is a measure of auditory temporal resolution that relies on auditory cortex (ACx), and early auditory deprivation alters intrinsic and synaptic properties in ACx. Thus, early deprivation should induce deficits in gap detection, which should be reflected in ACx gap sensitivity...
July 13, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28695624/life-quality-improvement-in-hoarse-patients-with-early-glottic-cancer-after-transoral-laser-microsurgery
#9
Li-Jen Hsin, Wan-Ni Lin, Tuan-Jen Fang, Li-Ang Lee, Chung-Jan Kang, Bing-Shan Huang, Chien-Yu Lin, Kang-Hsing Fan, Ngan-Ming Tsang, Cheng-Lung Hsu, Joseph Tung-Chieh Chang, Chun-Ta Liao, Tzu-Chen Yen, Kai-Ping Chang, Hsiu-Feng Chuang, Hsueh-Yu Li
BACKGROUND: The purpose of this study was to evaluate the recovery kinetics of voice and quality of life (QOL) over time in patients with early glottic cancer who underwent transoral laser microsurgery (TLM). METHODS: A prospective cohort study was conducted in which acoustic and aerodynamic voice assessments were performed and QOL was analyzed using health-related questionnaires (European Organization for Research and Treatment of Cancer Quality of Life Questionnaire-Core 30-questions [EORTC-QLQ-C30] and European Organization for Research and Treatment of Cancer Quality of Life Questionnaire-Head and Neck 35-questions [EORTC-QLQ-H&N35]), administered at designated times...
July 11, 2017: Head & Neck
https://www.readbyqxmd.com/read/28692932/axon-guidance-pathways-served-as-common-targets-for-human-speech-language-evolution-and-related-disorders
#10
Huimeng Lei, Zhangming Yan, Xiaohong Sun, Yue Zhang, Jianhong Wang, Caihong Ma, Qunyuan Xu, Rui Wang, Erich D Jarvis, Zhirong Sun
Humans and several nonhuman species share the rare ability of modifying acoustic and/or syntactic features of sounds produced, i.e. vocal learning, an important neurobiological and behavioral substrate of human speech/language. This convergent trait was suggested to be associated with significant genomic convergence and best manifested at the ROBO-SLIT axon guidance pathway. Here we verified the significance of such genomic convergence and assessed its functional relevance to human speech/language using human genetic variation data...
July 7, 2017: Brain and Language
https://www.readbyqxmd.com/read/28682084/recovery-from-forward-masking-in-cochlear-implant-listeners-depends-on-stimulation-mode-level-and-electrode-location
#11
Monita Chatterjee, Aditya M Kulkarni
Psychophysical recovery from forward masking was measured in adult cochlear implant users of Cochlear™ and Advanced Bionics™ devices, in monopolar and in focused (bipolar and tripolar) stimulation modes, at four electrode sites across the arrays, and at two levels (loudness balanced across modes and electrodes). Results indicated a steeper psychophysical recovery from forward masking in monopolar over bipolar and tripolar modes, modified by differential effects of electrode and level. The interactions between factors varied somewhat across devices...
May 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28679275/an-investigation-of-the-systematic-use-of-spectral-information-in-the-determination-of-apparent-talker-height
#12
Santiago Barreda
The perception of apparent-talker height is mostly determined by the fundamental frequency (f0) and spectral characteristics of a voice. Although it is traditionally thought that spectral cues affect apparent-talker height by influencing apparent vocal-tract length, a recent experiment [Barreda (2016). J. Phon. 55, 1-18] suggests that apparent-talker height can vary significantly within-talker on the basis of phonemically-determined spectral variability. In this experiment, listeners were asked to estimate the height of 10 female talkers based on manipulated natural productions of bVd words containing one of /i æ ɑ u ɝ/...
June 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28679261/long-short-term-memory-for-speaker-generalization-in-supervised-speech-separation
#13
Jitong Chen, DeLiang Wang
Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. To improve speaker generalization, a separation model based on long short-term memory (LSTM) is proposed, which naturally accounts for temporal dynamics of speech...
June 2017: Journal of the Acoustical Society of America
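The abstract above formulates supervised separation as learning a time-frequency mask from features of noisy speech. As a concrete illustration of the kind of training target involved (not the authors' LSTM model), here is a minimal NumPy sketch of the ideal ratio mask (IRM) computed from toy speech and noise signals; the window sizes and the signals themselves are illustrative assumptions.

```python
import numpy as np

def stft_mag(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a simple framed FFT (Hann window)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

def ideal_ratio_mask(speech, noise, eps=1e-10):
    """IRM: per time-frequency bin, speech energy over total energy.
    A common supervised training target for separation models."""
    s, n = stft_mag(speech), stft_mag(noise)
    return s**2 / (s**2 + n**2 + eps)

rng = np.random.default_rng(0)
t = np.arange(4000) / 8000.0
speech = np.sin(2 * np.pi * 440 * t)       # toy "speech": a 440 Hz tone
noise = 0.5 * rng.standard_normal(len(t))  # toy broadband noise
mask = ideal_ratio_mask(speech, noise)
# The mask is near 1 where speech dominates and near 0 elsewhere.
print(mask.shape)
```

In a full system, a network such as the LSTM described above would be trained to predict this mask from features of the noisy mixture, and the predicted mask would then be applied to the mixture's spectrogram before resynthesis.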
https://www.readbyqxmd.com/read/28679252/acoustic-correlates-of-sexual-orientation-and-gender-role-self-concept-in-women-s-speech
#14
Sven Kachel, Adrian P Simpson, Melanie C Steffens
Compared to studies of male speakers, relatively few studies have investigated acoustic correlates of sexual orientation in women. The present investigation focuses on shedding more light on intra-group variability in lesbians and straight women by using a fine-grained analysis of sexual orientation and collecting data on psychological characteristics (e.g., gender-role self-concept). For a large-scale women's sample (overall n = 108), recordings of spontaneous and read speech were analyzed for median fundamental frequency and acoustic vowel space features...
June 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28679008/the-effect-of-background-noise-on-intelligibility-of-dysphonic-speech
#15
Keiko Ishikawa, Suzanne Boyce, Lisa Kelchner, Maria Golla Powell, Heidi Schieve, Alessandro de Alarcon, Sid Khosla
Purpose: The aim of this study is to determine the effect of background noise on the intelligibility of dysphonic speech and to examine the relationship between intelligibility in noise and an acoustic measure of dysphonia: cepstral peak prominence (CPP). Method: A study of speech perception was conducted using speech samples from 6 adult speakers with typical voice and 6 adult speakers with dysphonia. Speech samples were presented to 30 listeners with typical hearing in 3 noise conditions: quiet, a signal-to-noise ratio (SNR) of +5 dB, and an SNR of 0 dB...
July 5, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
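The listening conditions above are built by mixing speech with noise at controlled levels. Here is a minimal sketch of that mixing step, assuming the SNR values are expressed in dB and using synthetic signals; the study's actual stimuli and presentation procedure are not reproduced.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db,
    then add it to the speech (a standard way to build SNR conditions)."""
    p_speech = np.mean(speech**2)
    p_noise = np.mean(noise**2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 300 * np.arange(8000) / 8000)
noise = rng.standard_normal(8000)
for snr in (5, 0):  # the SNR +5 dB and SNR 0 dB conditions
    mixed = mix_at_snr(speech, noise, snr)
    achieved = 10 * np.log10(np.mean(speech**2) /
                             np.mean((mixed - speech)**2))
    print(f"target {snr} dB, achieved {achieved:.2f} dB")
```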
https://www.readbyqxmd.com/read/28678831/robustness-of-speech-intelligibility-at-moderate-levels-of-spectral-degradation
#16
Sierra Broussard, Gregory Hickok, Kourosh Saberi
The current study investigated how amplitude and phase information differentially contribute to speech intelligibility. Listeners performed a word-identification task after hearing spectrally degraded sentences. Each stimulus was degraded by first dividing it into segments, then the amplitude and phase components of each segment were decorrelated independently to various degrees relative to those of the original segment. Segments were then concatenated into their original sequence to present to the listener...
2017: PloS One
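The degradation procedure described above (divide the stimulus into segments, decorrelate each segment's amplitude and phase spectra independently, then re-concatenate) can be sketched as follows. This is a simplified illustration with an assumed segment length and linear blending scheme, not the authors' exact method.

```python
import numpy as np

def degrade(signal, seg_len, amp_mix, phase_mix, rng):
    """Split the signal into segments; in each segment's spectrum, blend
    amplitude and phase independently with random values (0 = intact,
    1 = fully decorrelated), then reconstruct and concatenate."""
    out = []
    for i in range(0, len(signal) - seg_len + 1, seg_len):
        spec = np.fft.rfft(signal[i:i + seg_len])
        amp, phase = np.abs(spec), np.angle(spec)
        rand_amp = rng.uniform(0, amp.max(), size=amp.shape)
        rand_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
        amp = (1 - amp_mix) * amp + amp_mix * rand_amp
        phase = (1 - phase_mix) * phase + phase_mix * rand_phase
        out.append(np.fft.irfft(amp * np.exp(1j * phase), n=seg_len))
    return np.concatenate(out)

rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 220 * np.arange(4096) / 8000)
intact = degrade(x, 512, 0.0, 0.0, rng)     # mix = 0 leaves the signal unchanged
scrambled = degrade(x, 512, 0.5, 0.5, rng)  # moderate spectral degradation
print(len(scrambled) == len(x))
```

Intermediate mixing values give the graded levels of spectral degradation whose effect on word identification the study measures.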
https://www.readbyqxmd.com/read/28678819/compensations-to-auditory-feedback-perturbations-in-congenitally-blind-and-sighted-speakers-acoustic-and-articulatory-data
#17
Pamela Trudeau-Fisette, Mark Tiede, Lucie Ménard
This study investigated the effects of visual deprivation on the relationship between speech perception and production by examining compensatory responses to real-time perturbations in auditory feedback. Specifically, acoustic and articulatory data were recorded while sighted and congenitally blind French speakers produced several repetitions of the vowel /ø/. At the acoustic level, blind speakers produced larger compensatory responses to altered vowels than their sighted peers. At the articulatory level, blind speakers also produced larger displacements of the upper lip, the tongue tip, and the tongue dorsum in compensatory responses...
2017: PloS One
https://www.readbyqxmd.com/read/28658285/shared-acoustic-codes-underlie-emotional-communication-in-music-and-speech-evidence-from-deep-transfer-learning
#18
Eduardo Coutinho, Björn Schuller
Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an affective sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems...
2017: PloS One
https://www.readbyqxmd.com/read/28657955/the-hannover-coupler-controlled-static-prestress-in-round-window-stimulation-with-the-floating-mass-transducer
#19
Mathias Müller, Rolf Salcher, Thomas Lenarz, Hannes Maier
INTRODUCTION: Stimulation of the cochlear round window (RW) with the floating mass transducer (FMT) still suffers from large variation in clinical outcomes. Besides the geometric mismatch between RW and FMT diameter, which is a known limiting factor in achieving optimal coupling between actuator and RW membrane, the applied static force between FMT and RW is usually undefined. In this study, the feasibility and efficacy of a specially designed FMT coupler permitting application of static preloads to the RW membrane to optimize FMT-RW coupling were investigated...
June 27, 2017: Otology & Neurotology
https://www.readbyqxmd.com/read/28655050/enhancing-intervention-for-residual-rhotic-errors-via-app-delivered-biofeedback-a-case-study
#20
Tara McAllister Byun, Heather Campbell, Helen Carey, Wendy Liang, Tae Hong Park, Mario Svirsky
Purpose: Recent research suggests that visual-acoustic biofeedback can be an effective treatment for residual speech errors, but adoption remains limited due to barriers including high cost and lack of familiarity with the technology. This case study reports results from the first participant to complete a course of visual-acoustic biofeedback using a not-for-profit iOS app, Speech Therapist's App for /r/ Treatment. Method: App-based biofeedback treatment for rhotic misarticulation was provided in weekly 30-min sessions for 20 weeks...
June 22, 2017: Journal of Speech, Language, and Hearing Research: JSLHR