Speech acoustics

https://www.readbyqxmd.com/read/29471380/dysarthria-in-mandarin-speaking-children-with-cerebral-palsy-speech-subsystem-profiles
#1
Li-Mei Chen, Katherine C Hustad, Ray D Kent, Yu Ching Lin
Purpose: This study explored the speech characteristics of Mandarin-speaking children with cerebral palsy (CP) and typically developing (TD) children to determine (a) how children in the 2 groups may differ in their speech patterns and (b) the variables correlated with speech intelligibility for words and sentences. Method: Data from 6 children with CP and a clinical diagnosis of moderate dysarthria were compared with data from 9 TD children using a multiple speech subsystems approach...
February 22, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29471373/deep-brain-stimulation-of-the-subthalamic-nucleus-parameter-optimization-for-vowel-acoustics-and-speech-intelligibility-in-parkinson-s-disease
#2
Thea Knowles, Scott Adams, Anita Abeyesekera, Cynthia Mancinelli, Greydon Gilmore, Mandar Jog
Purpose: The settings of 3 electrical stimulation parameters were adjusted in 12 speakers with Parkinson's disease (PD) with deep brain stimulation of the subthalamic nucleus (STN-DBS) to examine their effects on vowel acoustics and speech intelligibility. Method: Participants were tested under permutations of low, mid, and high STN-DBS frequency, voltage, and pulse width settings. At each session, participants recited a sentence. Acoustic characteristics of vowel production were extracted, and naive listeners provided estimates of speech intelligibility...
February 22, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29466556/acoustic-predictors-of-pediatric-dysarthria-in-cerebral-palsy
#3
Kristen M Allison, Katherine C Hustad
Purpose: The objectives of this study were to identify acoustic characteristics of connected speech that differentiate children with dysarthria secondary to cerebral palsy (CP) from typically developing children and to identify acoustic measures that best detect dysarthria in children with CP. Method: Twenty 5-year-old children with dysarthria secondary to CP were compared to 20 age- and sex-matched typically developing children on 5 acoustic measures of connected speech...
February 20, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29465335/fourteen-month-olds-sensitivity-to-acoustic-salience-in-minimal-pair-word-learning
#4
Stephanie L Archer, Suzanne Curtin
During the first two years of life, infants concurrently refine native-language speech categories and word learning skills. However, in the Switch Task, 14-month-olds do not detect minimal contrasts in a novel object-word pairing (Stager & Werker, 1997). We investigate whether presenting infants with acoustically salient contrasts (liquids) facilitates success in the Switch Task. The first two experiments demonstrate that acoustic differences boost infants' detection of contrasts. However, infants cannot detect the contrast when the segments are digitally shortened...
February 21, 2018: Journal of Child Language
https://www.readbyqxmd.com/read/29460262/physiologic-effects-of-voice-stimuli-in-conscious-and-unconscious-palliative-patients-a-pilot-study
#5
Kerstin Buchholz, Patrick Liebl, Christian Keinki, Natalie Herth, Jutta Huebner
BACKGROUND: Sounds and acoustic stimuli can have an effect on human beings. In medical care, sounds are often used as parts of therapies, e.g., in different types of music therapies. Human speech also greatly affects mental status. Although calming sounds and music are widely established in the medical field, clear evidence for the effect of sounds in palliative care is scarce, and data about the effects of the human voice in general are still missing. Thus, the aim of this study was to evaluate the effects of different voice stimuli on palliative patients...
February 19, 2018: Wiener Medizinische Wochenschrift
https://www.readbyqxmd.com/read/29457553/the-difficulty-of-articulatory-complexity
#6
Marianne Pouplier, Stefania Marin, Alexei Kochetov
In our commentary, we offer some support for the view that frequency rather than a language-independent definition of complexity is a main factor determining speech production in healthy adults. We further discuss the limits of defining articulatory complexity based on transcription data. If we want to gauge the impact of substantive constraints on speech production, context-specific production dynamics should be considered, as has been underscored by articulatory-acoustic work on speech errors.
October 2017: Cognitive Neuropsychology
https://www.readbyqxmd.com/read/29454176/efficacy-of-intensive-voice-feminisation-therapy-in-a-transgender-young-offender
#7
Sterling Quinn, Nathaniel Swain
Research suggests that transgender young offenders are a uniquely vulnerable caseload that may benefit from speech pathology intervention to help bring their voice into alignment with their gender identity. However, no previous studies have investigated treatment efficacy in this population. This study investigated the impact of intensive voice feminisation therapy targeting fundamental frequency and oral resonance in a 17-year-old transgender individual within a youth justice institution. Acoustic analysis, listener and self-ratings of vocal femininity, self-ratings of vocal satisfaction, a post-treatment structured interview, and pre- and post-treatment completion of the Transsexual Voice Questionnaire (TVQMtF) were utilised to determine treatment impact...
February 8, 2018: Journal of Communication Disorders
https://www.readbyqxmd.com/read/29451107/bilaterally-combined-electric-and-acoustic-hearing-in-mandarin-speaking-listeners-the-population-with-poor-residual-hearing
#8
Duo-Duo Tao, Ji-Sheng Liu, Zhen-Dong Yang, Blake S Wilson, Ning Zhou
The hearing loss criterion for cochlear implant candidacy in mainland China is extremely stringent (bilateral severe to profound hearing loss), resulting in few patients with substantial residual hearing in the nonimplanted ear. The main objective of the current study was to examine the benefit of bimodal hearing in typical Mandarin-speaking implant users, who have poorer residual hearing in the nonimplanted ear than the participants in previous English-language studies. Seventeen Mandarin-speaking bimodal users with pure-tone averages of ∼80 dB HL participated in the study...
January 2018: Trends in Hearing
https://www.readbyqxmd.com/read/29450493/the-impact-of-age-background-noise-semantic-ambiguity-and-hearing-loss-on-recognition-memory-for-spoken-sentences
#9
Margaret A Koeritzer, Chad S Rogers, Kristin J Van Engen, Jonathan E Peelle
Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible...
February 15, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
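The signal-to-noise ratios in the study above (+15 dB and +5 dB) describe how much louder the speech is than the 8-talker babble. As an illustration only, here is a minimal numpy sketch of how a noise track might be scaled to hit a target RMS-based SNR before mixing; the function name and the random stand-in signals are hypothetical, not the study's materials:

```python
import numpy as np

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Scale `noise` so that mixing it with `speech` yields the
    requested signal-to-noise ratio (dB, RMS convention)."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    # SNR_dB = 20*log10(speech_rms / noise_rms), so solve for the noise RMS.
    desired_noise_rms = speech_rms / (10 ** (target_snr_db / 20))
    return noise * (desired_noise_rms / noise_rms)

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a speech waveform
babble = rng.standard_normal(16000)   # stand-in for 8-talker babble

scaled = scale_noise_to_snr(speech, babble, 5.0)
snr = 20 * np.log10(np.sqrt(np.mean(speech ** 2)) / np.sqrt(np.mean(scaled ** 2)))
print(round(snr, 2))  # 5.0
```

The mixture itself would then simply be `speech + scaled`.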
https://www.readbyqxmd.com/read/29449062/acoustic-and-perceptual-analyses-of-adductor-spasmodic-dysphonia-in-mandarin-speaking-chinese
#10
Zhipeng Chen, Jingyuan Li, Qingyi Ren, Pingjiang Ge
OBJECTIVE: The objective of this study was to examine the perceptual structure and acoustic characteristics of speech of patients with adductor spasmodic dysphonia (ADSD) in Mandarin. STUDY DESIGN: Case-control study. MATERIALS AND METHODS: To estimate dysphonia level, perceptual and acoustic analyses were used for patients with ADSD (N = 20) and a control group (N = 20), all Mandarin-Chinese speakers. For both subgroups, a sustained vowel and connected speech samples were obtained...
February 12, 2018: Journal of Voice: Official Journal of the Voice Foundation
https://www.readbyqxmd.com/read/29449060/the-aprosody-of-schizophrenia-computationally-derived-acoustic-phonetic-underpinnings-of-monotone-speech
#11
Michael T Compton, Anya Lunden, Sean D Cleary, Luca Pauselli, Yazeed Alolayan, Brooke Halpern, Beth Broussard, Anthony Crisafio, Leslie Capulong, Pierfrancesco Maria Balducci, Francesco Bernardini, Michael A Covington
OBJECTIVE: Acoustic phonetic methods are useful in examining some symptoms of schizophrenia; we used such methods to understand the underpinnings of aprosody. We hypothesized that, compared to controls and patients without clinically rated aprosody, patients with aprosody would exhibit reduced variability in: pitch (F0), jaw/mouth opening and tongue height (formant F1), tongue front/back position and/or lip rounding (formant F2), and intensity/loudness. METHODS: Audiorecorded speech was obtained from 98 patients (including 25 with clinically rated aprosody and 29 without) and 102 unaffected controls using five tasks: one describing a drawing, two based on spontaneous speech elicited through a question (Tasks 2 and 3), and two based on reading prose excerpts (Tasks 4 and 5)...
February 12, 2018: Schizophrenia Research
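The reduced variability hypothesized in the aprosody study above is typically quantified as dispersion of a contour (F0, F1, F2, or intensity) over voiced frames. A small illustrative sketch, assuming a pitch tracker that marks unvoiced frames with 0; the function and example contours are hypothetical, not the study's pipeline:

```python
import numpy as np

def f0_variability(f0_contour):
    """Standard deviation of F0 over voiced frames only.
    Unvoiced frames are marked with 0 (a common pitch-tracker convention)."""
    voiced = f0_contour[f0_contour > 0]
    if voiced.size == 0:
        return 0.0
    return float(np.std(voiced))

# A monotone vs. a modulated contour (Hz); zeros mark unvoiced frames.
flat = np.array([120.0, 121.0, 0.0, 120.0, 119.0, 0.0, 120.0])
varied = np.array([100.0, 140.0, 0.0, 180.0, 90.0, 0.0, 160.0])

print(f0_variability(flat) < f0_variability(varied))  # True
```

The same dispersion measure applies unchanged to formant or intensity contours.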
https://www.readbyqxmd.com/read/29446191/speech-understanding-in-noise-in-elderly-adults-the-effect-of-inhibitory-control-and-syntactic-complexity
#12
Eline C van Knijff, Martine Coene, Paul J Govaerts
BACKGROUND: Previous research has suggested that speech perception in elderly adults is influenced not only by age-related hearing loss or presbycusis but also by declines in cognitive abilities, by background noise and by the syntactic complexity of the message. AIMS: To gain further insight into the influence of these cognitive as well as acoustic and linguistic factors on speech perception in elderly adults by investigating inhibitory control as a listener characteristic and background noise type and syntactic complexity as input characteristics...
February 15, 2018: International Journal of Language & Communication Disorders
https://www.readbyqxmd.com/read/29442165/a-comparison-of-dysphonia-severity-index-and-acoustic-voice-quality-index-measures-in-differentiating-normal-and-dysphonic-voices
#13
Virgilijus Uloza, Ben Barsties V Latoszek, Nora Ulozaite-Staniene, Tadas Petrauskas, Youri Maryn
PURPOSE: The aim of the study was to investigate and compare the feasibility and robustness of the Acoustic Voice Quality Index (AVQI) and the Dysphonia Severity Index (DSI) in diagnostic accuracy, differentiating normal and dysphonic voices. METHODS: A group of 264 subjects with normal voices (n = 105) and with various voice disorders (n = 159) were asked to read aloud a text and to sustain the vowel /a/. Both speech tasks were concatenated, and perceptually rated for dysphonia severity by five voice clinicians...
February 13, 2018: European Archives of Oto-rhino-laryngology
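Composite indices such as the DSI and AVQI combine several low-level acoustic measures; one common ingredient is a harmonics-to-noise ratio. As a hedged illustration only, here is a rough autocorrelation-based HNR estimate in the spirit of Boersma's method; it is not the actual AVQI or DSI computation, and the frame parameters are arbitrary:

```python
import numpy as np

def hnr_db(frame):
    """Rough harmonics-to-noise ratio (dB) for one voiced frame,
    from the peak of the normalized autocorrelation at a nonzero lag."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac /= ac[0]                    # normalize so lag-0 value is 1
    lag_min = 20                   # skip very small lags (formant ripple)
    peak = min(ac[lag_min:].max(), 0.999999)  # guard a perfectly periodic frame
    # Peak ~ harmonic share of the energy; the rest is treated as noise.
    return 10 * np.log10(peak / (1 - peak))

rng = np.random.default_rng(1)
t = np.arange(4000)
clean = np.sin(2 * np.pi * t / 100)          # periodic, voice-like frame
noisy = clean + rng.standard_normal(4000)    # same frame plus noise
```

On these signals the clean frame scores well above the noisy one, as expected for a periodicity-based measure.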
https://www.readbyqxmd.com/read/29441835/current-profile-of-adults-presenting-for-preoperative-cochlear-implant-evaluation
#14
Jourdan T Holder, Susan M Reynolds, Linsey W Sunderhaus, René H Gifford
Considerable advancements in cochlear implant technology (e.g., electric acoustic stimulation) and assessment materials have yielded expanded criteria. Despite this, it is unclear whether individuals with better audiometric thresholds and speech understanding are being referred for cochlear implant workup and pursuing cochlear implantation. The purpose of this study was to characterize the mean auditory and demographic profile of adults presenting for preoperative cochlear implant workup. Data were collected prospectively for all adult preoperative workups at Vanderbilt from 2013 to 2015...
January 2018: Trends in Hearing
https://www.readbyqxmd.com/read/29432110/the-cochlear-implant-eeg-artifact-recorded-from-an-artificial-brain-for-complex-acoustic-stimuli
#15
Luise Wagner, Natasha Maurits, Bert Maat, Deniz Baskent, Anita E Wagner
Electroencephalographic (EEG) recordings provide objective estimates of listeners' cortical processing of sounds and of the status of their speech perception system. For profoundly deaf listeners with cochlear implants (CIs), the applications of EEG are limited because the device adds electric artifacts to the recordings. This restricts the possibilities for neural-based metrics of speech processing by CI users, for instance to gauge cortical reorganization due to an individual's hearing loss history. This paper describes the characteristics of the CI artifact as recorded with an artificial head substitute, and reports how the artifact is affected by the properties of the acoustical input signal versus the settings of the device...
February 2018: IEEE Transactions on Neural Systems and Rehabilitation Engineering
https://www.readbyqxmd.com/read/29430213/naplib-an-open-source-toolbox-for-real-time-and-offline-neural-acoustic-processing
#16
Bahar Khalighinejad, Tasha Nagamine, Ashesh Mehta, Nima Mesgarani
In this paper, we introduce the Neural Acoustic Processing Library (NAPLib), a toolbox containing novel processing methods for real-time and offline analysis of neural activity in response to speech. Our method divides the speech signal and resultant neural activity into segmental units (e.g., phonemes), allowing for fast and efficient computations that can be implemented in real-time. NAPLib contains a suite of tools that characterize various properties of the neural representation of speech, which can be used for functionality such as characterizing electrode tuning properties, brain mapping and brain computer interfaces...
March 2017: Proceedings of the ... IEEE International Conference on Acoustics, Speech, and Signal Processing
https://www.readbyqxmd.com/read/29430212/deep-attractor-network-for-single-microphone-speaker-separation
#17
Zhuo Chen, Yi Luo, Nima Mesgarani
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source...
March 2017: Proceedings of the ... IEEE International Conference on Acoustics, Speech, and Signal Processing
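The attractor idea in the abstract above can be illustrated in a few lines of numpy: form one attractor per source as a weighted centroid in the embedding space, then softly assign each time-frequency bin to the nearest attractor. This toy sketch uses oracle masks and tiny hand-made arrays; it is not the paper's trained network:

```python
import numpy as np

def attractor_separation(embeddings, ideal_masks):
    """Toy deep-attractor step.

    embeddings:  (n_bins, emb_dim) -- one embedding per time-frequency bin
    ideal_masks: (n_sources, n_bins) -- oracle source assignments
    Returns soft masks of shape (n_sources, n_bins).
    """
    # Attractors: mask-weighted centroids of the embeddings, one per source.
    weights = ideal_masks / (ideal_masks.sum(axis=1, keepdims=True) + 1e-8)
    attractors = weights @ embeddings              # (n_sources, emb_dim)
    # Similarity of every bin to every attractor, then softmax over sources.
    scores = attractors @ embeddings.T             # (n_sources, n_bins)
    exp = np.exp(scores - scores.max(axis=0, keepdims=True))
    return exp / exp.sum(axis=0, keepdims=True)

# Four T-F bins in a 2-D embedding space, forming two clear clusters.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
masks = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
soft = attractor_separation(emb, masks)
print(np.argmax(soft, axis=0))  # [0 0 1 1]
```

In the paper the masks come from training targets (or estimated anchors at test time) and the embeddings from a learned network; the centroid-then-assign step is the part sketched here.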
https://www.readbyqxmd.com/read/29417453/non-selective-lexical-access-in-late-arabic-english-bilinguals-evidence-from-gating
#18
Sami Boudelaa
Previous research suggests that late bilinguals who speak typologically distant languages are the least likely to show evidence of non-selective lexical access processes. This study puts this claim to test by using the gating task to determine whether words beginning with speech sounds that are phonetically similar in Arabic and English (e.g., [b,d,m,n]) give rise to selective or non-selective lexical access processes in late Arabic-English bilinguals. The results show that an acoustic-phonetic input (e.g., [bæ]) that is consistent with words in Arabic (e...
February 7, 2018: Journal of Psycholinguistic Research
https://www.readbyqxmd.com/read/29417449/varying-acoustic-phonemic-ambiguity-reveals-that-talker-normalization-is-obligatory-in-speech-processing
#19
Ja Young Choi, Elly R Hu, Tyler K Perrachione
The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping...
February 7, 2018: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/29402437/auditory-prediction-during-speaking-and-listening
#20
Marc Sato, Douglas M Shiller
In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback)...
February 2, 2018: Brain and Language