Read by QxMD

"Speech acoustics"

Ja Young Choi, Elly R Hu, Tyler K Perrachione
The nondeterministic relationship between speech acoustics and abstract phonemic representations poses a challenge for listeners, who must maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping...
February 7, 2018: Attention, Perception & Psychophysics
Albert Rilliard, Christophe d'Alessandro, Marc Evrard
Acoustic variation in expressive speech at the syllable level is studied. As emotions or attitudes can be conveyed by short spoken words, analysis of paradigmatic variations in vowels is an important issue to characterize the expressive content of such speech segments. The corpus contains 160 sentences produced under seven expressive conditions (Neutral, Anger, Fear, Surprise, Sensuality, Joy, Sadness) acted by a French female speaker (a total of 1120 sentences, 13 140 vowels). Eleven base acoustic parameters are selected for voice source and vocal tract related feature analysis...
January 2018: Journal of the Acoustical Society of America
Vincent Martel-Sauvageau, Kris Tjaden
PURPOSE: Deep Brain Stimulation of the subthalamic nucleus (STN-DBS) effectively treats cardinal symptoms of idiopathic Parkinson's disease (PD) that cannot be satisfactorily managed with medication. Research is equivocal regarding speech changes associated with STN-DBS. This study investigated the impact of STN-DBS on vocalic transitions and the relationship to intelligibility. METHODS: Eight Quebec-French speakers with PD and eight healthy controls participated...
October 7, 2017: Journal of Communication Disorders
Md Nasir, Brian Robert Baucom, Panayiotis Georgiou, Shrikanth Narayanan
Automated assessment and prediction of marital outcome in couples therapy is a challenging task but promises to be a potentially useful tool for clinical psychologists. Computational approaches for inferring therapy outcomes using observable behavioral information obtained from conversations between spouses offer objective means for understanding relationship dynamics. In this work, we explore whether the acoustics of the spoken interactions of clinically distressed spouses provide information towards assessment of therapy outcomes...
2017: PloS One
Santiago Barreda
The perception of apparent-talker height is mostly determined by the fundamental frequency (f0) and spectral characteristics of a voice. Although it is traditionally thought that spectral cues affect apparent-talker height by influencing apparent vocal-tract length, a recent experiment [Barreda (2016). J. Phon. 55, 1-18] suggests that apparent-talker height can vary significantly within-talker on the basis of phonemically-determined spectral variability. In this experiment, listeners were asked to estimate the height of 10 female talkers based on manipulated natural productions of bVd words containing one of /i æ ɑ u ɝ/...
June 2017: Journal of the Acoustical Society of America
Hye-Young Bang
In speech articulation, a segment with high coarticulatory resistance in tongue configurations tends to exhibit greater coarticulatory aggressiveness on neighbouring segments. This study examined whether this articulatory relationship can be acoustically captured through locus equations and the magnitude of vowel dispersion. This question was investigated in CV sequences in English where C varies in the degree of articulatory constraints imposed on the tongue dorsum. The results show a tight relationship between locus equation slopes and vowel dispersion, where coarticulatory resistance and aggressiveness appear to be two sides of the same coin in speech acoustics...
April 2017: Journal of the Acoustical Society of America
Avril Treille, Coriandre Vilain, Thomas Hueber, Laurent Lamalle, Marc Sato
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to which extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because an interlocutor's tongue movements are accessible via their impact on speech acoustics but not visible, because of the tongue's position inside the vocal tract, whereas lip movements are both "audible" and visible...
March 2017: Journal of Cognitive Neuroscience
Mishaela DiNino, Richard A Wright, Matthew B Winn, Julie Arenberg Bierer
Suboptimal interfaces between cochlear implant (CI) electrodes and auditory neurons result in a loss or distortion of spectral information in specific frequency regions, which likely decreases CI users' speech identification performance. This study exploited speech acoustics to model regions of distorted CI frequency transmission to determine the perceptual consequences of suboptimal electrode-neuron interfaces. Normal hearing adults identified naturally spoken vowels and consonants after spectral information was manipulated through a noiseband vocoder: either (1) low-, middle-, or high-frequency regions of information were removed by zeroing the corresponding channel outputs, or (2) the same regions were distorted by splitting filter outputs to neighboring filters...
December 2016: Journal of the Acoustical Society of America
Antje S Mefferd
The degree of speech movement pattern consistency can provide information about speech motor control. Although tongue motor control is particularly important because of the tongue's primary contribution to the speech acoustic signal, capturing tongue movements during speech remains difficult and costly. This study sought to determine if formant movements could be used to estimate tongue movement pattern consistency indirectly. Two age groups (seven young adults and seven older adults) and six speech conditions (typical, slow, loud, clear, fast, bite block speech) were selected to elicit an age- and task-dependent performance range in tongue movement pattern consistency...
November 2016: Journal of the Acoustical Society of America
Jinhee Ha, Iel-Yong Sung, Jang-Ho Son, Maureen Stone, Robert Ord, Yeong-Cheol Cho
Objective: Since the tongue is the oral structure responsible for mastication, pronunciation, and swallowing functions, patients who undergo glossectomy can be affected in various aspects of these functions. The vowel /i/ uses the tongue shape, whereas /u/ uses tongue and lip shapes. The purpose of this study is to investigate the morphological changes of the tongue and the adaptation of pronunciation using cine MRI for speech of patients who undergo glossectomy. Material and Methods: Twenty-three controls (11 males and 12 females) and 13 patients (eight males and five females) volunteered to participate in the experiment...
September 2016: Journal of Applied Oral Science: Revista FOB
Bruce R Gerratt, Jody Kreiman, Marc Garellek
Purpose: The question of what type of utterance (a sustained vowel or continuous speech) is best for voice quality analysis has been extensively studied but with equivocal results. This study examines whether previously reported differences derive from the articulatory and prosodic factors occurring in continuous speech versus sustained phonation. Method: Speakers with voice disorders sustained vowels and read sentences. Vowel samples were excerpted from the steadiest portion of each vowel in the sentences...
October 1, 2016: Journal of Speech, Language, and Hearing Research: JSLHR
Lars Meyer, Molly J Henry, Phoebe Gaston, Noura Schmuck, Angela D Friederici
Language comprehension requires that single words be grouped into syntactic phrases, as words in sentences are too many to memorize individually. In speech, acoustic and syntactic grouping patterns mostly align. However, when ambiguous sentences allow for alternative grouping patterns, comprehenders may form phrases that contradict speech prosody. While delta-band oscillations are known to track prosody, we hypothesized that linguistic grouping bias can modulate the interpretational impact of speech prosody in ambiguous situations, which should surface in delta-band oscillations when grouping patterns chosen by comprehenders differ from those indicated by prosody...
September 1, 2017: Cerebral Cortex
Jason A Whitfield, Alexander M Goberman
PURPOSE: The current investigation examined the relationship between perceptual ratings of speech clarity and acoustic measures of speech production. Included among the acoustic measures was the Articulatory-Acoustic Vowel Space (AAVS), which provides a measure of working formant space derived from continuously sampled formant trajectories in connected speech. METHOD: Acoustic measures of articulation and listener ratings of speech clarity were obtained from habitual and clear speech samples produced by 10 neurologically healthy adults...
April 2017: International Journal of Speech-language Pathology
Takayuki Ito, Joshua H Coppola, David J Ostry
In the present paper, we present evidence for the idea that speech motor learning is accompanied by changes to the neural coding of both auditory and somatosensory stimuli. Participants in our experiments undergo adaptation to altered auditory feedback, an experimental model of speech motor learning which like visuo-motor adaptation in limb movement, requires that participants change their speech movements and associated somatosensory inputs to correct for systematic real-time changes to auditory feedback. We measure the sensory effects of adaptation by examining changes to auditory and somatosensory event-related responses...
2016: Scientific Reports
Mark Sayles, Michael K Walls, Michael G Heinz
The compressive nonlinearity of cochlear signal transduction, reflecting outer-hair-cell function, manifests as suppressive spectral interactions; e.g., two-tone suppression. Moreover, for broadband sounds, there are multiple interactions between frequency components. These frequency-dependent nonlinearities are important for neural coding of complex sounds, such as speech. Acoustic-trauma-induced outer-hair-cell damage is associated with loss of nonlinearity, which auditory prostheses attempt to restore with, e...
2016: Advances in Experimental Medicine and Biology
Kristofer E Bouchard, David F Conant, Gopala K Anumanchipalli, Benjamin Dichter, Kris S Chaisanguanthum, Keith Johnson, Edward F Chang
A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data...
2016: PloS One
Eric J Hunter, Pasquale Bottalico, Simone Graetzer, Timothy W Leishman, Mark L Berardi, Nathan G Eyring, Zachary R Jensen, Michael K Rolins, Jennifer K Whiting
School teachers have an elevated risk of voice problems due to the vocal demands in the workplace. This manuscript presents the results of three studies investigating teachers' voice use at work. In the first study, 57 teachers were observed for 2 weeks (waking hours) to compare how they used their voice in the school environment and in non-school environments. In a second study, 45 participants performed a short vocal task in two different rooms: a variable acoustic room and an anechoic chamber. Subjects were taken back and forth between the two rooms...
November 2015: Energy Procedia
Marie Klopfenstein
This study investigated the acoustic basis of across-utterance, within-speaker variation in speech naturalness for four speakers with dysarthria secondary to Parkinson's disease (PD). Speakers read sentences and produced spontaneous speech. Acoustic measures of fundamental frequency, phrase-final syllable lengthening, intensity and speech rate were obtained. A group of listeners judged speech naturalness using a nine-point Likert scale. Relationships between judgements of speech naturalness and acoustic measures were determined for individual speakers with PD...
2015: Clinical Linguistics & Phonetics
Houri K Vorperian, Sara L Kurtzweil, Marios Fourakis, Ray D Kent, Katelyn K Tillman, Diane Austin
The anatomic basis and articulatory features of speech production are often studied with imaging studies that are typically acquired in the supine body position. It is important to determine if changes in body orientation to the gravitational field alter vocal tract dimensions and speech acoustics. The purpose of this study was to assess the effect of body position (upright versus supine) on (1) oral and pharyngeal measurements derived from acoustic pharyngometry and (2) acoustic measurements of fundamental frequency (F0) and the first four formant frequencies (F1-F4) for the quadrilateral point vowels...
August 2015: Journal of the Acoustical Society of America
Peter S Kaplan, Christina M Danko, Anna M Cejka, Kevin D Everhart
The hypothesis that the associative learning-promoting effects of infant-directed speech (IDS) depend on infants' social experience was tested in a conditioned-attention paradigm with a cumulative sample of 4- to 14-month-old infants. Following six forward pairings of a brief IDS segment and a photographic slide of a smiling female face, infants of clinically depressed mothers exhibited evidence of having acquired significantly weaker voice-face associations than infants of non-depressed mothers. Regression analyses revealed that maternal depression was significantly related to infant learning even after demographic correlates of depression, antidepressant medication use, and extent of pitch modulation in maternal IDS had been taken into account...
November 2015: Infant Behavior & Development