Read by QxMD

"Speech acoustics"

John E Marsh, Robert Ljung, Helena Jahncke, Douglas MacCutcheon, Florian Pausch, Linden J Ball, François Vachon
Telephone conversation is ubiquitous within the office setting. Overhearing a telephone conversation, in which only one of the two speakers is heard, is subjectively more annoying and objectively more distracting than overhearing a full conversation. The present study sought to determine whether this "halfalogue" effect is attributable to unexpected offsets and onsets within the background speech (acoustic unexpectedness) or to the tendency to predict the unheard part of the conversation (semantic [un]predictability), and whether these effects can be shielded against through top-down cognitive control...
June 2018: Journal of Experimental Psychology. Applied
Santiago Barreda, Peter F Assmann
Adult listeners were presented with /hVd/ syllables spoken by boys and girls ranging from 5 to 18 years of age. Half of the listeners were informed of the sex of the speaker; the other half were not. Results indicate that veridical age in children can be predicted accurately based on the acoustic characteristics of the talker's voice and that listener behavior is highly predictable on the basis of speech acoustics. Furthermore, listeners appear to incorporate assumptions about talker sex into their estimates of talker age, even when information about the talker's sex is not explicitly provided for them...
May 2018: Journal of the Acoustical Society of America
Kostas Konstantopoulos, Eleni Zamba-Papanicolaou, Kyproula Christodoulou
BACKGROUND: Dysarthrophonia is often reported by hereditary spastic paraplegia (HSP) patients with SPG11 mutations but has been poorly investigated. OBJECTIVE: The goal of this study was to investigate dysarthrophonia in SPG11 patients using quantitative measures. The voice/speech of two patients and a non-affected mutation carrier was recorded and analyzed using electroglottography (EGG) and speech acoustics. RESULTS: Dysarthrophonia was characterized by a higher standard deviation of the average fundamental frequency, three to eight times higher jitter, an 80-110 Hz higher mean fundamental frequency, and a two times higher fundamental frequency range...
May 26, 2018: Neurological Sciences
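The jitter and fundamental-frequency measures named in this abstract have standard acoustic definitions. As a rough illustration only (not the EGG analysis pipeline used in the study), local jitter and f0 statistics can be computed from a sequence of glottal period durations:

```python
import numpy as np

def local_jitter_percent(periods):
    # Mean absolute difference between consecutive glottal periods,
    # relative to the mean period (one standard definition of jitter).
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.abs(np.diff(periods)).mean() / periods.mean()

def f0_stats(periods):
    # Mean f0 (Hz) and its standard deviation from period durations (s).
    f0 = 1.0 / np.asarray(periods, dtype=float)
    return f0.mean(), f0.std()

# Hypothetical period sequence (seconds) for a voice near 200 Hz
periods = [0.0050, 0.0051, 0.0049, 0.0052, 0.0050]
jitter = local_jitter_percent(periods)  # healthy voices typically show ~1% or less
mean_f0, sd_f0 = f0_stats(periods)
```

The period sequence and the functions here are illustrative; clinical packages (e.g., Praat's voice report) offer several jitter variants that differ in detail.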
Josh Chartier, Gopala K Anumanchipalli, Keith Johnson, Edward F Chang
When speaking, we dynamically coordinate movements of our jaw, tongue, lips, and larynx. To investigate the neural mechanisms underlying articulation, we used direct cortical recordings from human sensorimotor cortex while participants spoke natural sentences that included sounds spanning the entire English phonetic inventory. We used deep neural networks to infer speakers' articulator movements from produced speech acoustics. Individual electrodes encoded a diversity of articulatory kinematic trajectories (AKTs), each revealing coordinated articulator movements toward specific vocal tract shapes...
May 8, 2018: Neuron
Ja Young Choi, Elly R Hu, Tyler K Perrachione
The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping...
April 2018: Attention, Perception & Psychophysics
Albert Rilliard, Christophe d'Alessandro, Marc Evrard
Acoustic variation in expressive speech is studied at the syllable level. Because emotions and attitudes can be conveyed by short spoken words, analysis of paradigmatic variation in vowels is important for characterizing the expressive content of such speech segments. The corpus contains 160 sentences produced under seven expressive conditions (Neutral, Anger, Fear, Surprise, Sensuality, Joy, Sadness) acted by a French female speaker (a total of 1,120 sentences and 13,140 vowels). Eleven base acoustic parameters are selected for analysis of voice-source and vocal-tract-related features...
January 2018: Journal of the Acoustical Society of America
Vincent Martel-Sauvageau, Kris Tjaden
PURPOSE: Deep Brain Stimulation of the subthalamic nucleus (STN-DBS) effectively treats cardinal symptoms of idiopathic Parkinson's disease (PD) that cannot be satisfactorily managed with medication. Research is equivocal regarding speech changes associated with STN-DBS. This study investigated the impact of STN-DBS on vocalic transitions and the relationship to intelligibility. METHODS: Eight Quebec-French speakers with PD and eight healthy controls participated...
October 7, 2017: Journal of Communication Disorders
Md Nasir, Brian Robert Baucom, Panayiotis Georgiou, Shrikanth Narayanan
Automated assessment and prediction of marital outcome in couples therapy is a challenging task but promises to be a potentially useful tool for clinical psychologists. Computational approaches for inferring therapy outcomes using observable behavioral information obtained from conversations between spouses offer objective means for understanding relationship dynamics. In this work, we explore whether the acoustics of the spoken interactions of clinically distressed spouses provide information towards assessment of therapy outcomes...
2017: PloS One
Santiago Barreda
The perception of apparent-talker height is mostly determined by the fundamental frequency (f0) and spectral characteristics of a voice. Although it is traditionally thought that spectral cues affect apparent-talker height by influencing apparent vocal-tract length, a recent experiment [Barreda (2016). J. Phon. 55, 1-18] suggests that apparent-talker height can vary significantly within-talker on the basis of phonemically-determined spectral variability. In this experiment, listeners were asked to estimate the height of 10 female talkers based on manipulated natural productions of bVd words containing one of /i æ ɑ u ɝ/...
June 2017: Journal of the Acoustical Society of America
Hye-Young Bang
In speech articulation, a segment with high coarticulatory resistance in tongue configurations tends to exhibit greater coarticulatory aggressiveness on neighbouring segments. This study examined whether this articulatory relationship can be acoustically captured through locus equations and the magnitude of vowel dispersion. This question was investigated in CV sequences in English where C varies in the degree of articulatory constraints imposed on the tongue dorsum. The results show a tight relationship between locus equation slopes and vowel dispersion, where coarticulatory resistance and aggressiveness appear to be two sides of the same coin in speech acoustics...
April 2017: Journal of the Acoustical Society of America
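Locus equations, as used in the abstract above, are linear regressions of F2 at the CV boundary on F2 at the vowel midpoint, fitted across tokens of a single consonant; the slope indexes that consonant's coarticulatory resistance. A minimal sketch with made-up formant values:

```python
import numpy as np

def locus_equation(f2_midpoints, f2_onsets):
    """Fit F2_onset = k * F2_mid + c across CV tokens sharing a consonant.
    The slope k indexes coarticulatory resistance: k near 1 means the vowel
    strongly shapes the CV boundary (low resistance); k near 0 means the
    F2 onset stays pinned near a consonant 'locus' (high resistance)."""
    k, c = np.polyfit(np.asarray(f2_midpoints, dtype=float),
                      np.asarray(f2_onsets, dtype=float), 1)
    return k, c

# Hypothetical F2 values (Hz) for five CV tokens of one consonant
f2_mid = [2300, 1800, 1200, 900, 1600]
f2_onset = [1900, 1550, 1150, 950, 1400]
slope, intercept = locus_equation(f2_mid, f2_onset)
```

All formant values are invented for illustration; real locus-equation studies fit one regression per consonant across many vowel contexts.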
Avril Treille, Coriandre Vilain, Thomas Hueber, Laurent Lamalle, Marc Sato
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse-sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip actions were selected because an interlocutor's tongue movements are accessible via their impact on speech acoustics but are not visible, given the tongue's position inside the vocal tract, whereas lip movements are both "audible" and visible...
March 2017: Journal of Cognitive Neuroscience
Mishaela DiNino, Richard A Wright, Matthew B Winn, Julie Arenberg Bierer
Suboptimal interfaces between cochlear implant (CI) electrodes and auditory neurons result in a loss or distortion of spectral information in specific frequency regions, which likely decreases CI users' speech identification performance. This study exploited speech acoustics to model regions of distorted CI frequency transmission to determine the perceptual consequences of suboptimal electrode-neuron interfaces. Normal hearing adults identified naturally spoken vowels and consonants after spectral information was manipulated through a noiseband vocoder: either (1) low-, middle-, or high-frequency regions of information were removed by zeroing the corresponding channel outputs, or (2) the same regions were distorted by splitting filter outputs to neighboring filters...
December 2016: Journal of the Acoustical Society of America
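The first manipulation in the abstract above removes spectral information in a chosen frequency region by zeroing channel outputs. As a simplified stand-in for that idea (the study's actual processing used a noiseband vocoder, not FFT masking), one can zero FFT bins in the target region:

```python
import numpy as np

def zero_band(signal, fs, lo_hz, hi_hz):
    """Remove spectral information between lo_hz and hi_hz by zeroing FFT
    bins. A crude stand-in for zeroing the outputs of the corresponding
    vocoder channels."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs >= lo_hz) & (freqs <= hi_hz)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 16000
t = np.arange(0, 0.1, 1.0 / fs)
# Toy "speech" made of a low (500 Hz) and a mid (2000 Hz) component
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 2000 * t)
y = zero_band(x, fs, 1500, 2500)  # the mid-frequency component is removed
```

The second manipulation (splitting filter outputs to neighboring filters) would instead reassign, rather than discard, the energy in that band.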
Antje S Mefferd
The degree of speech movement pattern consistency can provide information about speech motor control. Although tongue motor control is particularly important because of the tongue's primary contribution to the speech acoustic signal, capturing tongue movements during speech remains difficult and costly. This study sought to determine if formant movements could be used to estimate tongue movement pattern consistency indirectly. Two age groups (seven young adults and seven older adults) and six speech conditions (typical, slow, loud, clear, fast, bite block speech) were selected to elicit an age- and task-dependent performance range in tongue movement pattern consistency...
November 2016: Journal of the Acoustical Society of America
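Movement pattern consistency across repeated productions is often quantified by time- and amplitude-normalizing each trajectory and summing the standard deviations across repetitions, in the spirit of a spatiotemporal index. A hedged sketch with synthetic formant trajectories, not the study's exact procedure:

```python
import numpy as np

def pattern_consistency(trajectories, n_points=50):
    """Lower values = more consistent movement patterns. Each trajectory is
    time-normalized (linear interpolation to n_points) and amplitude
    z-scored, then the standard deviations across repetitions are summed."""
    norm = []
    for tr in trajectories:
        tr = np.asarray(tr, dtype=float)
        tr = np.interp(np.linspace(0, 1, n_points),
                       np.linspace(0, 1, len(tr)), tr)
        norm.append((tr - tr.mean()) / tr.std())
    return np.vstack(norm).std(axis=0).sum()

# Synthetic repeated F2 trajectories (Hz): same underlying shape,
# small vs. large trial-to-trial variability
base = np.sin(np.linspace(0, np.pi, 40)) * 400 + 1500
reps_tight = [base + np.random.default_rng(i).normal(0, 5, 40) for i in range(5)]
reps_loose = [base + np.random.default_rng(i).normal(0, 80, 40) for i in range(5)]
```

With these synthetic data, the tight repetitions yield a smaller (more consistent) index than the loose ones, mirroring the performance range the study elicited across speech conditions.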
Jinhee Ha, Iel-Yong Sung, Jang-Ho Son, Maureen Stone, Robert Ord, Yeong-Cheol Cho
Objective: Because the tongue is the oral structure responsible for mastication, pronunciation, and swallowing, patients who undergo glossectomy can be affected in all of these functions. The vowel /i/ is shaped by the tongue alone, whereas /u/ involves both tongue and lip shapes. The purpose of this study was to use cine MRI to investigate morphological changes of the tongue and the adaptation of pronunciation in the speech of patients who have undergone glossectomy. Material and Methods: Twenty-three controls (11 males and 12 females) and 13 patients (eight males and five females) volunteered to participate in the experiment...
September 2016: Journal of Applied Oral Science: Revista FOB
Bruce R Gerratt, Jody Kreiman, Marc Garellek
Purpose: The question of which type of utterance, a sustained vowel or continuous speech, is best for voice quality analysis has been extensively studied, but with equivocal results. This study examines whether previously reported differences derive from the articulatory and prosodic factors occurring in continuous speech versus sustained phonation. Method: Speakers with voice disorders sustained vowels and read sentences. Vowel samples were excerpted from the steadiest portion of each vowel in the sentences...
October 1, 2016: Journal of Speech, Language, and Hearing Research: JSLHR
Lars Meyer, Molly J Henry, Phoebe Gaston, Noura Schmuck, Angela D Friederici
Language comprehension requires that single words be grouped into syntactic phrases, as words in sentences are too many to memorize individually. In speech, acoustic and syntactic grouping patterns mostly align. However, when ambiguous sentences allow for alternative grouping patterns, comprehenders may form phrases that contradict speech prosody. While delta-band oscillations are known to track prosody, we hypothesized that linguistic grouping bias can modulate the interpretational impact of speech prosody in ambiguous situations, which should surface in delta-band oscillations when grouping patterns chosen by comprehenders differ from those indicated by prosody...
September 1, 2017: Cerebral Cortex
Jason A Whitfield, Alexander M Goberman
PURPOSE: The current investigation examined the relationship between perceptual ratings of speech clarity and acoustic measures of speech production. Included among the acoustic measures was the Articulatory-Acoustic Vowel Space (AAVS), which provides a measure of working formant space derived from continuously sampled formant trajectories in connected speech. METHOD: Acoustic measures of articulation and listener ratings of speech clarity were obtained from habitual and clear speech samples produced by 10 neurologically healthy adults...
April 2017: International Journal of Speech-language Pathology
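The AAVS described above summarizes the formant space actually traversed in connected speech. As one rough proxy (not the published AAVS formula), the region covered by continuously sampled F1/F2 points can be measured as the area of their convex hull:

```python
import numpy as np

def convex_hull_area(points):
    """Convex hull area of 2-D points via Andrew's monotone chain and the
    shoelace formula. A rough proxy for 'working formant space'; the
    published AAVS metric differs in its details."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):  # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    x = np.array([p[0] for p in hull])
    y = np.array([p[1] for p in hull])
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Hypothetical continuously sampled (F1, F2) points in Hz
trajectory = [(300, 2300), (700, 1800), (750, 1100), (350, 900), (500, 1500)]
area_hz2 = convex_hull_area(trajectory)  # area of the traversed formant region
```

Clear speech, which expands articulatory working space, would be expected to enlarge this area relative to habitual speech.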
Takayuki Ito, Joshua H Coppola, David J Ostry
In the present paper, we present evidence for the idea that speech motor learning is accompanied by changes to the neural coding of both auditory and somatosensory stimuli. Participants in our experiments undergo adaptation to altered auditory feedback, an experimental model of speech motor learning which like visuo-motor adaptation in limb movement, requires that participants change their speech movements and associated somatosensory inputs to correct for systematic real-time changes to auditory feedback. We measure the sensory effects of adaptation by examining changes to auditory and somatosensory event-related responses...
May 16, 2016: Scientific Reports
Mark Sayles, Michael K Walls, Michael G Heinz
The compressive nonlinearity of cochlear signal transduction, reflecting outer-hair-cell function, manifests as suppressive spectral interactions; e.g., two-tone suppression. Moreover, for broadband sounds, there are multiple interactions between frequency components. These frequency-dependent nonlinearities are important for neural coding of complex sounds, such as speech. Acoustic-trauma-induced outer-hair-cell damage is associated with loss of nonlinearity, which auditory prostheses attempt to restore with, e...
2016: Advances in Experimental Medicine and Biology
Kristofer E Bouchard, David F Conant, Gopala K Anumanchipalli, Benjamin Dichter, Kris S Chaisanguanthum, Keith Johnson, Edward F Chang
A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, so simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data...
2016: PloS One