Read by QxMD

"Speech acoustics"

Santiago Barreda
The perception of apparent-talker height is mostly determined by the fundamental frequency (f0) and spectral characteristics of a voice. Although it is traditionally thought that spectral cues affect apparent-talker height by influencing apparent vocal-tract length, a recent experiment [Barreda (2016). J. Phon. 55, 1-18] suggests that apparent-talker height can vary significantly within talker on the basis of phonemically determined spectral variability. In this experiment, listeners were asked to estimate the height of 10 female talkers based on manipulated natural productions of bVd words containing one of /i æ ɑ u ɝ/...
June 2017: Journal of the Acoustical Society of America
Hye-Young Bang
In speech articulation, a segment with high coarticulatory resistance in tongue configurations tends to exhibit greater coarticulatory aggressiveness on neighbouring segments. This study examined whether this articulatory relationship can be acoustically captured through locus equations and the magnitude of vowel dispersion. This question was investigated in CV sequences in English where C varies in the degree of articulatory constraints imposed on the tongue dorsum. The results show a tight relationship between locus equation slopes and vowel dispersion, where coarticulatory resistance and aggressiveness appear to be two sides of the same coin in speech acoustics...
April 2017: Journal of the Acoustical Society of America
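The locus-equation analysis described in the abstract above can be sketched as an ordinary least-squares fit of F2 at vowel onset against F2 at vowel midpoint, pooled over vowel contexts: a slope near 0 suggests high coarticulatory resistance (a fixed consonantal locus), a slope near 1 low resistance. A minimal sketch with synthetic formant values (all numbers and names below are illustrative, not the study's data):

```python
import numpy as np

def locus_equation(f2_onset, f2_mid):
    """Fit F2_onset = slope * F2_mid + intercept by least squares.

    A slope near 1 indicates strong CV coarticulation (low resistance);
    a slope near 0 indicates a fixed consonantal locus (high resistance).
    """
    slope, intercept = np.polyfit(f2_mid, f2_onset, 1)
    return slope, intercept

# Synthetic illustration: a consonant whose onset F2 tracks the vowel closely.
f2_mid = np.array([900.0, 1200.0, 1800.0, 2200.0])   # vowel-midpoint F2 (Hz)
f2_onset = 0.8 * f2_mid + 300.0                      # onset F2 follows the vowel
slope, intercept = locus_equation(f2_onset, f2_mid)
print(round(slope, 2))  # 0.8 by construction
```

In practice one such regression is fit per consonant, and the slopes are then compared across consonants differing in articulatory constraint.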
Avril Treille, Coriandre Vilain, Thomas Hueber, Laurent Lamalle, Marc Sato
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse-sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip actions were selected because an interlocutor's tongue movements are accessible via their impact on speech acoustics but are not visible, owing to the tongue's position inside the vocal tract, whereas lip movements are both "audible" and visible...
March 2017: Journal of Cognitive Neuroscience
Mishaela DiNino, Richard A Wright, Matthew B Winn, Julie Arenberg Bierer
Suboptimal interfaces between cochlear implant (CI) electrodes and auditory neurons result in a loss or distortion of spectral information in specific frequency regions, which likely decreases CI users' speech identification performance. This study exploited speech acoustics to model regions of distorted CI frequency transmission to determine the perceptual consequences of suboptimal electrode-neuron interfaces. Normal hearing adults identified naturally spoken vowels and consonants after spectral information was manipulated through a noiseband vocoder: either (1) low-, middle-, or high-frequency regions of information were removed by zeroing the corresponding channel outputs, or (2) the same regions were distorted by splitting filter outputs to neighboring filters...
December 2016: Journal of the Acoustical Society of America
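Removing a frequency region, as in condition (1) above, can be illustrated very crudely by zeroing FFT bins rather than actual vocoder channel outputs; the sketch below only demonstrates the concept of deleting a spectral band (all parameters are made up):

```python
import numpy as np

def zero_frequency_region(signal, sr, lo_hz, hi_hz):
    """Remove one spectral region by zeroing its FFT bins -- a crude
    stand-in for zeroing the corresponding vocoder-channel outputs."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    spec[(freqs >= lo_hz) & (freqs < hi_hz)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

sr = 8000
t = np.arange(0, 0.2, 1.0 / sr)
mix = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 1500 * t)
out = zero_frequency_region(mix, sr, 1000.0, 2000.0)  # 1500 Hz component removed
```

A real noiseband vocoder would instead filter the signal into channels, extract each channel's envelope, and use it to modulate band-limited noise; zeroing or re-routing then happens at the channel stage.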
Antje S Mefferd
The degree of speech movement pattern consistency can provide information about speech motor control. Although tongue motor control is particularly important because of the tongue's primary contribution to the speech acoustic signal, capturing tongue movements during speech remains difficult and costly. This study sought to determine if formant movements could be used to estimate tongue movement pattern consistency indirectly. Two age groups (seven young adults and seven older adults) and six speech conditions (typical, slow, loud, clear, fast, bite block speech) were selected to elicit an age- and task-dependent performance range in tongue movement pattern consistency...
November 2016: Journal of the Acoustical Society of America
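Movement (or formant) pattern consistency of the kind discussed above is often quantified with a spatiotemporal-index-style measure: amplitude- and time-normalize each repetition, then sum the across-repetition standard deviations. The sketch below is a generic illustration of that idea, not the study's actual procedure:

```python
import numpy as np

def pattern_consistency(trajectories, n_points=50):
    """Spatiotemporal-index-style score: amplitude-normalize each
    trajectory, time-normalize it to n_points, then sum the
    across-repetition standard deviations (lower = more consistent)."""
    resampled = []
    for traj in trajectories:
        traj = np.asarray(traj, float)
        traj = (traj - traj.mean()) / traj.std()         # amplitude normalization
        x_old = np.linspace(0.0, 1.0, len(traj))
        x_new = np.linspace(0.0, 1.0, n_points)
        resampled.append(np.interp(x_new, x_old, traj))  # time normalization
    return float(np.sum(np.std(resampled, axis=0)))

# Identical repetitions are perfectly consistent (score 0); jittered ones are not.
base = np.sin(np.linspace(0.0, np.pi, 50))
rng = np.random.default_rng(1)
score_same = pattern_consistency([base, base, base])
score_jitter = pattern_consistency([base + 0.1 * rng.normal(size=50) for _ in range(3)])
```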
Jinhee Ha, Iel-Yong Sung, Jang-Ho Son, Maureen Stone, Robert Ord, Yeong-Cheol Cho
Objective: Because the tongue is the oral structure responsible for mastication, pronunciation, and swallowing, patients who undergo glossectomy can be affected in all of these functions. The vowel /i/ is shaped primarily by the tongue, whereas /u/ involves both tongue and lip shapes. The purpose of this study was to investigate morphological changes of the tongue and the adaptation of pronunciation, using cine MRI of speech in patients who underwent glossectomy. Material and Methods: Twenty-three controls (11 males and 12 females) and 13 patients (eight males and five females) volunteered to participate in the experiment...
September 2016: Journal of Applied Oral Science: Revista FOB
Bruce R Gerratt, Jody Kreiman, Marc Garellek
Purpose: The question of what type of utterance (a sustained vowel or continuous speech) is best for voice quality analysis has been extensively studied but with equivocal results. This study examines whether previously reported differences derive from the articulatory and prosodic factors occurring in continuous speech versus sustained phonation. Method: Speakers with voice disorders sustained vowels and read sentences. Vowel samples were excerpted from the steadiest portion of each vowel in the sentences...
October 1, 2016: Journal of Speech, Language, and Hearing Research: JSLHR
Lars Meyer, Molly J Henry, Phoebe Gaston, Noura Schmuck, Angela D Friederici
Language comprehension requires that single words be grouped into syntactic phrases, because sentences contain too many words to memorize individually. In speech, acoustic and syntactic grouping patterns mostly align. However, when ambiguous sentences allow for alternative grouping patterns, comprehenders may form phrases that contradict speech prosody. While delta-band oscillations are known to track prosody, we hypothesized that a linguistic grouping bias can modulate the interpretational impact of speech prosody in ambiguous situations, which should surface in delta-band oscillations when the grouping patterns chosen by comprehenders differ from those indicated by prosody...
August 27, 2016: Cerebral Cortex
Jason A Whitfield, Alexander M Goberman
PURPOSE: The current investigation examined the relationship between perceptual ratings of speech clarity and acoustic measures of speech production. Included among the acoustic measures was the Articulatory-Acoustic Vowel Space (AAVS), which provides a measure of working formant space derived from continuously sampled formant trajectories in connected speech. METHOD: Acoustic measures of articulation and listener ratings of speech clarity were obtained from habitual and clear speech samples produced by 10 neurologically healthy adults...
June 21, 2016: International Journal of Speech-language Pathology
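The AAVS itself is derived from continuously sampled formant trajectories, but the general idea of a formant working space can be illustrated with the conventional corner-vowel quadrilateral area, computed by the shoelace formula over (F2, F1) means. All values below are made up for the example:

```python
import numpy as np

def vowel_space_area(f1, f2):
    """Shoelace area of the polygon traced by (F2, F1) corner-vowel means.

    This is the conventional quadrilateral vowel space area, shown only to
    illustrate formant-based working-space metrics; the AAVS in the study
    is computed from continuous formant trajectories instead.
    """
    x, y = np.asarray(f2, float), np.asarray(f1, float)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Illustrative corner-vowel means (Hz), ordered around the quadrilateral:
# /i/, /ae/, /a/, /u/  (values are hypothetical)
f1 = [300.0, 700.0, 750.0, 350.0]
f2 = [2300.0, 1800.0, 1100.0, 900.0]
area = vowel_space_area(f1, f2)  # in Hz^2
```

Larger areas are generally taken to reflect more expanded, clearer articulation, which is why such metrics are compared against listener ratings of speech clarity.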
Takayuki Ito, Joshua H Coppola, David J Ostry
In the present paper, we present evidence for the idea that speech motor learning is accompanied by changes to the neural coding of both auditory and somatosensory stimuli. Participants in our experiments undergo adaptation to altered auditory feedback, an experimental model of speech motor learning which, like visuo-motor adaptation in limb movement, requires that participants change their speech movements and associated somatosensory inputs to correct for systematic real-time changes to auditory feedback. We measure the sensory effects of adaptation by examining changes to auditory and somatosensory event-related responses...
2016: Scientific Reports
Mark Sayles, Michael K Walls, Michael G Heinz
The compressive nonlinearity of cochlear signal transduction, reflecting outer-hair-cell function, manifests as suppressive spectral interactions; e.g., two-tone suppression. Moreover, for broadband sounds, there are multiple interactions between frequency components. These frequency-dependent nonlinearities are important for neural coding of complex sounds, such as speech. Acoustic-trauma-induced outer-hair-cell damage is associated with loss of nonlinearity, which auditory prostheses attempt to restore with, e...
2016: Advances in Experimental Medicine and Biology
Kristofer E Bouchard, David F Conant, Gopala K Anumanchipalli, Benjamin Dichter, Kris S Chaisanguanthum, Keith Johnson, Edward F Chang
A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data...
2016: PloS One
Eric J Hunter, Pasquale Bottalico, Simone Graetzer, Timothy W Leishman, Mark L Berardi, Nathan G Eyring, Zachary R Jensen, Michael K Rolins, Jennifer K Whiting
School teachers have an elevated risk of voice problems due to the vocal demands in the workplace. This manuscript presents the results of three studies investigating teachers' voice use at work. In the first study, 57 teachers were observed for 2 weeks (waking hours) to compare how they used their voice in the school environment and in non-school environments. In a second study, 45 participants performed a short vocal task in two different rooms: a variable acoustic room and an anechoic chamber. Subjects were taken back and forth between the two rooms...
November 2015: Energy Procedia
Marie Klopfenstein
This study investigated the acoustic basis of across-utterance, within-speaker variation in speech naturalness for four speakers with dysarthria secondary to Parkinson's disease (PD). Speakers read sentences and produced spontaneous speech. Acoustic measures of fundamental frequency, phrase-final syllable lengthening, intensity and speech rate were obtained. A group of listeners judged speech naturalness using a nine-point Likert scale. Relationships between judgements of speech naturalness and acoustic measures were determined for individual speakers with PD...
2015: Clinical Linguistics & Phonetics
Houri K Vorperian, Sara L Kurtzweil, Marios Fourakis, Ray D Kent, Katelyn K Tillman, Diane Austin
The anatomic basis and articulatory features of speech production are often studied with imaging studies that are typically acquired in the supine body position. It is important to determine if changes in body orientation to the gravitational field alter vocal tract dimensions and speech acoustics. The purpose of this study was to assess the effect of body position (upright versus supine) on (1) oral and pharyngeal measurements derived from acoustic pharyngometry and (2) acoustic measurements of fundamental frequency (F0) and the first four formant frequencies (F1-F4) for the quadrilateral point vowels...
August 2015: Journal of the Acoustical Society of America
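Fundamental-frequency measurement of the kind reported above is normally done with a dedicated pitch tracker; as a toy illustration, the f0 of a clean periodic signal can be estimated from its autocorrelation peak. The sketch below (synthetic signal, hypothetical function name) is not the study's method:

```python
import numpy as np

def estimate_f0(signal, sr, fmin=75.0, fmax=400.0):
    """Estimate fundamental frequency by picking the autocorrelation peak
    within the plausible pitch-period range. A toy estimator only."""
    sig = signal - np.mean(signal)
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # non-negative lags
    lag_min = int(sr / fmax)                                 # shortest period
    lag_max = int(sr / fmin)                                 # longest period
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag

sr = 16000
t = np.arange(0, 0.1, 1.0 / sr)
tone = np.sin(2 * np.pi * 200.0 * t)  # synthetic 200 Hz "voice"
f0 = estimate_f0(tone, sr)
```

Real pitch trackers add voicing detection, octave-error correction, and interpolation around the peak; formants (F1-F4) are estimated separately, typically by LPC analysis.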
Peter S Kaplan, Christina M Danko, Anna M Cejka, Kevin D Everhart
The hypothesis that the associative learning-promoting effects of infant-directed speech (IDS) depend on infants' social experience was tested in a conditioned-attention paradigm with a cumulative sample of 4- to 14-month-old infants. Following six forward pairings of a brief IDS segment and a photographic slide of a smiling female face, infants of clinically depressed mothers exhibited evidence of having acquired significantly weaker voice-face associations than infants of non-depressed mothers. Regression analyses revealed that maternal depression was significantly related to infant learning even after demographic correlates of depression, antidepressant medication use, and extent of pitch modulation in maternal IDS had been taken into account...
November 2015: Infant Behavior & Development
Ran Liu, Lori L Holt
Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners' baseline perceptual weighting of 2 acoustic dimensions (spectral quality and vowel duration) toward vowel categorization and examine how they subsequently adapt to an "artificial accent" that deviates from English norms in the correlation between the 2 dimensions...
December 2015: Journal of Experimental Psychology. Human Perception and Performance
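Perceptual cue weights of the sort probed above are often estimated by fitting listeners' binary categorization responses with logistic regression; the standardized coefficients then index each cue's relative weight. A self-contained sketch on synthetic responses (all data and names are hypothetical):

```python
import numpy as np

def cue_weights(spectral, duration, labels, lr=0.1, n_iter=2000):
    """Estimate perceptual cue weights with a simple logistic regression
    fit by gradient descent on standardized predictors (no sklearn)."""
    X = np.column_stack([
        (spectral - np.mean(spectral)) / np.std(spectral),
        (duration - np.mean(duration)) / np.std(duration),
    ])
    w = np.zeros(2)
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predicted P(category A)
        w -= lr * (X.T @ (p - labels)) / len(labels)
        b -= lr * np.mean(p - labels)
    return w  # |w[0]| vs |w[1]| indexes relative reliance on each cue

# Synthetic "listener" who relies mainly on spectral quality:
rng = np.random.default_rng(0)
spectral = rng.normal(size=200)
duration = rng.normal(size=200)
labels = (spectral + 0.2 * duration + 0.1 * rng.normal(size=200) > 0).astype(float)
w = cue_weights(spectral, duration, labels)
```

Adaptation to an "artificial accent" would then show up as a change in these fitted weights before versus after exposure.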
Takashi Mitsuya, Ewen N MacDonald, Kevin G Munhall, David W Purcell
Past studies have shown that speakers spontaneously adjust their speech acoustics in response to auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speakers' compensatory production using the English vowel /ɛ/ as in the word "head." Consistent behavioral observations have been reported, and there is lively discussion as to how the production system integrates auditory versus somatosensory feedback to control vowel production. However, different vowels involve different oral sensations and proprioceptive information due to differences in the degree of lingual contact or jaw openness...
July 2015: Journal of the Acoustical Society of America
Abin Kuruvilla-Mathew, Suzanne C Purdy, David Welch
OBJECTIVE: To investigate speech stimuli and background-noise-dependent changes in cortical auditory-evoked potentials (CAEPs) in unaided and aided conditions, and determine amplification effects on CAEPs. DESIGN: CAEPs to naturally produced syllables in quiet and in multi-talker babble were recorded, with and without a hearing aid in the right ear. At least 300 artifact-free trials for each participant were required to measure latencies and amplitudes of CAEPs...
2015: International Journal of Audiology
Gavin M Bidelman, Chia-Cheng Lee
Categorical perception (CP) represents a fundamental process in converting continuous speech acoustics into invariant percepts. Using scalp-recorded event-related brain potentials (ERPs), we investigated how tone-language experience and stimulus context influence the CP for lexical tones-pitch patterns used by a majority of the world's languages to signal word meaning. Stimuli were vowel pairs overlaid with a high-level tone (T1) followed by a pitch continuum spanning between dipping (T3) and rising (T2) contours of the Mandarin tonal space...
October 15, 2015: NeuroImage
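A stimulus continuum between two pitch contours, like the T3-to-T2 continuum described above, can be built by linear interpolation between f0 contours. The contour values below are stylized illustrations, not the study's stimuli:

```python
import numpy as np

def tone_continuum(contour_a, contour_b, n_steps):
    """Linearly interpolate between two f0 contours to build an
    n_steps-member continuum (step 0 = contour_a, last step = contour_b)."""
    contour_a = np.asarray(contour_a, float)
    contour_b = np.asarray(contour_b, float)
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1 - w) * contour_a + w * contour_b for w in weights]

# Stylized Mandarin contours over 5 time points (Hz; values are illustrative):
t3_dipping = [110.0, 95.0, 90.0, 100.0, 115.0]   # low-dipping Tone 3
t2_rising = [110.0, 115.0, 125.0, 140.0, 160.0]  # mid-rising Tone 2
continuum = tone_continuum(t3_dipping, t2_rising, 7)
```

Each interpolated contour would then be resynthesized onto the vowel (e.g., by pitch-synchronous resynthesis) so that only f0 varies across continuum steps.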
