Speech acoustics

https://www.readbyqxmd.com/read/28339140/acquisition-of-voice-onset-time-in-toddlers-at-high-and-low-risk-for-autism-spectrum-disorder
#1
Karen Chenausky, Helen Tager-Flusberg
Although language delay is common in autism spectrum disorder (ASD), research is equivocal on whether speech development is affected. We used acoustic methods to investigate the existence of sub-perceptual differences in the speech of toddlers who developed ASD. Development of the distinction between b and p was prospectively tracked in 22 toddlers at low risk for ASD (LRC), 22 at high risk for ASD without ASD (HRA-), and 11 at high risk for ASD who were diagnosed with ASD at 36 months (HRA+). Voice onset time (VOT), the main acoustic difference between b and p, was measured from spontaneously produced words at 18, 24, and 36 months... A rough sketch of how VOT can be estimated from a waveform follows this entry.
March 24, 2017: Autism Research: Official Journal of the International Society for Autism Research
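The quantity at the center of this study, voice onset time, is the lag between the release burst of a stop consonant and the onset of voicing. The sketch below is not the authors' procedure; it is a minimal, assumption-laden illustration (Python with NumPy/SciPy) that treats the largest short-time energy jump as the burst and the first subsequently periodic frame as voicing onset, with arbitrary window sizes and thresholds.

import numpy as np
from scipy.io import wavfile

def estimate_vot(path, hop_ms=5.0, win_ms=30.0, voicing_thresh=0.4):
    # Crude VOT estimate from a mono WAV file (illustration only).
    sr, x = wavfile.read(path)
    x = x.astype(float)
    x /= (np.abs(x).max() + 1e-9)
    hop, win = int(sr * hop_ms / 1000), int(sr * win_ms / 1000)
    starts = list(range(0, len(x) - win, hop))
    energy = np.array([np.sum(x[s:s + win] ** 2) for s in starts])
    burst = int(np.argmax(np.diff(energy)))          # biggest energy jump ~ burst release
    lo, hi = int(sr / 400), int(sr / 75)             # autocorrelation lags for ~75-400 Hz pitch
    for i in range(burst + 1, len(starts)):
        f = x[starts[i]:starts[i] + win]
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[win - 1:]
        if ac[0] > 0 and np.max(ac[lo:hi]) / ac[0] > voicing_thresh:
            return (i - burst) * hop_ms              # VOT estimate in milliseconds
    return None                                      # no voiced frame found after the burst

In a real analysis the burst and voicing onsets would be verified by hand or with a dedicated phonetic tool; the point here is only to make the VOT definition concrete.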
https://www.readbyqxmd.com/read/28338496/infants-and-adults-use-of-temporal-cues-in-consonant-discrimination
#2
Laurianne Cabrera, Lynne Werner
OBJECTIVES: Adults can use slow temporal envelope cues, or amplitude modulation (AM), to identify speech sounds in quiet. Faster AM cues and the temporal fine structure, or frequency modulation (FM), play a more important role in noise. This study assessed whether fast and slow temporal modulation cues play a similar role in infants' speech perception by comparing the ability of normal-hearing 3-month-olds and adults to use slow temporal envelope cues in discriminating consonant contrasts... A minimal sketch of extracting the slow temporal envelope follows this entry.
March 23, 2017: Ear and Hearing
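The "slow temporal envelope cues" referred to here are the low-rate amplitude modulations of the signal. As a hedged sketch (the 8 Hz cutoff and filter order are arbitrary assumptions, not values from the study), the envelope can be extracted with a Hilbert transform followed by a low-pass filter:

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def slow_envelope(x, sr, cutoff_hz=8.0):
    # Slow amplitude-modulation (temporal envelope) of a 1-D signal.
    env = np.abs(hilbert(np.asarray(x, float)))   # Hilbert envelope
    b, a = butter(4, cutoff_hz / (sr / 2))        # 4th-order low-pass
    return filtfilt(b, a, env)                    # zero-phase smoothing

Raising the cutoff (e.g. to 64 Hz) retains the faster AM cues discussed in the abstract, while the temporal fine structure is what remains in the band-limited carrier itself.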
https://www.readbyqxmd.com/read/28334352/the-human-neural-alpha-response-to-speech-is-a-proxy-of-attentional-control
#3
Malte Wöstmann, Sung-Joo Lim, Jonas Obleser
Human alpha (~10 Hz) oscillatory power is a prominent neural marker of cognitive effort. When listeners attempt to process and retain acoustically degraded speech, alpha power increases. It is unclear whether these alpha modulations reflect the degree of acoustic degradation per se or the degradation-driven demand on a listener's attentional control. Using an irrelevant-speech paradigm and measuring the electroencephalogram (EEG), the current experiment demonstrates that the neural alpha response to speech is a surprisingly clear proxy of top-down control, entirely driven by the listening goals of attending versus ignoring degraded speech... A minimal sketch of computing alpha-band power follows this entry.
March 18, 2017: Cerebral Cortex
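Alpha power in this context is the spectral power of the EEG near 10 Hz. A minimal sketch, assuming a single pre-cleaned EEG channel and using Welch's method from SciPy (the 8-12 Hz band and segment length are illustrative choices):

import numpy as np
from scipy.signal import welch

def alpha_power(eeg, sr, band=(8.0, 12.0)):
    # Mean power spectral density in the alpha band for one EEG channel.
    freqs, psd = welch(np.asarray(eeg, float), fs=sr, nperseg=int(2 * sr))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

Comparing this quantity between attend and ignore conditions is the kind of contrast the abstract describes, though the published analysis is of course more involved.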
https://www.readbyqxmd.com/read/28330464/spectral-temporal-eeg-dynamics-of-speech-discrimination-processing-in-infants-during-sleep
#4
Phillip M Gilley, Kristin Uhler, Kaylee Watson, Christine Yoshinaga-Itano
BACKGROUND: Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that paradigms such as the mismatch response or mismatch negativity are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants who have not yet acquired speech and language... A minimal sketch of the standard/deviant difference-wave computation follows this entry.
March 22, 2017: BMC Neuroscience
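In an oddball paradigm the discrimination measure is typically the difference between the averaged responses to deviant and standard sounds. A minimal single-channel sketch (epoch limits, baseline window, and the absence of artifact rejection are all simplifying assumptions):

import numpy as np

def mismatch_wave(eeg, sr, std_onsets, dev_onsets, tmin=-0.1, tmax=0.5):
    # Deviant-minus-standard difference wave for one EEG channel.
    # eeg: 1-D array of samples; onsets: stimulus times in seconds.
    pre, post = int(round(-tmin * sr)), int(round(tmax * sr))
    def erp(onsets):
        epochs = []
        for t in onsets:
            i = int(round(t * sr))
            if i - pre >= 0 and i + post <= len(eeg):
                ep = np.asarray(eeg[i - pre:i + post], float)
                epochs.append(ep - ep[:pre].mean())   # baseline-correct to the pre-stimulus mean
        return np.mean(epochs, axis=0)
    return erp(dev_onsets) - erp(std_onsets)

A real infant-EEG pipeline adds filtering, artifact rejection, and statistics across subjects, but the standard-versus-deviant subtraction is the core of the mismatch measure.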
https://www.readbyqxmd.com/read/28321179/neurophysiological-and-behavioral-responses-of-mandarin-lexical-tone-processing
#5
Yan H Yu, Valerie L Shafer, Elyse S Sussman
Language experience enhances discrimination of speech contrasts at a behavioral-perceptual level, as well as at a pre-attentive level, as indexed by event-related potential (ERP) mismatch negativity (MMN) responses. The enhanced sensitivity could be the result of changes in acoustic resolution and/or long-term memory representations of the relevant information in the auditory cortex. To examine these possibilities, we used a short (ca. 600 ms) vs. long (ca. 2,600 ms) interstimulus interval (ISI) in a passive, oddball discrimination task while obtaining ERPs...
2017: Frontiers in Neuroscience
https://www.readbyqxmd.com/read/28320669/regularized-speaker-adaptation-of-kl-hmm-for-dysarthric-speech-recognition
#6
Myungjong Kim, Younggwan Kim, Joohong Yoo, Jun Wang, Hoirin Kim
This paper addresses the problem of recognizing speech uttered by patients with dysarthria, a motor speech disorder that impedes the physical production of speech. Patients with dysarthria have articulatory limitations and therefore often have trouble pronouncing certain sounds, resulting in undesirable phonetic variation. Modern automatic speech recognition systems designed for regular speakers are ineffective for speakers with dysarthria because of this phonetic variation. To capture the phonetic variation, a Kullback-Leibler divergence-based hidden Markov model (KL-HMM) is adopted, in which the emission probability of each state is parametrized by a categorical distribution using phoneme posterior probabilities obtained from a deep neural network-based acoustic model... A minimal sketch of the KL-based local score follows this entry.
March 13, 2017: IEEE Transactions on Neural Systems and Rehabilitation Engineering
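In a KL-HMM, each state stores a categorical distribution over phoneme classes, and the local match between a state and a frame is a Kullback-Leibler divergence computed against the DNN's phoneme-posterior vector for that frame. A minimal sketch of that local score (the direction of the divergence varies across KL-HMM formulations, so treat this choice as an assumption):

import numpy as np

def kl_local_score(state_dist, posterior, eps=1e-12):
    # KL(state || posterior): cost of emitting a DNN phoneme-posterior vector
    # from a KL-HMM state that holds a categorical distribution.
    y = np.clip(np.asarray(state_dist, float), eps, None)
    z = np.clip(np.asarray(posterior, float), eps, None)
    return float(np.sum(y * np.log(y / z)))

Stacking these scores into a frames-by-states cost matrix lets a standard Viterbi pass do the decoding; the paper's contribution concerns how the state distributions are adapted to a dysarthric speaker with regularization.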
https://www.readbyqxmd.com/read/28314241/characterizing-articulation-in-apraxic-speech-using-real-time-magnetic-resonance-imaging
#7
Christina Hagedorn, Michael Proctor, Louis Goldstein, Stephen M Wilson, Bruce Miller, Maria Luisa Gorno-Tempini, Shrikanth S Narayanan
Purpose: Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and by acoustic and kinematic data. An analysis of apraxic speech errors within a dynamic systems framework is provided, and the nature of the pathomechanisms of apraxic speech is discussed. Method: One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors...
March 17, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/28303412/auditory-enhancement-in-cochlear-implant-users-under-simultaneous-and-forward-masking
#8
Heather A Kreft, Andrew J Oxenham
Auditory enhancement is the phenomenon whereby the salience or detectability of a target sound within a masker is enhanced by the prior presentation of the masker alone. Enhancement has been demonstrated using both simultaneous and forward masking in normal-hearing listeners and may play an important role in auditory and speech perception within complex and time-varying acoustic environments. The few studies of enhancement in hearing-impaired listeners have reported reduced or absent enhancement effects under forward masking, suggesting a potentially peripheral locus of the effect...
March 16, 2017: Journal of the Association for Research in Otolaryngology: JARO
https://www.readbyqxmd.com/read/28303288/-technical-advancements-in-cochlear-implants-state-of-the-art
#9
A Büchner, L Gärtner
Twenty years ago, cochlear implants (CIs) were indicated only in cases of profound hearing loss or complete deafness. While from today's perspective the technology was clumsy and provided patients with only limited speech comprehension in quiet listening conditions, successive advances in CI technology and the consequent substantial hearing improvements over time have since resulted in a continuous relaxation of the indication criteria toward residual hearing. While achievements in implant and processor electronics have been one key factor in the ever-improving hearing performance, the development of electro-acoustic CI systems, together with atraumatic implantation concepts, has led to enormous improvements in patients with low-frequency residual hearing...
March 16, 2017: HNO
https://www.readbyqxmd.com/read/28301854/pre-low-raising-in-japanese-pitch-accent
#10
Albert Lee, Santitham Prom-On, Yi Xu
Japanese has been observed to have 2 versions of the H tone, the higher of which is associated with an accented mora. However, the distinction between these 2 versions surfaces only in context, not in isolation, leading to a long-standing debate over whether there is 1 H tone or 2. This article reports evidence that the higher version may result from a pre-low raising mechanism rather than being inherently higher. The evidence is based on an analysis of F0 of words that varied in length, accent condition, and syllable structure, produced by native speakers of Japanese at 2 speech rates...
March 17, 2017: Phonetica
https://www.readbyqxmd.com/read/28301392/combined-electric-and-acoustic-stimulation-with-hearing-preservation-effect-of-cochlear-implant-low-frequency-cutoff-on-speech-understanding-and-perceived-listening-difficulty
#11
René H Gifford, Timothy J Davis, Linsey W Sunderhaus, Christine Menapace, Barbara Buck, Jillian Crosson, Lori O'Neill, Anne Beiter, Phil Segel
OBJECTIVE: The primary objective of this study was to assess the effect of electric and acoustic overlap for speech understanding in typical listening conditions using semidiffuse noise. DESIGN: This study used a within-subjects, repeated measures design including 11 experienced adult implant recipients (13 ears) with functional residual hearing in the implanted and nonimplanted ear. The aided acoustic bandwidth was fixed and the low-frequency cutoff for the cochlear implant (CI) was varied systematically...
March 15, 2017: Ear and Hearing
https://www.readbyqxmd.com/read/28301390/pre-and-postoperative-binaural-unmasking-for-bimodal-cochlear-implant-listeners
#12
Benjamin M Sheffield, Gerald Schuchman, Joshua G W Bernstein
OBJECTIVES: Cochlear implants (CIs) are increasingly recommended to individuals with residual bilateral acoustic hearing. Although new hearing-preserving electrode designs and surgical approaches show great promise, CI recipients are still at risk of losing acoustic hearing in the implanted ear, which could eliminate their ability to take advantage of binaural unmasking to aid speech recognition in noise. This study examined the tradeoff between the benefits of a CI for speech understanding in noise and the potential loss of binaural unmasking for CI recipients with some bilateral preoperative acoustic hearing...
March 15, 2017: Ear and Hearing
https://www.readbyqxmd.com/read/28300957/dosage-dependent-effect-of-high-resistance-straw-exercise-in-dysphonic-and-non-dysphonic-women
#13
Sabrina Mazzer Paes, Mara Behlau
Purpose: To study the dosage-dependent effect of high-resistance straw exercise in women with behavioral dysphonia and in vocally healthy women. Methods: 25 dysphonic women (DG), with an average age of 35 years (SD = 10.5), and 30 vocally healthy women (VHG), with an average age of 31.6 years (SD = 10.3), participated. The participants produced a continuous sound into a thin high-resistance straw for seven minutes, with interruptions after the first, third, fifth, and seventh minutes. At each interval, speech samples were recorded (sustained vowel and counting up to 20) and subsequently analyzed acoustically...
March 9, 2017: CoDAS
https://www.readbyqxmd.com/read/28292666/hierarchical-organization-in-the-temporal-structure-of-infant-direct-speech-and-song
#14
Simone Falk, Christopher T Kello
Caregivers alter the temporal structure of their utterances when talking and singing to infants compared with adult communication. The present study tested whether temporal variability in infant-directed registers serves to emphasize the hierarchical temporal structure of speech. Fifteen German-speaking mothers sang a play song and told a story to their 6-month-old infants, or to an adult. Recordings were analyzed using a recently developed method that determines the degree of nested clustering of temporal events in speech... A minimal sketch of one such clustering measure follows this entry.
March 11, 2017: Cognition
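One standard way to quantify nested clustering of event times across timescales is the Allan factor, which compares event counts in adjacent windows of a given size; whether this is exactly the measure used by the authors is an assumption here, so the sketch below is illustrative only:

import numpy as np

def allan_factor(event_times, window_s):
    # Allan factor of a point process at one timescale: ~1 for Poisson-like
    # timing, growing with window size when events are hierarchically clustered.
    t0, t1 = min(event_times), max(event_times)
    edges = np.arange(t0, t1, window_s)
    counts, _ = np.histogram(event_times, bins=edges)
    d = np.diff(counts)
    return float(np.mean(d ** 2) / (2.0 * np.mean(counts) + 1e-12))

Evaluating this over a range of window sizes (say 0.03 s to 3 s) yields a curve whose growth reflects how strongly syllable- and phrase-scale events are nested within one another.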
https://www.readbyqxmd.com/read/28292001/-implantable-bone-conduction-and-active-middle-ear-devices
#15
Markus Pirlich, Andreas Dietz, Sylvia Meuret, Mathias Hofer
Where audiological and/or anatomical limitations prevent the provision of conventional hearing aids, semi- or fully implantable hearing systems represent a modern therapeutic alternative. These hearing systems are classified according to their mode of action: active middle ear implants stimulate the auditory ossicles or the round window; bone conduction devices stimulate the skull directly; cochlear implants provide direct acoustic stimulation of the cochlea and its auditory nerve; and auditory brainstem implants bypass the peripheral auditory structures...
February 2017: Laryngo- Rhino- Otologie
https://www.readbyqxmd.com/read/28291832/electrophysiological-and-hemodynamic-mismatch-responses-in-rats-listening-to-human-speech-syllables
#16
Mahdi Mahmoudzadeh, Ghislaine Dehaene-Lambertz, Fabrice Wallois
Speech is a complex auditory stimulus that is processed on several time-scales. Whereas consonant discrimination requires resolving rapid acoustic events, voice perception relies on slower cues. Humans, even at preterm ages, are particularly efficient at encoding temporal cues. To compare the capacities of preterm infants to those observed in other mammals, we tested anesthetized adult rats using exactly the same paradigm as that used with preterm neonates. We simultaneously recorded neural (using ECoG) and hemodynamic responses (using fNIRS) to series of human speech syllables and investigated the brain response to a change of consonant (ba vs...
2017: PloS One
https://www.readbyqxmd.com/read/28290243/motif-discovery-in-speech-application-to-monitoring-alzheimer-s-disease
#17
Peter Garrard, Vanda Nemes, Dragana Nikolic, Anna Barney
BACKGROUND: Perseveration - repetition of words, phrases or questions in speech - is commonly described in Alzheimer's disease (AD). Measuring perseveration is difficult, but it may index cognitive performance, aiding diagnosis and disease monitoring. Continuous recording of speech would produce a large quantity of data requiring painstaking manual analysis, and would risk violating patients' and others' privacy. A secure record and an automated approach to analysis are required. OBJECTIVES: To record bone-conducted acoustic energy fluctuations from a subject's vocal apparatus using an accelerometer, to describe the recording and analysis stages in detail, and to demonstrate that the approach is feasible in AD... A naive illustration of motif discovery on a 1-D signal follows this entry.
March 9, 2017: Current Alzheimer Research
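"Motif discovery" here means finding recurring, near-identical stretches in a long 1-D recording, which is how repeated phrases would surface in an energy signal without any transcription. The following is a deliberately naive O(n^2) illustration, not the authors' algorithm, using a z-normalized Euclidean distance between subsequences:

import numpy as np

def znorm(w):
    return (w - w.mean()) / (w.std() + 1e-9)

def best_motif_pair(x, m):
    # Indices of the two most similar non-overlapping length-m subsequences of x.
    x = np.asarray(x, float)
    n = len(x) - m + 1
    windows = [znorm(x[i:i + m]) for i in range(n)]
    best, pair = np.inf, (0, 0)
    for i in range(n):
        for j in range(i + m, n):            # enforce non-overlap
            d = np.linalg.norm(windows[i] - windows[j])
            if d < best:
                best, pair = d, (i, j)
    return pair, best

Practical systems use far faster indexing schemes, but the output is the same in spirit: candidate repeated segments that can then be screened as possible perseverations.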
https://www.readbyqxmd.com/read/28287041/noise-disturbance-in-open-plan-study-environments-a-field-study-on-noise-sources-student-tasks-and-room-acoustic-parameters
#18
Ella Braat-Eggen, Anne van Heijst, Maarten Hornikx, Armin Kohlrausch
The aim of this study is to gain more insight into the assessment of noise in open-plan study environments and to reveal correlations between the noise disturbance experienced by students and the noise sources they perceive, the tasks they perform, and the acoustic parameters of the open-plan study environment they work in. Data were collected in five open-plan study environments at universities in the Netherlands. A questionnaire was used to investigate student tasks, perceived sound sources and their perceived disturbance, and sound measurements were performed to determine the room acoustic parameters...
March 13, 2017: Ergonomics
https://www.readbyqxmd.com/read/28284736/contributions-of-sensory-tuning-to-auditory-vocal-interactions-in-marmoset-auditory-cortex
#19
Steven J Eliades, Xiaoqin Wang
During speech, humans continuously listen to their own vocal output to ensure accurate communication. Such self-monitoring is thought to require the integration of information about the feedback of vocal acoustics with internal motor control signals. The neural mechanism of this auditory-vocal interaction remains largely unknown at the cellular level. Previous studies in naturally vocalizing marmosets have demonstrated diverse neural activities in auditory cortex during vocalization, dominated by a vocalization-induced suppression of neural firing...
March 8, 2017: Hearing Research
https://www.readbyqxmd.com/read/28281035/acoustic-context-alters-vowel-categorization-in-perception-of-noise-vocoded-speech
#20
Christian E Stilp
Normal-hearing listeners' speech perception is widely influenced by spectral contrast effects (SCEs), where perception of a given sound is biased away from stable spectral properties of preceding sounds. Despite this influence, it is not clear how these contrast effects affect speech perception for cochlear implant (CI) users, whose spectral resolution is notoriously poor. This knowledge is important for understanding how CIs might better encode key spectral properties of the listening environment. Here, SCEs were measured in normal-hearing listeners using noise-vocoded speech to simulate poor spectral resolution... A minimal noise-vocoding sketch follows this entry.
March 9, 2017: Journal of the Association for Research in Otolaryngology: JARO
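Noise-vocoded speech is built by splitting the signal into a small number of frequency bands, extracting each band's temporal envelope, and using those envelopes to modulate band-limited noise, which mimics the coarse spectral resolution of a cochlear implant. A minimal sketch (band edges, filter order, and the lack of envelope smoothing are illustrative assumptions):

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, sr, band_edges_hz):
    # Replace each band's carrier with noise while keeping its temporal envelope.
    x = np.asarray(x, float)
    out = np.zeros_like(x)
    noise = np.random.randn(len(x))
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        b, a = butter(4, [lo / (sr / 2), hi / (sr / 2)], btype="band")
        env = np.abs(hilbert(filtfilt(b, a, x)))    # band envelope
        out += env * filtfilt(b, a, noise)          # envelope-modulated band noise
    return out / (np.abs(out).max() + 1e-9)

# e.g. noise_vocode(x, 16000, [100, 300, 700, 1500, 3000, 6000]) gives a 5-band vocoding

Fewer bands means poorer simulated spectral resolution, which is the usual way such experiments vary the fidelity of the spectral cues available to listeners.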