Speech acoustics

https://www.readbyqxmd.com/read/29929777/french-society-of-ent-sforl-guidelines-short-version-audiometry-in-adults-and-children
#1
V Favier, C Vincent, É Bizaguet, D Bouccara, R Dauman, B Frachet, F Le Her, C Meyer-Bisch, S Tronche, F Sterkers-Artières, F Venail
INTRODUCTION: To present the French Society of ENT (SFORL) good practice guidelines for audiometric examination in adults and children. METHODS: A multidisciplinary working group performed a review of the scientific literature. Guidelines were drawn up, reviewed by an independent reading group, and finalized in a consensus meeting. RESULTS: Audiometry should be performed in an acoustically controlled environment (<30 dBA); audiometer calibration should be regularly checked; and patient-specific masking rules should be systematically applied...
June 18, 2018: European Annals of Otorhinolaryngology, Head and Neck Diseases
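The masking rules mentioned in #1 can be illustrated with a toy decision function. A minimal sketch, assuming supra-aural earphones with roughly 40 dB of interaural attenuation; the function name and example values are illustrative, not taken from the SFORL guideline:

```python
# Toy illustration of a common pure-tone masking rule (not the SFORL text itself):
# contralateral masking is indicated when the tone presented to the test ear is
# loud enough to cross the head and reach the non-test ear by bone conduction.

INTERAURAL_ATTENUATION_DB = 40  # typical assumption for supra-aural earphones

def needs_contralateral_masking(ac_test_ear_db, bc_nontest_ear_db,
                                interaural_attenuation_db=INTERAURAL_ATTENUATION_DB):
    """True if cross-hearing is possible, i.e. the air-conduction level in the
    test ear minus the interaural attenuation reaches the non-test ear's
    bone-conduction threshold."""
    return ac_test_ear_db - interaural_attenuation_db >= bc_nontest_ear_db

# Example: 70 dB HL in the test ear vs. a 20 dB HL bone-conduction threshold
# in the non-test ear: 70 - 40 = 30 >= 20, so masking is required.
print(needs_contralateral_masking(70, 20))  # True
```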
https://www.readbyqxmd.com/read/29928195/training-in-temporal-information-processing-ameliorates-phonetic-identification
#2
Aneta Szymaszek, Anna Dacewicz, Paulina Urban, Elzbieta Szelag
Many studies have revealed a link between temporal information processing (TIP) in the millisecond range and speech perception. Previous studies indicated a dysfunction in TIP accompanied by deficient phonemic hearing in children with specific language impairment (SLI). In this study we focus on phonetic identification in SLI, using the voice-onset-time (VOT) phenomenon, in which TIP is built in. VOT is crucial for speech perception, as stop consonants (like /t/ vs. /d/) may be distinguished by the acoustic time difference between the onset of the consonant (stop release burst) and the following vibration of the vocal folds (voicing)...
2018: Frontiers in Human Neuroscience
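VOT itself is just a time difference that can be read off an annotated waveform. A minimal sketch of the measure described in #2, assuming hand-labeled burst and voicing onsets; the ~30 ms category boundary is an illustrative English-like assumption, not a value from the paper:

```python
# Voice-onset time (VOT): the interval between the stop release burst and the
# onset of vocal-fold vibration. Short-lag VOT tends to be heard as voiced
# (/d/), long-lag as voiceless (/t/).

def vot_ms(burst_onset_s, voicing_onset_s):
    """VOT in milliseconds from two annotated time points (in seconds)."""
    return (voicing_onset_s - burst_onset_s) * 1000.0

def classify_alveolar_stop(vot, boundary_ms=30.0):
    """Toy categorization: below the boundary hear /d/, above it hear /t/."""
    return "/d/" if vot < boundary_ms else "/t/"

vot = vot_ms(burst_onset_s=0.512, voicing_onset_s=0.575)
print(vot, classify_alveolar_stop(vot))  # ~63 ms -> /t/
```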
https://www.readbyqxmd.com/read/29916785/cortical-tracking-of-global-and-local-variations-of-speech-rhythm-during-connected-natural-speech-perception
#3
Anna Maria Alexandrou, Timo Saarinen, Jan Kujala, Riitta Salmelin
During natural speech perception, listeners must track the global speaking rate, that is, the overall rate of incoming linguistic information, as well as transient, local speaking rate variations occurring within the global speaking rate. Here, we address the hypothesis that this tracking mechanism is achieved through coupling of cortical signals to the amplitude envelope of the perceived acoustic speech signals. Cortical signals were recorded with magnetoencephalography (MEG) while participants perceived spontaneously produced speech stimuli at three global speaking rates (slow, normal/habitual, and fast)...
June 19, 2018: Journal of Cognitive Neuroscience
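The coupling described in #3 is commonly quantified as coherence between the cortical signal and the Hilbert amplitude envelope of the speech audio. A minimal sketch with synthetic signals; the sampling rate, modulation rate, and noise level are assumptions, not the paper's analysis pipeline:

```python
import numpy as np
from scipy.signal import hilbert, coherence

fs = 1000                            # assumed common analysis rate, in Hz
t = np.arange(0, 10, 1 / fs)

# Stand-in "speech": a carrier amplitude-modulated at a syllable-like 4 Hz.
speech = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)
envelope = np.abs(hilbert(speech))   # amplitude envelope of the acoustics

# Stand-in "cortical" signal: envelope-following activity buried in noise.
meg = 0.5 * envelope + np.random.randn(len(t))

f, cxy = coherence(envelope, meg, fs=fs, nperseg=2048)
print("peak coherence at %.1f Hz" % f[np.argmax(cxy)])  # near the 4 Hz rate
```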
https://www.readbyqxmd.com/read/29912820/hearing-preservation-outcomes-after-cochlear-implantation-depending-on-the-angle-of-insertion-indication-for-electric-or-electric-acoustic-stimulation
#4
Silke Helbig, Youssef Adel, Martin Leinung, Timo Stöver, Uwe Baumann, Tobias Weissgerber
OBJECTIVE: This study reviewed outcomes of hearing preservation (HP) surgery depending on the angle of insertion (AOI) in a cochlear implant (CI) patient population who used electric stimulation (ES) or combined electric-acoustic stimulation (EAS). STUDY DESIGN: Retrospective case review. SETTING: Tertiary referral university hospital. PATIENTS: Ninety-one patients with different degrees of preoperative low-frequency residual hearing who underwent HP surgery with a free-fitting lateral-wall electrode array (MED-EL Flex) with lengths ranging from 20...
June 15, 2018: Otology & Neurotology
https://www.readbyqxmd.com/read/29911176/differences-in-hearing-acuity-among-normal-hearing-young-adults-modulate-the-neural-basis-for-speech-comprehension
#5
Yune S Lee, Arthur Wingfield, Nam-Eun Min, Ethan Kotloff, Murray Grossman, Jonathan E Peelle
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18-41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause...
May 2018: ENeuro
https://www.readbyqxmd.com/read/29902588/anterior-paracingulate-and-cingulate-cortex-mediates-the-effects-of-cognitive-load-on-speech-sound-discrimination
#6
Silvia P Gennari, Rebecca E Millman, Mark Hymers, Sven L Mattys
Perceiving speech while performing another task is a common challenge in everyday life. How the brain controls resource allocation during speech perception remains poorly understood. Using functional magnetic resonance imaging (fMRI), we investigated the effect of cognitive load on speech perception by examining brain responses of participants performing a phoneme discrimination task and a visual working memory task simultaneously. The visual task involved holding either a single meaningless image in working memory (low cognitive load) or four different images (high cognitive load)...
June 11, 2018: NeuroImage
https://www.readbyqxmd.com/read/29900799/influence-of-multi-microphone-signal-enhancement-algorithms-on-the-acoustics-and-detectability-of-angular-and-radial-source-movements
#7
Micha Lundbeck, Laura Hartog, Giso Grimm, Volker Hohmann, Lars Bramsløw, Tobias Neher
Hearing-impaired listeners are known to have difficulties not only with understanding speech in noise but also with judging source distance and movement, and these deficits are related to perceived handicap. It is possible that the perception of spatially dynamic sounds can be improved with hearing aids (HAs), but so far this has not been investigated. In a previous study, older hearing-impaired listeners showed poorer detectability for virtual left-right (angular) and near-far (radial) source movements due to lateral interfering sounds and reverberation, respectively...
January 2018: Trends in Hearing
https://www.readbyqxmd.com/read/29898053/masking-level-difference-in-schoolchildren-environmental-analysis
#8
Quemile Pribs Martins, Vivian Amaral Faccin, Mirtes Brückmann, Daniela Gil, Michele Vargas Garcia
PURPOSE: To investigate the auditory ability of selective attention in schoolchildren, to establish reference values for the Masking Level Difference test in the seven-to-ten-year age group, and to determine whether parents' schooling and family income influence test results. METHODS: Thirty-one schoolchildren who met the eligibility criteria participated in the study (20 female and 11 male). An anamnesis was conducted to record family income and the parents' schooling; we also performed visual inspection of the external acoustic meatus, pure-tone audiometry, speech audiometry, acoustic immittance measures, the Dichotic Digits test, and the Masking Level Difference test...
June 11, 2018: CoDAS
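The statistic behind the test in #8 is a simple difference between two detection thresholds. A minimal sketch, assuming thresholds have already been measured in the homophasic (SoNo) and antiphasic (SpiNo) conditions; the example values are illustrative:

```python
# Masking Level Difference (MLD): the binaural release from masking, expressed
# as the threshold improvement when the signal is phase-inverted in one ear
# (antiphasic, SpiNo) relative to the identical-phase condition (SoNo).

def masking_level_difference(threshold_sono_db, threshold_spino_db):
    """MLD in dB; larger values indicate a greater binaural advantage."""
    return threshold_sono_db - threshold_spino_db

# Example: detection at -14 dB SNR homophasic vs. -24 dB SNR antiphasic.
print(masking_level_difference(-14, -24))  # 10 dB
```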
https://www.readbyqxmd.com/read/29897999/development-of-fricative-sound-perception-in-korean-infants-the-role-of-language-experience-and-infants-initial-sensitivity
#9
Minha Shin, Youngon Choi, Reiko Mazuka
In this paper, we report data on the development of Korean infants' perception of a rare fricative phoneme distinction. Korean fricative consonants have received much interest in the linguistic community due to the language's distinct categorization of sounds. Unlike many fricative contrasts utilized in most of the world's languages, Korean fricatives (/s*/-/s/) are all voiceless. Moreover, compared with other sound categories, fricatives have received very little attention in the speech perception development field and no studies thus far have examined Korean infants' development of native phonology in this domain...
2018: PloS One
https://www.readbyqxmd.com/read/29896086/reduced-performance-during-a-sentence-repetition-task-by-continuous-theta-burst-magnetic-stimulation-of-the-pre-supplementary-motor-area
#10
Susanne Dietrich, Ingo Hertrich, Florian Müller-Dahlhaus, Hermann Ackermann, Paolo Belardinelli, Debora Desideri, Verena C Seibold, Ulf Ziemann
The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of pre-SMA was tested for accelerated speech comprehension by inducing a transient "virtual lesion" using continuous theta-burst stimulation (cTBS). Participants were tested (1) prior to (pre-baseline), (2) 10 min after (test condition for the cTBS effect), and (3) 60 min after stimulation (post-baseline) using a sentence repetition task (formant-synthesized at rates of 8, 10, 12, 14, and 16 syllables/s)...
2018: Frontiers in Neuroscience
https://www.readbyqxmd.com/read/29891885/real-life-speech-production-and-perception-have-a-shared-premotor-cortical-substrate
#11
Olga Glanz Iljina, Johanna Derix, Rajbir Kaur, Andreas Schulze-Bonhage, Peter Auer, Ad Aertsen, Tonio Ball
Motor-cognitive accounts assume that the articulatory cortex is involved in language comprehension, but previous studies may have observed such an involvement as an artefact of experimental procedures. Here, we employed electrocorticography (ECoG) during natural, non-experimental behavior combined with electrocortical stimulation mapping to study the neural basis of real-life human verbal communication. We took advantage of ECoG's ability to capture high-gamma activity (70-350 Hz) as a spatially and temporally precise index of cortical activation during unconstrained, naturalistic speech production and perception conditions...
June 11, 2018: Scientific Reports
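High-gamma activity of the kind used in #11 is typically extracted by band-pass filtering the ECoG channel and taking its analytic amplitude. A minimal sketch; the 70-350 Hz band comes from the abstract, while the sampling rate, filter order, and random stand-in data are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                          # assumed ECoG sampling rate, in Hz
ecog = np.random.randn(10 * fs)    # stand-in for one channel, 10 s

# Band-pass to the high-gamma range reported in the paper (70-350 Hz), then
# take the analytic amplitude as a moment-by-moment index of activation.
b, a = butter(4, [70, 350], btype="bandpass", fs=fs)
high_gamma = np.abs(hilbert(filtfilt(b, a, ecog)))
```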
https://www.readbyqxmd.com/read/29888940/acoustic-foundations-of-the-speech-to-song-illusion
#12
Adam Tierney, Aniruddh D Patel, Mara Breen
In the "speech-to-song illusion," certain spoken phrases are heard as highly song-like when isolated from context and repeated. This phenomenon occurs to a greater degree for some stimuli than for others, suggesting that particular cues prompt listeners to perceive a spoken phrase as song. Here we investigated the nature of these cues across four experiments. In Experiment 1, participants were asked to rate how song-like spoken phrases were after each of eight repetitions. Initial ratings were correlated with the consistency of an underlying beat and within-syllable pitch slope, while rating change was linked to beat consistency, within-syllable pitch slope, and melodic structure...
June 2018: Journal of Experimental Psychology. General
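One of the cues in #12, within-syllable pitch slope, can be estimated by fitting a line to each syllable's f0 track. A minimal sketch, assuming an f0 contour has already been extracted by a pitch tracker; the contour values are illustrative:

```python
import numpy as np

def pitch_slope_st_per_s(f0_hz, times_s):
    """Within-syllable pitch slope in semitones per second, from a linear fit
    over an f0 track sampled at the given times."""
    f0 = np.asarray(f0_hz, dtype=float)
    semitones = 12 * np.log2(f0 / f0[0])   # Hz -> semitones re syllable onset
    slope, _ = np.polyfit(np.asarray(times_s), semitones, 1)
    return slope

# Example: a syllable whose f0 falls from 220 Hz to 180 Hz over 250 ms.
t = np.linspace(0.0, 0.25, 25)
f0 = np.linspace(220.0, 180.0, 25)
print(pitch_slope_st_per_s(f0, t))  # about -14 semitones/s (falling contour)
```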
https://www.readbyqxmd.com/read/29880843/recurrent-development-of-song-idiosyncrasy-without-auditory-inputs-in-the-canary-an-open-ended-vocal-learner
#13
Chihiro Mori, Wan-Chun Liu, Kazuhiro Wada
Complex learned behaviors, like bird song and human speech, develop under the influence of both genetic and environmental factors. Accordingly, learned behaviors exhibit both species specificity and individual variability. Auditory information plays a critical role in vocal learning by songbirds, both for memorizing tutor songs and for monitoring their own vocalizations. Nevertheless, audition-deprived songbirds develop structured, species-specific song patterns. It remains to be elucidated how auditory input contributes to the development of individual variability in song characteristics...
June 7, 2018: Scientific Reports
https://www.readbyqxmd.com/read/29879142/short-term-adaptation-to-sound-statistics-is-unimpaired-in-developmental-dyslexia
#14
Yafit Gabay, Lori L Holt
Developmental dyslexia is presumed to arise from phonological impairments. Accordingly, people with dyslexia show speech perception deficits taken as indication of impoverished phonological representations. However, the nature of speech perception deficits in those with dyslexia remains elusive. Specifically, there is no agreement as to whether speech perception deficits arise from speech-specific processing impairments, or from general auditory impairments that might be either specific to temporal processing or more general...
2018: PloS One
https://www.readbyqxmd.com/read/29878842/why-are-background-telephone-conversations-distracting
#15
John E Marsh, Robert Ljung, Helena Jahncke, Douglas MacCutcheon, Florian Pausch, Linden J Ball, François Vachon
Telephone conversation is ubiquitous within the office setting. Overhearing a telephone conversation, whereby only one of the two speakers is heard, is subjectively more annoying and objectively more distracting than overhearing a full conversation. The present study sought to determine whether this "halfalogue" effect is attributable to unexpected offsets and onsets within the background speech (acoustic unexpectedness) or to the tendency to predict the unheard part of the conversation (semantic [un]predictability), and whether these effects can be shielded against through top-down cognitive control...
June 2018: Journal of Experimental Psychology. Applied
https://www.readbyqxmd.com/read/29871826/brainstem-cortical-functional-connectivity-for-speech-is-differentially-challenged-by-noise-and-reverberation
#16
Gavin M Bidelman, Mary Katherine Davis, Megan H Pridgen
Everyday speech perception is challenged by external acoustic interferences that hinder verbal communication. Here, we directly compared how different levels of the auditory system (brainstem vs. cortex) code speech and how their neural representations are affected by two acoustic stressors: noise and reverberation. We recorded multichannel (64 ch) brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) simultaneously in normal-hearing individuals to speech sounds presented in mild and moderate levels of noise and reverb...
May 26, 2018: Hearing Research
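The two acoustic stressors compared in #16 can be simulated directly: additive noise at a target SNR, and reverberation as convolution with a room impulse response. A minimal sketch using a synthetic exponentially decaying impulse response; the SNRs, RT60, and white-noise "speech" stand-in are assumptions, not the study's stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
speech = rng.standard_normal(fs)       # stand-in for a 1 s speech token

def add_noise(x, snr_db):
    """Mix in white noise scaled to the requested signal-to-noise ratio."""
    noise = rng.standard_normal(len(x))
    noise *= np.sqrt(x.var() / (noise.var() * 10 ** (snr_db / 10)))
    return x + noise

def add_reverb(x, rt60_s=0.6):
    """Convolve with a toy impulse response decaying ~60 dB over rt60_s."""
    n = int(rt60_s * fs)
    ir = rng.standard_normal(n) * np.exp(-6.9 * np.arange(n) / n)
    return np.convolve(x, ir)[: len(x)]

mild = add_noise(speech, snr_db=10)            # mild stressor: noise only
moderate = add_reverb(add_noise(speech, 0))    # noise plus reverberation
```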
https://www.readbyqxmd.com/read/29867690/emotional-connotations-of-musical-instrument-timbre-in-comparison-with-emotional-speech-prosody-evidence-from-acoustics-and-event-related-potentials
#17
Xiaoluan Liu, Yi Xu, Kai Alter, Jyrki Tuomainen
Music and speech both communicate emotional meanings in addition to their domain-specific contents. But it is not clear whether and how the two kinds of emotional meanings are linked. The present study explores the emotional connotations of musical timbre in isolated instrument sounds from the perspective of emotional speech prosody. The stimuli were isolated instrument sounds and emotional speech prosody that listeners categorized as anger, happiness, or sadness. We first analyzed the timbral features of the stimuli, which showed that the relations between the three emotions were relatively consistent across those features for speech and music...
2018: Frontiers in Psychology
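Timbral features of the sort analyzed in #17 often include the spectral centroid, the amplitude-weighted mean frequency of a sound. A minimal sketch; the test tone is illustrative:

```python
import numpy as np

def spectral_centroid_hz(x, fs):
    """Amplitude-weighted mean frequency, a standard timbre descriptor
    associated with perceived brightness."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# Sanity check: a pure 440 Hz tone has its centroid at the tone frequency.
fs = 16000
t = np.arange(fs) / fs
print(spectral_centroid_hz(np.sin(2 * np.pi * 440 * t), fs))  # ~440
```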
https://www.readbyqxmd.com/read/29867686/visual-speech-perception-cues-constrain-patterns-of-articulatory-variation-and-sound-change
#18
Jonathan Havenhill, Youngah Do
What are the factors that contribute to (or inhibit) diachronic sound change? While acoustically motivated sound changes are well-documented, research on the articulatory and audiovisual-perceptual aspects of sound change is limited. This paper investigates the interaction of articulatory variation and audiovisual speech perception in the Northern Cities Vowel Shift (NCVS), a pattern of sound change observed in the Great Lakes region of the United States. We focus specifically on the maintenance of the contrast between the vowels /ɑ/ and /ɔ/, both of which are fronted as a result of the NCVS...
2018: Frontiers in Psychology
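Contrast maintenance between two vowels, as studied in #18, is often quantified as Euclidean distance in F1/F2 formant space. A minimal sketch with illustrative formant values for /ɑ/ and /ɔ/, not measurements from the paper:

```python
import math

def vowel_distance_hz(f1_a, f2_a, f1_b, f2_b):
    """Euclidean distance between two vowels in F1/F2 (Hz) space; a larger
    distance indicates a better-maintained acoustic contrast."""
    return math.hypot(f1_a - f1_b, f2_a - f2_b)

# Illustrative means for /ɑ/ (750, 1200) and /ɔ/ (600, 900), in Hz:
print(vowel_distance_hz(750, 1200, 600, 900))  # ~335 Hz
```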
https://www.readbyqxmd.com/read/29862266/redesign-of-the-hannover-coupler-optimized-vibration-transfer-from-floating-mass-transducer-to-round-window
#19
Mathias Müller, Rolf Salcher, Nils Prenzler, Thomas Lenarz, Hannes Maier
Introduction: In order to reduce the large variation in clinical outcomes among patients with a MED-EL Floating Mass Transducer (FMT) implanted at the round window (RW), several approaches have been proposed to optimize FMT-RW coupling. Our previous study showed improved FMT-RW coupling when static RW loads were applied using the "Hannover Coupler" (HC) FMT prosthesis, but also demonstrated insufficient low-frequency performance. Hence, a redesigned HC version (HCv2) was investigated in this study...
2018: BioMed Research International
https://www.readbyqxmd.com/read/29861132/a-spatial-map-of-onset-and-sustained-responses-to-speech-in-the-human-superior-temporal-gyrus
#20
Liberty S Hamilton, Erik Edwards, Edward F Chang
To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals. That is, equally important to processing phonetic features is the detection of acoustic cues that give structure and context to the information we hear. How the brain organizes this information is unknown. Using data-driven computational methods on high-density intracranial recordings from 27 human participants, we reveal the functional distinction of neural responses to speech in the posterior superior temporal gyrus according to either onset or sustained response profiles...
May 25, 2018: Current Biology: CB