Read by QxMD

speech signal processing

Lars Meyer
Neural oscillations subserve a broad range of functions in speech processing and language comprehension. On the one hand, speech contains somewhat repetitive trains of air pressure bursts that occur at three dominant amplitude modulation frequencies, physically marking the linguistically meaningful progressions of phonemes, syllables, and intonational phrase boundaries. To these acoustic events, neural oscillations of isomorphous operating frequencies are thought to synchronize, presumably resulting in an implicit temporal alignment of periods of neural excitability to linguistically meaningful spectral information on the three low-level linguistic description levels...
October 20, 2017: European Journal of Neuroscience
Davide Nardo, Rachel Holland, Alexander P Leff, Cathy J Price, Jennifer T Crinion
Previous research with aphasic patients has shown that picture naming can be facilitated by concurrent phonemic cueing [e.g. initial phoneme(s) of the word that the patient is trying to retrieve], both as an immediate word retrieval technique, and when practiced repeatedly over time as a long-term anomia treatment. Here, to investigate the neural mechanisms supporting word retrieval, we adopted, for the first time, a functional magnetic resonance imaging task using the same naming procedure as it occurs during the anomia treatment process...
September 27, 2017: Brain: a Journal of Neurology
Xiangbin Teng, Xing Tian, Keith Doelling, David Poeppel
Parsing continuous acoustic streams into perceptual units is fundamental to auditory perception. Previous studies have uncovered a cortical entrainment mechanism in the delta and theta bands (~1-8 Hz) that correlates with formation of perceptual units in speech, music, and other quasi-rhythmic stimuli. Whether cortical oscillations in the delta-theta bands are passively entrained by regular acoustic patterns or play an active role in parsing the acoustic stream is debated. Here we investigate cortical oscillations using novel stimuli with 1/f modulation spectra...
October 17, 2017: European Journal of Neuroscience
Yingyue Xu, Maxin Chen, Petrina LaFaire, Xiaodong Tan, Claus-Peter Richter
Envelope (E) and temporal fine structure (TFS) are important features of acoustic signals and their corresponding perceptual function has been investigated with various listening tasks. To further understand the underlying neural processing of TFS, experiments in humans and animals were conducted to demonstrate the effects of modifying the TFS in natural speech sentences on both speech recognition and neural coding. The TFS of natural speech sentences was modified by distorting the phase and maintaining the magnitude...
October 17, 2017: Scientific Reports
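The abstract above modifies temporal fine structure by distorting the phase while maintaining the magnitude spectrum. A minimal numpy sketch of that general idea (the function name and the use of fully random phases are our illustration, not the paper's exact procedure):

```python
import numpy as np

def randomize_phase(signal, rng=None):
    """Keep the FFT magnitude spectrum but replace the phases with
    random values, destroying the temporal fine structure while
    preserving the power spectrum.
    """
    rng = np.random.default_rng(rng)
    spectrum = np.fft.rfft(signal)
    random_phase = rng.uniform(-np.pi, np.pi, size=spectrum.shape)
    # Keep DC (and Nyquist, for even lengths) real so the inverse
    # transform of a real signal stays real-valued.
    random_phase[0] = 0.0
    if len(signal) % 2 == 0:
        random_phase[-1] = 0.0
    distorted = np.abs(spectrum) * np.exp(1j * random_phase)
    return np.fft.irfft(distorted, n=len(signal))
```

Because only the phases change, the long-term spectrum of the output matches the input exactly; any change in intelligibility can then be attributed to the fine-structure manipulation.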
Saransh Jain, Vipin Ghosh P G
OBJECTIVE: Cochlear implants process the acoustic speech signal and convert it into electrical impulses. During this processing, many parameters contribute to speech perception. Previous studies have examined the effect of manipulating one or two such parameters on speech intelligibility, but multiple parameters have seldom been manipulated together. METHOD: Acoustic parameters, including pulse rate, number of channels, 'n of m', number of electrodes, and channel spacing, were manipulated in acoustic simulations of cochlear implant hearing and 90 different combinations were created...
October 16, 2017: Cochlear Implants International
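The 'n of m' parameter mentioned above refers to strategies in which only the m highest-amplitude channels (out of n analysis channels) are stimulated in each frame. A minimal numpy sketch of that selection step (function name and array layout are ours, not from the paper):

```python
import numpy as np

def n_of_m_select(channel_envelopes, m):
    """Zero out all but the m largest-amplitude channels in each frame,
    as in 'n of m' cochlear-implant coding strategies.

    channel_envelopes has shape (n_frames, n_channels); the returned
    array keeps only the m biggest envelope values per frame.
    """
    env = np.asarray(channel_envelopes, dtype=float)
    out = np.zeros_like(env)
    # Indices of the m largest channels in each frame (ascending sort,
    # take the last m columns).
    idx = np.argsort(env, axis=1)[:, -m:]
    rows = np.arange(env.shape[0])[:, None]
    out[rows, idx] = env[rows, idx]
    return out
```

For example, with 4 channels and m = 2, each output frame retains exactly two nonzero envelope values, so the stimulation rate per frame is fixed regardless of the input spectrum.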
Jean-Paul Noel, Marisa Lytle, Carissa Cascio, Mark T Wallace
In addition to deficits in social communication, individuals diagnosed with Autism Spectrum Disorder (ASD) frequently exhibit changes in sensory and multisensory function. Recent evidence has focused on changes in audiovisual temporal processing, and has sought to relate these sensory-based changes to weaknesses in social communication. These changes in audiovisual temporal function manifest as differences in the temporal epoch or "window" within which paired auditory and visual stimuli are integrated or bound, with those with ASD exhibiting expanded audiovisual temporal binding windows (TBWs)...
October 14, 2017: Autism Research: Official Journal of the International Society for Autism Research
Angela N C Fulbright, Colleen G Le Prell, Scott K Griffiths, Edward Lobarinas
Noise exposure that causes a temporary threshold shift but no permanent threshold shift can cause degeneration of synaptic ribbons and afferent nerve fibers, with a corresponding reduction in wave I amplitude of the auditory brainstem response (ABR) in animals. This form of underlying damage, hypothesized to also occur in humans, has been termed synaptopathy, and it has been hypothesized that there will be a hidden hearing loss consisting of functional deficits at suprathreshold stimulus levels. This study assessed whether recreational noise exposure history was associated with smaller ABR wave I amplitude and poorer performance on suprathreshold auditory test measures...
November 2017: Seminars in Hearing
Christopher T Kello, Simone Dalla Bella, Butovens Médé, Ramesh Balasubramaniam
Humans talk, sing and play music. Some species of birds and whales sing long and complex songs. All these behaviours and sounds exhibit hierarchical structure: syllables and notes are positioned within words and musical phrases, words and motives in sentences and musical phrases, and so on. We developed a new method to measure and compare hierarchical temporal structures in speech, song and music. The method identifies temporal events as peaks in the sound amplitude envelope, and quantifies event clustering across a range of timescales using Allan factor (AF) variance...
October 2017: Journal of the Royal Society, Interface
Marc A Brennan, Dawna Lewis, Ryan McCreery, Judy Kopun, Joshua M Alexander
BACKGROUND: Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion...
October 2017: Journal of the American Academy of Audiology
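The frequency-lowering trade-off described above can be made concrete with a toy input-output frequency map: below a cutoff, frequencies pass unchanged; above it, distances from the cutoff are compressed. The rule and parameter values below are purely illustrative, not those of any particular hearing-aid implementation:

```python
import numpy as np

def nfc_map(freq_hz, cutoff_hz=2000.0, ratio=2.0):
    """Toy nonlinear frequency compression map.

    Frequencies at or below cutoff_hz are unchanged; above the cutoff,
    frequency is compressed by `ratio` on a logarithmic scale, pulling
    inaudible high-frequency energy down into the audible range at the
    cost of spectral distortion.
    """
    freq_hz = np.asarray(freq_hz, dtype=float)
    compressed = cutoff_hz * (freq_hz / cutoff_hz) ** (1.0 / ratio)
    return np.where(freq_hz <= cutoff_hz, freq_hz, compressed)
```

With these illustrative settings an 8 kHz component maps to 4 kHz, showing both the benefit (audibility for a sloping loss) and the cost (the 2-8 kHz octaves are squeezed into 2-4 kHz).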
Benjamin J Kirby, Judy G Kopun, Meredith Spratford, Clairissa M Mollak, Marc A Brennan, Ryan W McCreery
BACKGROUND: Sloping hearing loss imposes limits on audibility for high-frequency sounds in many hearing aid users. Signal processing algorithms that shift high-frequency sounds to lower frequencies have been introduced in hearing aids to address this challenge by improving audibility of high-frequency sounds. PURPOSE: This study examined speech perception performance, listening effort, and subjective sound quality ratings with conventional hearing aid processing and a new frequency-lowering signal processing strategy called frequency composition (FC) in adults and children...
October 2017: Journal of the American Academy of Audiology
Kiriana Meha-Bettison, Mridula Sharma, Ronny K Ibrahim, Pragati Rao Mandikal Vasuki
OBJECTIVE: The current research investigated whether professional musicians outperformed non-musicians on auditory processing and speech-in-noise perception as assessed using behavioural and electrophysiological tasks. DESIGN: Spectro-temporal processing skills were assessed using a psychoacoustic test battery. Speech-in-noise perception was measured using the Listening in Spatialised Noise - Sentences (LiSN-S) test and Cortical Auditory Evoked Potentials (CAEPs) recorded to the speech syllable /da/ presented in quiet and in 8-talker babble noise at 0, 5, and 10 dB signal-to-noise ratios (SNRs)...
October 3, 2017: International Journal of Audiology
Gangyi Feng, Zhenzhong Gan, Suiping Wang, Patrick C M Wong, Bharath Chandrasekaran
A significant neural challenge in speech perception includes extracting discrete phonetic categories from continuous and multidimensional signals despite varying task demands and surface-acoustic variability. While neural representations of speech categories have been previously identified in frontal and posterior temporal-parietal regions, the task dependency and dimensional specificity of these neural representations are still unclear. Here, we asked native Mandarin participants to listen to speech syllables carrying 4 distinct lexical tone categories across passive listening, repetition, and categorization tasks while they underwent functional magnetic resonance imaging (fMRI)...
August 28, 2017: Cerebral Cortex
Ansar Uddin Ahmmed
OBJECTIVES: To compare the sensitivity and specificity of Auditory Figure Ground sub-tests of the SCAN-3 battery, using signal to noise ratio (SNR) of +8 dB (AFG+8) and 0 dB (AFG0), in identifying auditory processing disorder (APD). A secondary objective was to evaluate any difference in auditory processing (AP) between children with symptoms of inattention versus combined sub-types of Attention Deficit Hyperactivity Disorder (ADHD). METHODS: Data from 201 children, aged 6 to 16 years (mean: 10 years 6 months, SD: 2 years 8 months), who were assessed for suspected APD were reviewed retrospectively...
October 2017: International Journal of Pediatric Otorhinolaryngology
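The AFG+8 and AFG0 conditions above differ only in how strongly the background is scaled relative to the target. A minimal sketch of mixing a signal and noise at a target SNR (the helper name is ours; this is the standard power-ratio construction, not the SCAN-3 procedure):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals snr_db,
    then return the mixture speech + scaled noise.
    """
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Desired noise power is p_speech / 10**(snr_db / 10).
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise
```

At +8 dB the noise power sits 8 dB below the speech power; at 0 dB the two are equal, which is why the AFG0 condition is the more taxing listening test.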
Fei Chen, Dingchang Zheng, Yu Tsao
Vocoder simulation studies have suggested that the carrier signal type employed affects the intelligibility of vocoded speech. The present work further assessed how carrier signal type interacts with additional signal processing, namely, single-channel noise suppression and envelope dynamic range compression, in determining the intelligibility of vocoder simulations. In Experiment 1, Mandarin sentences that had been corrupted by speech spectrum-shaped noise (SSN) or two-talker babble (2TB) were processed by one of four single-channel noise-suppression algorithms before undergoing tone-vocoded (TV) or noise-vocoded (NV) processing...
September 2017: Journal of the Acoustical Society of America
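The noise-vocoded (NV) condition above replaces the carrier within each frequency band while preserving the band envelopes. A toy numpy vocoder along those lines (channel count, rectangular FFT band edges, and function names are illustrative assumptions, not the paper's processing chain):

```python
import numpy as np

def analytic_envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

def noise_vocode(signal, fs, n_channels=8, rng=None):
    """Toy noise vocoder: split the signal into contiguous frequency
    bands, extract each band's amplitude envelope, and use it to
    modulate band-limited noise. Summing the channels yields speech
    whose envelopes survive but whose carrier is noise.
    """
    rng = np.random.default_rng(rng)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    edges = np.linspace(0.0, fs / 2.0, n_channels + 1)
    noise = rng.standard_normal(n)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.fft.rfft(signal) * band_mask, n=n)
        env = analytic_envelope(band)
        carrier = np.fft.irfft(np.fft.rfft(noise) * band_mask, n=n)
        out += env * carrier
    return out
```

A tone-vocoded (TV) variant would substitute a band-centered sinusoid for the noise carrier in each channel, which is the contrast the experiment manipulates.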
Susan R S Bissmeyer, Raymond L Goldsworthy
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues...
September 2017: Journal of the Acoustical Society of America
Esther Schoenmaker, Sarinah Sutojo, Steven van de Par
The better ear of a listener is the ear that benefits most from head shadow effects in a setting with spatially separated sources. Traditionally, the better ear is considered to be the ear that receives a signal at the best signal-to-noise ratio. For a speech target in interfering speech, the concept of rating the better ear based on glimpses was explored. The laterality of the expected better ear was shown to be well represented by metrics based on glimpsing. When employing better-ear glimpsing as a microscopic predictor for speech intelligibility, a strong relation was found between the amount of glimpsed target speech received by the better ear and the performance on a consonant recognition task...
September 2017: Journal of the Acoustical Society of America
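The glimpsing metric described above can be sketched as the fraction of time-frequency cells in which the target locally dominates the interferer. A minimal illustration using a plain magnitude STFT (frame length, window, and the 3 dB criterion are our assumptions, not the study's parameters):

```python
import numpy as np

def glimpse_proportion(target, masker, frame=256, criterion_db=3.0):
    """Fraction of time-frequency cells whose local target-to-masker
    power ratio exceeds criterion_db; larger values mean more of the
    target is 'glimpsed' through the interfering speech.
    """
    def stft_power(x):
        x = np.asarray(x, dtype=float)
        n_frames = len(x) // frame
        frames = np.reshape(x[: n_frames * frame], (n_frames, frame))
        window = np.hanning(frame)
        return np.abs(np.fft.rfft(frames * window, axis=1)) ** 2

    t_pow = stft_power(target)
    m_pow = stft_power(masker)
    # Small floor avoids log-of-zero in silent cells.
    local_snr_db = 10.0 * np.log10((t_pow + 1e-12) / (m_pow + 1e-12))
    return np.mean(local_snr_db > criterion_db)
```

Computing this separately for the left-ear and right-ear mixtures and comparing the two proportions gives a simple glimpsing-based rating of which ear is the "better ear" for a given spatial configuration.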
Rebecca Custead, Hyuntaek Oh, Yingying Wang, Steven Barlow
Processing dynamic tactile inputs is a primary function of the somatosensory system. Spatial velocity encoding mechanisms by the nervous system are important for skilled movement production and may play a role in recovery of sensorimotor function following neurological insult. Little is known about tactile velocity encoding in mechanosensory trigeminal networks required for speech, suck, mastication, and facial gesture. High resolution functional magnetic resonance imaging (fMRI) was used to investigate the neural substrates of velocity encoding in the human orofacial somatosensory system during unilateral saltatory pneumotactile stimulation of perioral and buccal hairy skin in 20 neurotypical adults...
December 15, 2017: Brain Research
Duha G Ahmed, Sebastian Paquette, Anthony Zeitouni, Alexandre Lehmann
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. Electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI-simulator. We recorded 16 normal hearing participants' electroencephalographic activity while listening to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition...
September 1, 2017: Clinical EEG and Neuroscience: Official Journal of the EEG and Clinical Neuroscience Society (ENCS)
Cynthia R Hunter, David B Pisoni
OBJECTIVES: Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences...
September 22, 2017: Ear and Hearing
Jonathan E Peelle
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures...
September 21, 2017: Ear and Hearing