Read by QxMD
speech signal processing
https://www.readbyqxmd.com/read/28207576/relationship-between-peripheral-and-psychophysical-measures-of-amplitude-modulation-detection-in-cochlear-implant-users
#1
Viral D Tejani, Paul J Abbas, Carolyn J Brown
OBJECTIVE: This study investigates the relationship between electrophysiological and psychophysical measures of amplitude modulation (AM) detection. Prior studies have reported both measures of AM detection recorded separately from cochlear implant (CI) users and acutely deafened animals, but no study has made both measures in the same CI users. Animal studies suggest a progressive loss of high-frequency encoding as one ascends the auditory pathway from the auditory nerve to the cortex...
February 15, 2017: Ear and Hearing
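For readers unfamiliar with the stimulus class, a sinusoidally amplitude-modulated tone of the kind used in AM detection tasks takes only a few lines to generate. This is a generic illustration, not the study's stimulus code; all parameter values below are arbitrary.

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, depth, dur_s, fs=16000):
    """Sinusoidally amplitude-modulated tone:
    x(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# A 1 kHz carrier modulated at 40 Hz with 50% modulation depth, 0.5 s long.
x = am_tone(1000, 40, 0.5, 0.5)
```

In a detection task, the modulation depth `depth` is varied adaptively to find the smallest depth a listener (or a neural response) can distinguish from an unmodulated tone.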
https://www.readbyqxmd.com/read/28195525/observing-others-speak-or-sing-activates-spt-and-neighboring-parietal-cortex
#2
Daniele Corbo, Guy A Orban
To obtain further evidence that action observation can serve as a proxy for action execution and planning in posterior parietal cortex, we scanned participants while they were (1) observing two classes of action: vocal communication and oral manipulation, which share the same effector but differ in nature, and (2) rehearsing and listening to nonsense sentences to localize area Spt, thought to be involved in audio-motor transformation during speech. Using this localizer, we found that Spt is specifically activated by vocal communication, indicating that Spt is not only involved in planning speech but also in observing vocal communication actions...
February 14, 2017: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/28188912/characterization-of-neural-entrainment-to-speech-with-and-without-slow-spectral-energy-fluctuations-in-laminar-recordings-in-monkey-a1
#3
Benedikt Zoefel, Jordi Costa-Faidella, Peter Lakatos, Charles E Schroeder, Rufin VanRullen
Neural entrainment, the alignment between neural oscillations and rhythmic stimulation, is omnipresent in current theories of speech processing; nevertheless, the underlying neural mechanisms are still largely unknown. Here, we hypothesized that laminar recordings in non-human primates provide important insight into these mechanisms, in particular with respect to processing in cortical layers. We presented one monkey with human everyday speech sounds and recorded neural oscillations (measured as current-source density, CSD) in primary auditory cortex (A1)...
February 7, 2017: NeuroImage
https://www.readbyqxmd.com/read/28174545/categorization-of-natural-whistled-vowels-by-na%C3%A3-ve-listeners-of-different-language-background
#4
Julien Meyer, Laure Dentel, Fanny Meunier
Whistled speech in a non-tonal language consists of the natural emulation of vocalic and consonantal qualities in a simple modulated whistled signal. This special speech register represents a natural telecommunication system that enables high levels of sentence intelligibility for trained speakers but is not directly intelligible to naïve listeners. Yet it is easily learned by speakers of the language being whistled, as attested by current efforts to revitalize whistled Spanish in the Canary Islands...
2017: Frontiers in Psychology
https://www.readbyqxmd.com/read/28163663/micropower-mixed-signal-vlsi-independent-component-analysis-for-gradient-flow-acoustic-source-separation
#5
Milutin Stanaćević, Shuo Li, Gert Cauwenberghs
A parallel micro-power mixed-signal VLSI implementation of independent component analysis (ICA) with reconfigurable outer-product learning rules is presented. With gradient sensing of the acoustic field over a miniature microphone array as a pre-processing method, the proposed ICA implementation can separate and localize up to three sources in a mildly reverberant environment. The ICA processor is implemented in 0.5 µm CMOS technology and occupies a 3 mm × 3 mm area. At a 16 kHz sampling rate, the ASIC consumes 195 µW from a 3 V supply...
July 2016: IEEE Transactions on Circuits and Systems. I, Regular Papers
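The paper's contribution is the mixed-signal hardware, but the underlying idea of ICA-based blind source separation can be illustrated in software. The sketch below uses the standard symmetric FastICA algorithm with a tanh contrast (a different learning rule than the chip's reconfigurable outer-product rules) on a synthetic two-source mixture:

```python
import numpy as np

def fastica(X, iters=200, seed=0):
    """Symmetric FastICA with a tanh contrast. X is (n_sources, n_samples);
    the mixtures are whitened first, then unmixed by fixed-point iteration."""
    rng = np.random.default_rng(seed)
    n, N = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / N)
    X = E @ np.diag(d ** -0.5) @ E.T @ X          # whitening (ZCA)
    W = rng.standard_normal((n, n))
    for _ in range(iters):
        G = np.tanh(W @ X)
        W_new = G @ X.T / N - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W
        d2, E2 = np.linalg.eigh(W_new @ W_new.T)
        W = E2 @ np.diag(d2 ** -0.5) @ E2.T @ W_new
    return W @ X

rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))                   # two super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])            # mixing matrix
Y = fastica(A @ S)                                # recovered sources
```

The recovered components match the true sources up to permutation, sign, and scale, which is the usual ICA ambiguity.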
https://www.readbyqxmd.com/read/28147604/large-region-acoustic-source-mapping-using-a-movable-array-and-sparse-covariance-fitting
#6
Shengkui Zhao, Cagdas Tuna, Thi Ngoc Tho Nguyen, Douglas L Jones
Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented...
January 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28144622/the-effects-of-pitch-shifts-on-delay-induced-changes-in-vocal-sequencing-in-a-songbird
#7
MacKenzie Wyatt, Emily A Berthiaume, Conor W Kelly, Samuel J Sober
Like human speech, vocal behavior in songbirds depends critically on auditory feedback. In both humans and songbirds, vocal skills are acquired by a process of imitation whereby current vocal production is compared to an acoustic target. Similarly, performance in adulthood relies strongly on auditory feedback, and online manipulations of auditory signals can dramatically alter acoustic production even after vocalizations have been well learned. Artificially delaying auditory feedback can disrupt both speech and birdsong, and internal delays in auditory feedback have been hypothesized as a cause of vocal dysfluency in persons who stutter...
January 2017: ENeuro
https://www.readbyqxmd.com/read/28139959/inside-speech-multisensory-and-modality-specific-processing-of-tongue-and-lip-speech-actions
#8
Avril Treille, Coriandre Vilain, Thomas Hueber, Laurent Lamalle, Marc Sato
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to which extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because tongue movements of our interlocutor are accessible via their impact on speech acoustics but are not visible because of the tongue's position inside the vocal tract, whereas lip movements are both "audible" and visible...
March 2017: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/28129059/speaking-style-influences-the-brain-s-electrophysiological-response-to-grammatical-errors-in-speech-comprehension
#9
Malte C Viebahn, Mirjam Ernestus, James M McQueen
This electrophysiological study asked whether the brain processes grammatical gender violations in casual speech differently than in careful speech. Native speakers of Dutch were presented with utterances that contained adjective-noun pairs in which the adjective was either correctly inflected with a word-final schwa (e.g., een spannende roman, "a suspenseful novel") or incorrectly uninflected without that schwa (een spannend roman). Consistent with previous findings, the uninflected adjectives elicited an electrical brain response sensitive to syntactic violations when the talker was speaking in a careful manner...
January 27, 2017: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/28125444/objective-identification-of-simulated-cochlear-implant-settings-in-normal-hearing-listeners-via-auditory-cortical-evoked-potentials
#10
Sungmin Lee, Gavin M Bidelman
OBJECTIVES: Providing cochlear implant (CI) patients the optimal signal processing settings during mapping sessions is critical for facilitating their speech perception. Here, we aimed to evaluate whether auditory cortical event-related potentials (ERPs) could be used to objectively determine optimal CI parameters. DESIGN: While recording neuroelectric potentials, we presented a set of acoustically vocoded consonants (aKa, aSHa, and aNa) to normal-hearing listeners (n = 12) that simulated speech tokens processed through four different combinations of CI stimulation rate and number of spectral maxima...
January 25, 2017: Ear and Hearing
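Acoustic CI simulations of the kind used in this study are commonly built with channel vocoders. Below is a minimal noise-band vocoder sketch; it is a generic illustration, and the study's actual band count, stimulation rates, and spectral-maxima settings are not reproduced:

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:(len(x) + 1) // 2] = 2
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1
    return np.abs(np.fft.ifft(X * h))

def noise_vocoder(x, fs, n_bands=4, lo=100.0, hi=6000.0, seed=0):
    """Noise-band vocoder: split x into log-spaced bands, take each band's
    envelope, and use it to modulate band-limited noise carriers."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(lo, hi, n_bands + 1)
    freqs = np.abs(np.fft.fftfreq(len(x), 1 / fs))
    X = np.fft.fft(x)
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= f1) & (freqs < f2)
        band = np.fft.ifft(X * band_mask).real                     # band-passed input
        noise = rng.standard_normal(len(x))
        carrier = np.fft.ifft(np.fft.fft(noise) * band_mask).real  # band-limited noise
        out += envelope(band) * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 200 * t) * (1 + np.sin(2 * np.pi * 3 * t))
voc = noise_vocoder(speech_like, fs)
```

The vocoded output discards fine spectral structure and keeps only per-band temporal envelopes, which is the degradation CI users experience.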
https://www.readbyqxmd.com/read/28119400/dynamic-encoding-of-acoustic-features-in-neural-responses-to-continuous-speech
#11
Bahar Khalighinejad, Guilherme Cruzatto da Silva, Nima Mesgarani
Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography (EEG) responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential, PRP). We showed that responses to different phoneme categories are organized by phonetic features...
January 24, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
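The phoneme-related potential described here is, at its core, an event-locked average. A minimal sketch of that averaging step on synthetic data follows; the epoch window, sampling rate, and embedded "evoked" deflection are all invented for illustration:

```python
import numpy as np

def event_locked_average(eeg, onsets, fs, tmin=-0.1, tmax=0.4):
    """Average EEG epochs time-locked to event onsets (a PRP-style average)."""
    i0, i1 = int(tmin * fs), int(tmax * fs)
    epochs = [eeg[s + i0 : s + i1] for s in onsets
              if s + i0 >= 0 and s + i1 <= len(eeg)]
    return np.mean(epochs, axis=0)

fs = 500                                      # Hz
rng = np.random.default_rng(0)
n = 60 * fs
eeg = rng.standard_normal(n)                  # background "EEG" noise
onsets = np.arange(fs, n - fs, fs // 2)       # one "phoneme" onset every 0.5 s
# Embed a small evoked deflection 100 ms after every onset.
for s in onsets:
    eeg[s + fs // 10 : s + fs // 10 + 25] += 0.5
prp = event_locked_average(eeg, onsets, fs)
```

Averaging across many epochs cancels the uncorrelated background activity, so the small time-locked deflection emerges clearly in `prp`.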
https://www.readbyqxmd.com/read/28114676/a-near-infrared-spectroscopy-study-on-cortical-hemodynamic-responses-to-normal-and-whispered-speech-in-3-to-7-year-old-children
#12
Gerard B Remijn, Mitsuru Kikuchi, Yuko Yoshimura, Kiyomi Shitamichi, Sanae Ueno, Tsunehisa Tsubokawa, Haruyuki Kojima, Haruhiro Higashida, Yoshio Minabe
Purpose: The purpose of this study was to assess cortical hemodynamic response patterns in 3- to 7-year-old children listening to two speech modes: normally vocalized and whispered speech. Understanding whispered speech requires processing of the relatively weak, noisy signal, as well as the cognitive ability to understand the speaker's reason for whispering. Method: Near-infrared spectroscopy (NIRS) was used to assess changes in cortical oxygenated hemoglobin from 16 typically developing children...
January 18, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/28113606/online-training-of-an-opto-electronic-reservoir-computer-applied-to-real-time-channel-equalization
#13
Piotr Antonik, Francois Duport, Michiel Hermans, Anteo Smerieri, Marc Haelterman, Serge Massar
Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. The performance of its analog implementation is comparable to other state-of-the-art algorithms for tasks such as speech recognition or chaotic time series prediction, but these are often constrained by the offline training methods commonly employed. Here, we investigated the online learning approach by training an optoelectronic reservoir computer using a simple gradient descent algorithm, programmed on a field-programmable gate array chip...
August 26, 2016: IEEE Transactions on Neural Networks and Learning Systems
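The core of the online-training approach, a fixed reservoir whose linear readout is adapted by simple gradient descent, can be sketched in software. The sketch below uses a software echo state network with an LMS readout update on a toy delayed-recall task, not the paper's optoelectronic hardware or its channel-equalization benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                          # reservoir size
Win = rng.uniform(-0.5, 0.5, N)                  # input weights
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

def run_online(u, d, lr=0.01):
    """Drive the reservoir with input u; adapt the linear readout toward
    the target d one sample at a time with a gradient-descent (LMS) rule."""
    x = np.zeros(N)
    wout = np.zeros(N)
    sq_err = np.empty(len(u))
    for t, (ut, dt) in enumerate(zip(u, d)):
        x = np.tanh(W @ x + Win * ut)            # reservoir state update
        e = dt - wout @ x                        # readout error
        wout += lr * e * x                       # online gradient step
        sq_err[t] = e * e
    return sq_err

# Toy task: output a copy of the input delayed by 3 steps.
u = rng.uniform(-1, 1, 5000)
d = np.roll(u, 3)
sq_err = run_online(u, d)
```

Because only the readout is trained, each update is a cheap vector operation, which is what makes the approach attractive for real-time FPGA implementation.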
https://www.readbyqxmd.com/read/28113304/a-deep-denoising-autoencoder-approach-to-improving-the-intelligibility-of-vocoded-speech-in-cochlear-implant-simulation
#14
Ying-Hui Lai, Fei Chen, Syu-Siang Wang, Xugang Lu, Yu Tsao, Chin-Hui Lee
OBJECTIVE: In a cochlear implant (CI) speech processor, noise reduction (NR) is a critical component for enabling CI users to attain improved speech perception under noisy conditions. Identifying an effective NR approach has long been a key topic in CI research. METHOD: Recently, a deep denoising autoencoder (DDAE)-based NR approach was proposed and shown to be effective in restoring clean speech from noisy observations. It was also shown that DDAE could provide better performance than several existing NR methods in standardized objective evaluations...
September 27, 2016: IEEE Transactions on Bio-medical Engineering
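The DDAE principle, training an autoencoder to map noisy inputs to clean targets, can be shown with a deliberately tiny numpy network on synthetic "spectra". This is illustrative only; the paper's network is far larger and operates on real vocoded-speech features:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, hid, lr, noise_std = 16, 32, 0.05, 0.4

def make_batch(n):
    """Synthetic 'clean spectra': smooth Gaussian bumps at random positions."""
    f = np.arange(dim)
    centers = rng.uniform(2, dim - 2, (n, 1))
    return np.exp(-0.5 * ((f - centers) / 2.0) ** 2)

W1 = rng.standard_normal((hid, dim)) * 0.1; b1 = np.zeros(hid)
W2 = rng.standard_normal((dim, hid)) * 0.1; b2 = np.zeros(dim)

for step in range(1000):
    clean = make_batch(32)
    noisy = clean + noise_std * rng.standard_normal(clean.shape)
    H = np.tanh(noisy @ W1.T + b1)            # encoder
    Y = H @ W2.T + b2                         # decoder
    E = (Y - clean) / len(clean)              # key DDAE idea: noisy in, CLEAN target
    dH = (E @ W2) * (1 - H ** 2)              # backprop through tanh
    W2 -= lr * (E.T @ H); b2 -= lr * E.sum(axis=0)
    W1 -= lr * (dH.T @ noisy); b1 -= lr * dH.sum(axis=0)

test_clean = make_batch(200)
test_noisy = test_clean + noise_std * rng.standard_normal(test_clean.shape)
denoised = np.tanh(test_noisy @ W1.T + b1) @ W2.T + b2
mse_noisy = np.mean((test_noisy - test_clean) ** 2)
mse_denoised = np.mean((denoised - test_clean) ** 2)
```

The only difference from a plain autoencoder is the training pair: the input is corrupted, but the reconstruction target is the clean signal.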
https://www.readbyqxmd.com/read/28112440/attentional-modulation-and-domain-specificity-underlying-the-neural-organization-of-auditory-categorical-perception
#15
Gavin M Bidelman, Breya Walker
Categorical perception (CP) is highly evident in audition when listeners' perception of speech sounds abruptly shifts identity despite equidistant changes in stimulus acoustics. While CP is an inherent property of speech perception, how (if) it is expressed in other auditory modalities (e.g., music) is less clear. Moreover, prior neuroimaging studies have been equivocal on whether attentional engagement is necessary for the brain to categorically organize sound. To address these questions, we recorded neuroelectric brain responses (ERPs) from listeners as they rapidly categorized sounds along a speech and music continuum (active task) or during passive listening...
January 23, 2017: European Journal of Neuroscience
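Categorical perception is typically quantified by fitting a sigmoid to identification responses along the stimulus continuum; a steep fitted slope indicates an abrupt category boundary. A minimal sketch with invented identification data (a simple gradient-descent fit, not the study's analysis pipeline):

```python
import numpy as np

def fit_logistic(x, p, lr=0.5, iters=5000):
    """Fit p(x) = 1 / (1 + exp(-k*(x - x0))) by gradient descent on squared
    error; k is the slope (category steepness), x0 the category boundary."""
    k, x0 = 1.0, float(np.mean(x))
    for _ in range(iters):
        z = 1 / (1 + np.exp(-k * (x - x0)))
        e = z - p
        dz = z * (1 - z)
        k  -= lr * np.mean(e * dz * (x - x0))
        x0 -= lr * np.mean(e * dz * -k)
    return k, x0

# Invented identification data along an 11-step continuum with an abrupt shift.
steps = np.linspace(0, 1, 11)
p_obs = np.array([0.02, 0.03, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 0.97, 0.98])
k, x0 = fit_logistic(steps, p_obs)
```

A continuum perceived continuously (e.g., some non-speech dimensions) would yield a shallow `k`, whereas categorical identification yields a steep one near the boundary `x0`.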
https://www.readbyqxmd.com/read/28112001/comparison-of-single-microphone-noise-reduction-schemes-can-hearing-impaired-listeners-tell-the-difference
#16
Rainer Huber, Thomas Bisitz, Timo Gerkmann, Jürgen Kiessling, Hartmut Meister, Birger Kollmeier
OBJECTIVE: The perceived quality of nine different single-microphone noise reduction (SMNR) algorithms was evaluated and compared in subjective listening tests with normal-hearing and hearing-impaired (HI) listeners. DESIGN: Speech samples mixed with traffic noise or party noise were processed by the SMNR algorithms. Subjects rated the amount of speech distortion, the intrusiveness of the background noise, listening effort, and overall quality, using a simplified MUSHRA (ITU-R, 2003) assessment method...
January 23, 2017: International Journal of Audiology
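Among classic single-microphone noise reduction schemes is magnitude spectral subtraction. The sketch below is a generic textbook version, not necessarily one of the nine algorithms compared in the study:

```python
import numpy as np

def spectral_subtraction(noisy, frame=256, hop=128, noise_frames=10, floor=0.01):
    """Magnitude spectral subtraction with noisy-phase overlap-add resynthesis.
    The noise magnitude spectrum is estimated from the first `noise_frames`
    frames, which are assumed to contain no speech."""
    win = np.hanning(frame)
    n_frames = (len(noisy) - frame) // hop + 1
    spec = np.array([np.fft.rfft(noisy[i * hop:i * hop + frame] * win)
                     for i in range(n_frames)])
    noise_mag = np.abs(spec[:noise_frames]).mean(axis=0)   # noise estimate
    mag = np.maximum(np.abs(spec) - noise_mag,             # subtract, with a
                     floor * np.abs(spec))                 # spectral floor
    out = np.zeros(len(noisy))
    for i, frame_spec in enumerate(mag * np.exp(1j * np.angle(spec))):
        out[i * hop:i * hop + frame] += np.fft.irfft(frame_spec, frame) * win
    return out

fs = 8000
rng = np.random.default_rng(0)
noisy = 0.3 * rng.standard_normal(2 * fs)          # stationary noise
t = np.arange(fs) / fs
noisy[fs:] += np.sin(2 * np.pi * 500 * t)          # "speech" starts at 1 s
enhanced = spectral_subtraction(noisy)
```

The spectral floor limits how far each bin can be attenuated; without it, half-suppressed noise bins produce the "musical noise" artifact that listening tests like this one are designed to penalize.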
https://www.readbyqxmd.com/read/28103192/audio-visual-speaker-diarization-based-on-spatiotemporal-bayesian-fusion
#17
Israel Gebru, Sileye Ba, Xiaofei Li, Radu Horaud
Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem...
January 5, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
https://www.readbyqxmd.com/read/28100843/pathophysiology-and-molecular-basis-of-selected-metabolic-abnormalities-in-huntington-s-disease
#18
Jolanta Krzysztoń-Russjan
Huntington's disease (HD) is an incurable, devastating neurodegenerative disease with a known genetic background and an autosomal dominant inheritance pattern. The HTT gene mutation (mHTT) involves elongation of a polymorphic fragment beyond 35 repeats of the CAG triplet. The mHTT product is an altered protein with an elongated poly-Q fragment, expressed most highly in the central nervous system (CNS) and with differentiated expression outside the CNS. A drastic loss of neurons in the striatum and the deeper layers of the cerebral cortex occurs in the CNS, but loss of muscle mass and body weight with dysfunction of many organs has also been observed...
December 30, 2016: Postȩpy Higieny i Medycyny Doświadczalnej
https://www.readbyqxmd.com/read/28068353/speech-timing-deficit-of-stuttering-evidence-from-contingent-negative-variations
#19
Ning Ning, Danling Peng, Xiangping Liu, Shuang Yang
The aim of the present study was to investigate the speech preparation processes of adults who stutter (AWS). Fifteen AWS and fifteen adults with fluent speech (AFS) participated in the experiment. The event-related potentials (ERPs) were recorded in a foreperiod paradigm. The warning signal (S1) was a color square, and the following imperative stimulus (S2) was either a white square (a Go signal requiring participants to name the color of S1) or a white dot (a NoGo signal requiring participants to withhold speech)...
2017: PloS One
https://www.readbyqxmd.com/read/28054908/binaural-interference-and-the-effects-of-age-and-hearing-loss
#20
Bruna S S Mussoi, Ruth A Bentler
BACKGROUND: The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. PURPOSE: To examine binaural interference through speech perception tests, in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss...
January 2017: Journal of the American Academy of Audiology