speech signal processing

https://www.readbyqxmd.com/read/28068353/speech-timing-deficit-of-stuttering-evidence-from-contingent-negative-variations
#1
Ning Ning, Danling Peng, Xiangping Liu, Shuang Yang
The aim of the present study was to investigate the speech preparation processes of adults who stutter (AWS). Fifteen AWS and fifteen adults with fluent speech (AFS) participated in the experiment. The event-related potentials (ERPs) were recorded in a foreperiod paradigm. The warning signal (S1) was a color square, and the following imperative stimulus (S2) was either a white square (the Go signal that required participants to name the color of S1) or a white dot (the NoGo signal that required participants to withhold speech)...
2017: PloS One
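A minimal sketch of the foreperiod (S1-S2) trial structure this abstract describes, in Python. The foreperiod duration, trial count, and Go/NoGo proportion below are hypothetical placeholders, not values reported by the study.

```python
import random

# Hypothetical parameters; the abstract does not report the actual
# foreperiod duration, trial counts, or Go/NoGo proportions.
COLORS = ["red", "green", "blue"]
FOREPERIOD_S = 2.0        # delay between warning (S1) and imperative (S2)
N_TRIALS = 40
P_GO = 0.5                # proportion of Go trials

def make_trials(n=N_TRIALS, seed=0):
    """Build a randomized list of foreperiod-paradigm trials."""
    rng = random.Random(seed)
    trials = []
    for i in range(n):
        s1_color = rng.choice(COLORS)          # S1: colored square
        is_go = rng.random() < P_GO            # S2: white square (Go) or white dot (NoGo)
        trials.append({
            "trial": i + 1,
            "s1": f"{s1_color} square",
            "foreperiod_s": FOREPERIOD_S,      # speech is prepared during this interval
            "s2": "white square (Go)" if is_go else "white dot (NoGo)",
            "required_response": f"name '{s1_color}'" if is_go else "withhold speech",
        })
    return trials

if __name__ == "__main__":
    for t in make_trials(5):
        print(t)
```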
https://www.readbyqxmd.com/read/28054908/binaural-interference-and-the-effects-of-age-and-hearing-loss
#2
Bruna S S Mussoi, Ruth A Bentler
BACKGROUND: The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. PURPOSE: To examine binaural interference through speech perception tests, in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss...
January 2017: Journal of the American Academy of Audiology
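The operational definition given in this abstract (poorer speech recognition with both ears than with the better ear alone) can be stated as a small check. The scores in the example call are invented for illustration only; in practice a critical-difference criterion would also be applied.

```python
def binaural_interference(score_left, score_right, score_binaural):
    """Return True if the binaural score is worse than the better monaural ear.

    Scores are percent-correct speech recognition; any consistent unit works.
    """
    better_ear = max(score_left, score_right)
    return score_binaural < better_ear

# Invented example scores (percent correct), not data from the study.
print(binaural_interference(score_left=62.0, score_right=78.0, score_binaural=71.0))  # True
```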
https://www.readbyqxmd.com/read/28045787/some-neurocognitive-correlates-of-noise-vocoded-speech-perception-in-children-with-normal-hearing-a-replication-and-extension-of-eisenberg-et-al-2002
#3
Adrienne S Roman, David B Pisoni, William G Kronenberger, Kathleen F Faulkner
OBJECTIVES: Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition...
December 27, 2016: Ear and Hearing
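A minimal offline noise-vocoder sketch (numpy/scipy), illustrating the kind of spectral degradation described above: band-pass the speech into a few channels, extract each channel's envelope, and use it to modulate band-limited noise. The band count, filter order, and envelope cutoff below are illustrative choices, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=7000.0, env_cutoff=30.0):
    """Noise-vocode `speech` (1-D float array) sampled at `fs` Hz."""
    # Log-spaced band edges between f_lo and f_hi
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(speech))
    out = np.zeros_like(speech)
    b_env, a_env = butter(2, env_cutoff / (fs / 2), btype="low")
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, speech)                 # analysis band
        env = filtfilt(b_env, a_env, np.abs(band))    # rectified, low-passed envelope
        env = np.clip(env, 0.0, None)
        carrier = filtfilt(b, a, noise)               # band-limited noise carrier
        carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12
        out += env * carrier                          # envelope-modulated noise band
    return out

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    demo = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
    print(noise_vocode(demo, fs).shape)
```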
https://www.readbyqxmd.com/read/28040002/occlusion-effect-on-compensatory-formant-production-and-voice-amplitude-in-response-to-real-time-perturbation
#4
Takashi Mitsuya, David W Purcell
The importance of auditory feedback for controlling speech articulation has been substantiated by the use of the real-time auditory perturbation paradigm. With this paradigm, speakers receive their own manipulated voice signal in real-time while they produce a simple speech segment. In response, they spontaneously compensate for the manipulation. In the case of vowel formant control, various studies have reported behavioral and neural mechanisms of how auditory feedback is processed for compensatory behavior...
December 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/28031398/principles-of-auditory-processing-differ-between-sensory-and-premotor-structures-of-the-songbird-forebrain
#5
Efe Soyman, David S Vicario
Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird's own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus specific adaptation with markedly different dynamics between the two structures...
December 28, 2016: Journal of Neurophysiology
https://www.readbyqxmd.com/read/27991470/air-traffic-controllers-long-term-speech-in-noise-training-effects-a-control-group-study
#6
Maria T P Zaballos, Daniel P Plasencia, María L Z González, Angel R de Miguel, Ángel R Macías
INTRODUCTION: Speech perception in noise relies on the capacity of the auditory system to process complex sounds using sensory and cognitive skills. The possibility that these can be trained during adulthood is of special interest in auditory disorders, where speech in noise perception becomes compromised. Air traffic controllers (ATC) are constantly exposed to radio communication, a situation that seems to produce auditory learning. The objective of this study has been to quantify this effect...
November 2016: Noise & Health
https://www.readbyqxmd.com/read/27942372/transient-noise-reduction-in-cochlear-implant-users-a-multi-band-approach
#7
Karl-Heinz Dyballa, Phillipp Hehrmann, Volkmar Hamacher, Thomas Lenarz, Andreas Buechner
A previously-tested transient noise reduction (TNR) algorithm for cochlear implant (CI) users was modified to detect and attenuate transients independently across multiple frequency bands. Since speech and transient noise are often spectrally distinct, we hypothesized that benefits in speech intelligibility can be achieved over the earlier single-band design. Fifteen experienced CI users (49 to 72 years) were tested unilaterally using pre-processed stimuli delivered directly to a speech processor. Speech intelligibility in transient and soft stationary noise, subjective sound quality and the recognition of warning signals were investigated in three processing conditions: no TNR (TNRoff), single-band TNR (TNRsgl) and multi-band TNR (TNRmult)...
August 23, 2016: Audiology Research
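One way to sketch the multi-band transient detection and attenuation this abstract describes: split the signal into frequency bands, flag a transient in a band when its fast envelope jumps well above its slow envelope, and attenuate only that band while the flag holds. The band edges, time constants, threshold and attenuation depth below are assumptions for illustration, not the algorithm actually implemented in the study.

```python
import numpy as np
from scipy.signal import butter, lfilter

def smooth(x, fs, tau):
    """One-pole envelope smoother with time constant `tau` seconds."""
    alpha = np.exp(-1.0 / (fs * tau))
    return lfilter([1 - alpha], [1, -alpha], x)

def multiband_tnr(x, fs, edges=(100, 500, 2000, 7000),
                  thresh_db=10.0, atten_db=12.0):
    """Attenuate transient peaks independently in each frequency band."""
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = lfilter(b, a, x)
        fast = smooth(np.abs(band), fs, tau=0.001)    # fast envelope (~1 ms)
        slow = smooth(np.abs(band), fs, tau=0.050)    # slow envelope (~50 ms)
        ratio_db = 20 * np.log10((fast + 1e-12) / (slow + 1e-12))
        gain = np.where(ratio_db > thresh_db, 10 ** (-atten_db / 20), 1.0)
        out += band * gain                            # per-band attenuation only
    return out

if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(0)
    x = 0.1 * rng.standard_normal(fs)
    x[8000:8040] += 1.0                               # injected click-like transient
    print(np.max(np.abs(x)), np.max(np.abs(multiband_tnr(x, fs))))
```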
https://www.readbyqxmd.com/read/27921268/audiovisual-sentence-recognition-not-predicted-by-susceptibility-to-the-mcgurk-effect
#8
Kristin J Van Engen, Zilong Xie, Bharath Chandrasekaran
In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners' auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception...
December 5, 2016: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/27913315/speech-enhancement-based-on-neural-networks-improves-speech-intelligibility-in-noise-for-cochlear-implant-users
#9
Tobias Goehring, Federico Bolner, Jessica J M Monaghan, Bas van Dijk, Andrzej Zarowski, Stefan Bleeck
Speech understanding in noisy environments is still one of the major challenges for cochlear implant (CI) users in everyday life. We evaluated a speech enhancement algorithm based on neural networks (NNSE) for improving speech intelligibility in noise for CI users. The algorithm decomposes the noisy speech signal into time-frequency units, extracts a set of auditory-inspired features and feeds them to the neural network to produce an estimation of which frequency channels contain more perceptually important information (higher signal-to-noise ratio, SNR)...
November 30, 2016: Hearing Research
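A rough, untrained sketch of the pipeline outlined in this abstract: decompose the noisy signal into time-frequency units (here via STFT), compute a feature vector per frame, let a small feed-forward network estimate a per-channel gain (a proxy for which channels carry the higher SNR), and apply the gains before resynthesis. The network weights below are random placeholders; the study's actual features, architecture and training are not specified in the excerpt above.

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    return np.array([np.fft.rfft(f) for f in frames])       # (frames, bins)

def istft(spec, n_fft=256, hop=128):
    out = np.zeros(hop * len(spec) + n_fft)
    win = np.hanning(n_fft)
    for i, frame in enumerate(spec):
        out[i * hop:i * hop + n_fft] += np.fft.irfft(frame) * win
    return out

class TinyGainNet:
    """Placeholder MLP with random weights; stands in for the trained network."""
    def __init__(self, n_in, n_hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.w2 = rng.standard_normal((n_hidden, n_in)) * 0.1
    def __call__(self, feats):
        h = np.tanh(feats @ self.w1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))          # gains in (0, 1) per channel

def enhance(x, n_fft=256, hop=128):
    spec = stft(x, n_fft, hop)
    feats = np.log(np.abs(spec) + 1e-6)                      # simple log-magnitude features
    gains = TinyGainNet(feats.shape[1])(feats)               # estimated per-channel gains
    return istft(spec * gains, n_fft, hop)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print(enhance(rng.standard_normal(16000)).shape)
```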
https://www.readbyqxmd.com/read/27908047/assessing-the-perceptual-contributions-of-level-dependent-segments-to-sentence-intelligibility
#10
Tian Guan, Guang-Xing Chu, Yu Tsao, Fei Chen
The present work assessed the contributions of high root-mean-square (RMS) level (H-level, containing primarily vowels) and middle-RMS-level (M-level, with mostly consonants and vowel-consonant transitions) segments to the intelligibility of noise-masked and noise-suppressed sentences. In experiment 1, noise-masked (by speech-spectrum shaped noise and 6-talker babble) Mandarin sentences were edited to preserve only H- or M-level segments, while replacing the non-target segments with silence. In experiment 2, Mandarin sentences were subjected to four commonly-used single-channel noise-suppression algorithms before generating H-level-only and M-level-only noise-suppressed sentences...
November 2016: Journal of the Acoustical Society of America
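A small sketch of the segment-editing step described above: measure frame RMS relative to the whole-sentence RMS, keep only the H-level or M-level frames, and replace everything else with silence. The dB cut-offs below are placeholders; the study's actual segment criteria are not given in the excerpt.

```python
import numpy as np

def keep_rms_segments(x, fs, target="H", frame_ms=16,
                      h_floor_db=0.0, m_floor_db=-10.0):
    """Keep only H-level or M-level frames; replace everything else with silence.

    Frame RMS is measured in dB relative to the whole-signal RMS.
    The cut-offs (h_floor_db, m_floor_db) are illustrative placeholders.
    """
    n = int(fs * frame_ms / 1000)
    ref = np.sqrt(np.mean(x ** 2)) + 1e-12
    out = np.zeros_like(x)
    for start in range(0, len(x) - n + 1, n):
        frame = x[start:start + n]
        level_db = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) / ref + 1e-12)
        is_h = level_db >= h_floor_db
        is_m = m_floor_db <= level_db < h_floor_db
        if (target == "H" and is_h) or (target == "M" and is_m):
            out[start:start + n] = frame       # keep target-level segment
        # non-target segments stay silent (zeros)
    return out
```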
https://www.readbyqxmd.com/read/27904958/feasibility-of-an-implanted-microphone-for-cochlear-implant-listening
#11
Jean-Marc Gérard, Laurent Demanez, Caroline Salmon, Filiep Vanpoucke, Joris Walraevens, Anke Plasmans, Daniele De Siati, Philippe Lefèbvre
This study aimed at evaluating the feasibility of an implanted microphone for cochlear implants (CI) by comparison of hearing outcomes, sound quality and patient satisfaction of a subcutaneous microphone to a standard external microphone of a behind-the-ear sound processor. In this prospective feasibility study with a within-subject repeated measures design comparing the microphone modalities, ten experienced adult unilateral CI users received an implantable contralateral subcutaneous microphone attached to a percutaneous plug...
November 30, 2016: European Archives of Oto-rhino-laryngology
https://www.readbyqxmd.com/read/27894891/source-analysis-of-auditory-steady-state-responses-in-acoustic-and-electric-hearing
#12
Robert Luke, Astrid De Vos, Jan Wouters
Speech is a complex signal containing a broad variety of acoustic information. For accurate speech reception, the listener must perceive modulations over a range of envelope frequencies. Perception of these modulations is particularly important for cochlear implant (CI) users, as all commercial devices use envelope coding strategies. Prolonged deafness affects the auditory pathway. However, little is known of how cochlear implantation affects the neural processing of modulated stimuli. This study investigates and contrasts the neural processing of envelope rate modulated signals in acoustic and CI listeners...
November 25, 2016: NeuroImage
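Envelope-rate-modulated signals of the kind used to evoke auditory steady-state responses are typically sinusoidally amplitude-modulated carriers. A minimal generator follows; the 40 Hz modulation rate and 1 kHz tone carrier are illustrative defaults, not the stimuli reported by the study.

```python
import numpy as np

def am_stimulus(fs=44100, dur_s=1.0, f_carrier=1000.0, f_mod=40.0, depth=1.0):
    """Sinusoidally amplitude-modulated tone: (1 + depth*sin(2*pi*f_mod*t)) * carrier."""
    t = np.arange(int(fs * dur_s)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * f_mod * t)
    carrier = np.sin(2 * np.pi * f_carrier * t)
    x = envelope * carrier
    return x / np.max(np.abs(x))                # normalize to +/-1

stim = am_stimulus()
print(stim.shape)
```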
https://www.readbyqxmd.com/read/27894376/brain-substrates-underlying-auditory-speech-priming-in-healthy-listeners-and-listeners-with-schizophrenia
#13
C Wu, Y Zheng, J Li, H Wu, S She, S Liu, Y Ning, L Li
BACKGROUND: Under 'cocktail party' listening conditions, healthy listeners and listeners with schizophrenia can use temporally pre-presented auditory speech-priming (ASP) stimuli to improve target-speech recognition, even though listeners with schizophrenia are more vulnerable to informational speech masking. METHOD: Using functional magnetic resonance imaging, this study searched for both brain substrates underlying the unmasking effect of ASP in 16 healthy controls and 22 patients with schizophrenia, and brain substrates underlying schizophrenia-related speech-recognition deficits under speech-masking conditions...
November 29, 2016: Psychological Medicine
https://www.readbyqxmd.com/read/27891665/spectral-summation-and-facilitation-in-on-and-off-responses-for-optimized-representation-of-communication-calls-in-mouse-inferior-colliculus
#14
Alexander G Akimov, Marina A Egorova, Günter Ehret
Selectivity for processing of species-specific vocalizations and communication sounds has often been associated with the auditory cortex. The midbrain inferior colliculus, however, is the first center in the auditory pathways of mammals integrating acoustic information processed in separate nuclei and channels in the brainstem and, therefore, could significantly contribute to enhance the perception of species' communication sounds. Here, we used natural wriggling calls of mouse pups, which communicate need for maternal care to adult females, and further 15 synthesized sounds to test the hypothesis that neurons in the central nucleus of the inferior colliculus of adult females optimize their response rates for reproduction of the three main harmonics (formants) of wriggling calls...
November 27, 2016: European Journal of Neuroscience
https://www.readbyqxmd.com/read/27885978/factors-affecting-daily-cochlear-implant-use-in-children-datalogging-evidence
#15
Vijayalakshmi Easwar, Joseph Sanfilippo, Blake Papsin, Karen Gordon
BACKGROUND: Children with profound hearing loss can gain access to sound through cochlear implants (CIs), but these devices must be worn consistently to promote auditory development. Although subjective parent reports have identified several factors limiting long-term CI use in children, it is also important to understand the day-to-day issues which may preclude consistent device use. In the present study, objective measures gathered through datalogging software were used to quantify the following in children: (1) number of hours of CI use per day, (2) practical concerns including repeated disconnections between the external transmission coil and the internal device (termed "coil-offs"), and (3) listening environments experienced during daily use...
November 2016: Journal of the American Academy of Audiology
https://www.readbyqxmd.com/read/27884866/a-case-of-specific-language-impairment-in-a-deaf-signer-of-american-sign-language
#16
David Quinto-Pozos, Jenny L Singleton, Peter C Hauser
This article describes the case of a deaf native signer of American Sign Language (ASL) with a specific language impairment (SLI). School records documented normal cognitive development but atypical language development. Data include school records; interviews with the child, his mother, and school professionals; ASL and English evaluations; and a comprehensive neuropsychological and psychoeducational evaluation, and they span an approximate period of 7.5 years (11;10-19;6) including scores from school records (11;10-16;5) and a 3...
November 23, 2016: Journal of Deaf Studies and Deaf Education
https://www.readbyqxmd.com/read/27877144/rhythm-on-your-lips
#17
Marcela Peña, Alan Langus, César Gutiérrez, Daniela Huepe-Artigas, Marina Nespor
The Iambic-Trochaic Law (ITL) accounts for speech rhythm, grouping sounds as iambs if they alternate in duration, or as trochees if they alternate in pitch and/or intensity. The two different rhythms signal word order, one of the basic syntactic properties of language. We investigated the extent to which iambic and trochaic phrases could be auditorily and visually recognized, when visual stimuli engage lip reading. Our results show that both rhythmic patterns were recognized from both auditory and visual stimuli, suggesting that speech rhythm has a multimodal representation...
2016: Frontiers in Psychology
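A small sketch of auditory sequences carrying the grouping cues the ITL describes: tone pairs alternating in duration (biasing an iambic, weak-strong grouping) versus pairs alternating in intensity (biasing a trochaic, strong-weak grouping). The durations, levels and tone frequency are illustrative, not those used in the study.

```python
import numpy as np

def tone(fs, dur_s, freq=440.0, amp=1.0):
    t = np.arange(int(fs * dur_s)) / fs
    return amp * np.sin(2 * np.pi * freq * t)

def alternating_sequence(fs=44100, n_pairs=4, cue="duration"):
    """Concatenate tone pairs whose elements alternate in duration or intensity."""
    gap = np.zeros(int(fs * 0.05))
    seq = []
    for _ in range(n_pairs):
        if cue == "duration":                 # short-long alternation -> iambic grouping
            seq += [tone(fs, 0.10), gap, tone(fs, 0.25), gap]
        else:                                 # loud-soft alternation -> trochaic grouping
            seq += [tone(fs, 0.15, amp=1.0), gap, tone(fs, 0.15, amp=0.4), gap]
    return np.concatenate(seq)

iambic = alternating_sequence(cue="duration")
trochaic = alternating_sequence(cue="intensity")
print(len(iambic), len(trochaic))
```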
https://www.readbyqxmd.com/read/27875590/spatio-temporal-progression-of-cortical-activity-related-to-continuous-overt-and-covert-speech-production-in-a-reading-task
#18
Jonathan S Brumberg, Dean J Krusienski, Shreya Chakrabarti, Aysegul Gunduz, Peter Brunner, Anthony L Ritaccio, Gerwin Schalk
How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech...
2016: PloS One
https://www.readbyqxmd.com/read/27866186/consequences-of-stimulus-type-on-higher-order-processing-in-single-sided-deaf-cochlear-implant-users
#19
Mareike Finke, Pascale Sandmann, Hanna Bönitz, Andrej Kral, Andreas Büchner
Single-sided deaf subjects with a cochlear implant (CI) provide the unique opportunity to compare central auditory processing of the electrical input (CI ear) and the acoustic input (normal-hearing, NH, ear) within the same individual. In these individuals, sensory processing differs between their two ears, while cognitive abilities are the same irrespective of the sensory input. To better understand perceptual-cognitive factors modulating speech intelligibility with a CI, this electroencephalography study examined the central-auditory processing of words, the cognitive abilities, and the speech intelligibility in 10 postlingually single-sided deaf CI users...
November 19, 2016: Audiology & Neuro-otology
https://www.readbyqxmd.com/read/27859274/configurations-of-time-the-body-and-verbal-communication-temporality-in-patients-who-express-their-suffering-through-the-body
#20
José Eduardo Fischbein
This paper focuses on the study of temporality used as a clinical pointer to processes of affect regulation in patients who express their suffering through a discourse driven by bodily allusions. Differences between symptoms revealed by body language that conveys an experience of conflict (psychoneurotic symptoms) and somatizations are reviewed. Somatization is examined as a benchmark for the failure to resolve states of tension. The body in the session is conceptualized as a speech event. The body is considered as a psychical construction organized in the exchanges with a fellow human-being...
November 17, 2016: International Journal of Psycho-analysis
