Read by QxMD

Speech enhancement

https://www.readbyqxmd.com/read/27921115/-speech-audiometry-for-indication-of-conventional-and-implantable-hearing-aids
#1
U Hoppe, A Hast
The social function of the human hearing apparatus is the comprehension of speech. Auditory rehabilitation aims to enhance speech perception in everyday life; consequently, audiological evaluation includes the measurement of speech perception. Many speech audiometric methods developed in German-speaking countries are suitable for quantifying speech perception in quiet and in noise and for addressing specific diagnostic questions. To establish the indication for technical hearing systems such as hearing aids and cochlear implants, the Freiburg monosyllabic test has been employed successfully for many years...
December 5, 2016: HNO
https://www.readbyqxmd.com/read/27918063/long-lasting-musical-training-modifies-language-processing-a-dichotic-fused-word-test-study
#2
L Sebastiani, E Castellani
Musical training modifies neural areas associated with both music and language, and enhances speech perception and discrimination by engaging the right-hemisphere regions classically associated with music processing. On this basis, we hypothesized that participants with extended musical training would show reduced left-hemisphere dominance for speech. To verify this hypothesis, two groups of right-handed individuals, one with long-term musical training and one with no musical training, participated in a Dichotic Fused Word Test consisting of the simultaneous presentation of different pairs of rhyming words and pseudo-words, one to the left ear and one to the right...
January 1, 2016: Archives Italiennes de Biologie
https://www.readbyqxmd.com/read/27914434/long-term-musical-experience-and-auditory-and-visual-perceptual-abilities-under-adverse-conditions
#3
Esperanza M Anaya, David B Pisoni, William G Kronenberger
Musicians have been shown to have enhanced speech-perception-in-noise skills. It is unclear whether these improvements are limited to the auditory modality, as no research has examined musicians' visual perceptual abilities under degraded conditions. The current study examined associations between long-term musical experience and visual perception under noisy or degraded conditions. The performance of 11 musicians and 11 age-matched nonmusicians was compared on several auditory and visual perception-in-noise measures...
September 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/27913315/speech-enhancement-based-on-neural-networks-improves-speech-intelligibility-in-noise-for-cochlear-implant-users
#4
Tobias Goehring, Federico Bolner, Jessica J M Monaghan, Bas van Dijk, Andrzej Zarowski, Stefan Bleeck
Speech understanding in noisy environments is still one of the major challenges for cochlear implant (CI) users in everyday life. We evaluated a speech enhancement algorithm based on neural networks (NNSE) for improving speech intelligibility in noise for CI users. The algorithm decomposes the noisy speech signal into time-frequency units, extracts a set of auditory-inspired features and feeds them to the neural network to produce an estimation of which frequency channels contain more perceptually important information (higher signal-to-noise ratio, SNR)...
November 29, 2016: Hearing Research
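The processing chain described in the NNSE abstract above (time-frequency decomposition, auditory-inspired feature extraction, a network estimating which units carry higher SNR, then attenuation of the rest) can be sketched roughly as follows. The frame size, the log-magnitude features, and the untrained single-sigmoid-layer stand-in "network" are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def stft_mag(signal, frame=256, hop=128):
    """Decompose a signal into time-frequency magnitude units via a simple STFT."""
    win = np.hanning(frame)
    frames = [signal[i:i + frame] * win
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))  # (n_frames, n_bins)

def estimate_snr_mask(mag, weights, bias):
    """Stand-in 'network': one sigmoid layer mapping log-magnitude features to a
    per-unit estimate of how perceptually important (high-SNR) that unit is."""
    feats = np.log(mag + 1e-8)
    z = feats @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))       # values in (0, 1), used as a gain mask

rng = np.random.default_rng(0)
noisy = rng.normal(size=4096)             # placeholder for a noisy-speech signal
mag = stft_mag(noisy)
n_bins = mag.shape[1]
w = rng.normal(scale=0.01, size=(n_bins, n_bins))  # untrained placeholder weights
mask = estimate_snr_mask(mag, w, np.zeros(n_bins))
enhanced = mag * mask                     # attenuate units judged low-SNR
```

In the actual algorithm the network is trained on pairs of noisy and clean speech; here the weights are random, so only the data flow of the enhancement stage is shown.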
https://www.readbyqxmd.com/read/27908052/visual-tactile-integration-in-speech-perception-evidence-for-modality-neutral-speech-primitives
#5
Katie Bicevskis, Donald Derrick, Bryan Gick
Audio-visual [McGurk and MacDonald (1976). Nature 264, 746-748] and audio-tactile [Gick and Derrick (2009). Nature 462(7272), 502-504] speech stimuli enhance speech perception over audio stimuli alone. In addition, multimodal speech stimuli form an asymmetric window of integration that is consistent with the relative speeds of the various signals [Munhall, Gribble, Sacco, and Ward (1996). Percept. Psychophys. 58(3), 351-362; Gick, Ikegami, and Derrick (2010). J. Acoust. Soc. Am. 128(5), EL342-EL346]. In this experiment, participants were presented with video of faces producing /pa/ and /ba/ syllables, both alone and with air puffs occurring synchronously and at different timings up to 300 ms before and after the stop release...
November 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/27904958/feasibility-of-an-implanted-microphone-for-cochlear-implant-listening
#6
Jean-Marc Gérard, Laurent Demanez, Caroline Salmon, Filiep Vanpoucke, Joris Walraevens, Anke Plasmans, Daniele De Siati, Philippe Lefèbvre
This study aimed to evaluate the feasibility of an implanted microphone for cochlear implants (CI) by comparing the hearing outcomes, sound quality, and patient satisfaction of a subcutaneous microphone with those of the standard external microphone of a behind-the-ear sound processor. In this prospective feasibility study with a within-subject repeated-measures design comparing the two microphone modalities, ten experienced adult unilateral CI users received an implantable contralateral subcutaneous microphone attached to a percutaneous plug...
November 30, 2016: European Archives of Oto-rhino-laryngology
https://www.readbyqxmd.com/read/27904816/an-unusual-presentation-of-nocardiosis-in-an-allogeneic-transplant-recipient
#7
Uroosa Ibrahim, Amina Saqib, Farhan Mohammad, Terenig Terjanian
Nocardiosis is a rare cause of opportunistic infection post hematopoietic stem cell transplant (HSCT) occurring in about 0.3% of patients. The risk factors include delayed immune reconstitution, prolonged neutropenia, and graft-versus-host disease. The most common site of infection is the lung, followed by the brain and the skin. Concomitant pulmonary and central nervous system (CNS) nocardiosis is an extremely rare entity as presented in our case. We present the case of a 72-year-old male at 137 days post transplant presenting with complaints of headache and slurred speech...
October 17, 2016: Curēus
https://www.readbyqxmd.com/read/27894376/brain-substrates-underlying-auditory-speech-priming-in-healthy-listeners-and-listeners-with-schizophrenia
#8
C Wu, Y Zheng, J Li, H Wu, S She, S Liu, Y Ning, L Li
BACKGROUND: Under 'cocktail party' listening conditions, both healthy listeners and listeners with schizophrenia can use temporally pre-presented auditory speech-priming (ASP) stimuli to improve target-speech recognition, even though listeners with schizophrenia are more vulnerable to informational speech masking. METHOD: Using functional magnetic resonance imaging, this study searched for the brain substrates underlying the unmasking effect of ASP in 16 healthy controls and 22 patients with schizophrenia, and for the brain substrates underlying schizophrenia-related speech-recognition deficits under speech-masking conditions...
November 29, 2016: Psychological Medicine
https://www.readbyqxmd.com/read/27891665/spectral-summation-and-facilitation-in-on-and-off-responses-for-optimized-representation-of-communication-calls-in-mouse-inferior-colliculus
#9
Alexander G Akimov, Marina A Egorova, Günter Ehret
Selectivity for processing species-specific vocalizations and communication sounds has often been associated with the auditory cortex. The midbrain inferior colliculus, however, is the first center in the auditory pathways of mammals to integrate acoustic information processed in separate nuclei and channels of the brainstem, and could therefore contribute significantly to enhancing the perception of a species' communication sounds. Here, we used natural wriggling calls of mouse pups, which communicate need for maternal care to adult females, and 15 further synthesized sounds to test the hypothesis that neurons in the central nucleus of the inferior colliculus of adult females optimize their response rates for reproduction of the three main harmonics (formants) of wriggling calls...
November 27, 2016: European Journal of Neuroscience
https://www.readbyqxmd.com/read/27891084/differential-effects-of-visual-acoustic-biofeedback-intervention-for-residual-speech-errors
#10
Tara McAllister Byun, Heather Campbell
Recent evidence suggests that the incorporation of visual biofeedback technologies may enhance response to treatment in individuals with residual speech errors. However, there is a need for controlled research systematically comparing biofeedback versus non-biofeedback intervention approaches. This study implemented a single-subject experimental design with a crossover component to investigate the relative efficacy of visual-acoustic biofeedback and traditional articulatory treatment for residual rhotic errors...
2016: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/27880065/fluctuations-in-the-emotional-intelligence-of-therapy-students-during-clinical-placements-implication-for-educators-supervisors-and-students
#11
Nigel Gribble, Richard K Ladyshewsky, Richard Parsons
This study investigated changes in the emotional intelligence (EI) of occupational therapy, physiotherapy, and speech pathology students (therapy students). Clinical placements have multiple benefits, including the development of interprofessional, practice, and interpersonal skills. Higher EI competencies have been shown to have a positive impact on patient outcomes, teamwork skills, coping with stress, and patient satisfaction. Data for this study were collected at two time points: before third-year therapy students commenced extended clinical placements (T1, 261 students) and approximately 7 months later, after students had completed one or more clinical placements (T2, 109 students)...
November 23, 2016: Journal of Interprofessional Care
https://www.readbyqxmd.com/read/27876531/patient-centered-quality-of-life-measures-after-alloplastic-temporomandibular-joint-replacement-surgery
#12
X Alakailly, D Schwartz, N Alwanni, C Demko, M A Altay, Y Kilinc, D A Baur, F Quereshy
The purpose of this study was to evaluate patient-reported outcome measures of quality of life (QoL) for patients with end-stage temporomandibular joint (TMJ) disease who have undergone TMJ prosthetic replacement. The records of 36 patients who had undergone alloplastic total joint replacement procedures were analyzed. Patients were treated using either TMJ Concepts or Biomet/Lorenz prosthetics. Patients were asked to complete a 12-item TMJ-S-QoL survey, which encompassed questions pertaining to pain, speech, chewing function, and various aspects of social life and mental health...
November 19, 2016: International Journal of Oral and Maxillofacial Surgery
https://www.readbyqxmd.com/read/27867090/visual-cortex-responses-reflect-temporal-structure-of-continuous-quasi-rhythmic-sensory-stimulation
#13
Christian Keitel, Gregor Thut, Joachim Gross
Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated, cases. Here, using EEG, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies varying continuously within the ranges of the classical theta (4-7 Hz), alpha (8-13 Hz) and beta (14-20 Hz) bands...
November 17, 2016: NeuroImage
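A quasi-rhythmic stimulus of the kind described above, whose instantaneous frequency drifts continuously within a band rather than staying fixed, can be generated by integrating a slowly varying frequency into phase. The band edges, sampling rate, and drift step below are illustrative assumptions, not the paper's stimulus parameters:

```python
import numpy as np

def quasi_rhythmic(lo_hz, hi_hz, duration_s=2.0, fs=500, seed=0):
    """Sinusoid whose instantaneous frequency performs a slow random walk
    confined to [lo_hz, hi_hz] (e.g. the theta band, 4-7 Hz)."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    freq = np.empty(n)
    f = (lo_hz + hi_hz) / 2.0
    for i in range(n):
        f = np.clip(f + rng.normal(scale=0.05), lo_hz, hi_hz)  # slow drift
        freq[i] = f
    phase = 2 * np.pi * np.cumsum(freq) / fs  # integrate frequency into phase
    return np.sin(phase), freq

theta_stim, theta_freq = quasi_rhythmic(4.0, 7.0)  # theta-band stimulus trace
```

Integrating frequency into phase (rather than multiplying a varying frequency by time directly) keeps the waveform continuous while the rate wanders inside the band.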
https://www.readbyqxmd.com/read/27864051/individual-differences-in-speech-in-noise-perception-parallel-neural-speech-processing-and-attention-in-preschoolers
#14
Elaine C Thompson, Kali Woodruff Carr, Travis White-Schwoch, Sebastian Otto-Meyer, Nina Kraus
From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3-5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ∼12 months), we followed a cohort of 59 preschoolers, ages 3...
November 15, 2016: Hearing Research
https://www.readbyqxmd.com/read/27852738/contributions-of-rapid-neuromuscular-transmission-to-the-fine-control-of-acoustic-parameters-of-birdsong
#15
Caitlin Mencio, Balagurunathan Kuberan, Franz Goller
Neural control of complex vocal behaviors, such as birdsong and speech, requires integration of biomechanical nonlinearities through muscular output. Although control of airflow and tension of vibrating tissues are known functions of vocal muscles, it remains unclear how specific muscle characteristics contribute to specific acoustic parameters. To address this gap, we removed heparan sulfate chains using heparitinases to subtly perturb neuromuscular transmission in the syrinx of adult male zebra finches (Taeniopygia guttata)...
November 16, 2016: Journal of Neurophysiology
https://www.readbyqxmd.com/read/27846209/prediction-errors-but-not-sharpened-signals-simulate-multivoxel-fmri-patterns-during-speech-perception
#16
Helen Blank, Matthew H Davis
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further...
November 2016: PLoS Biology
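The two coding schemes contrasted in the abstract above can be caricatured in a few lines: under Sharpened Signals, expected features of the input are enhanced, whereas under Predictive Coding the representation is the residual the prior failed to predict. The feature vectors and the specific enhancement/subtraction rules are illustrative assumptions, not the models tested in the paper:

```python
import numpy as np

# Hypothetical feature evidence for a heard word and the listener's expectation.
sensory_input = np.array([0.2, 0.9, 0.4, 0.1])  # bottom-up evidence per feature
expectation   = np.array([0.0, 1.0, 0.5, 0.0])  # prior: which features are expected

# Sharpened Signals: expected features are enhanced via interactive activation,
# so the representation correlates positively with the expectation.
sharpened = sensory_input * (1.0 + expectation)

# Predictive Coding: expected features are suppressed; what remains represented
# is the prediction error (unexpected features of the input).
prediction_error = sensory_input - expectation
```

The paper's multivoxel analysis asks which of these two representations better matches measured fMRI patterns; the toy vectors here only show how the two schemes diverge on the same input.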
https://www.readbyqxmd.com/read/27837259/auditory-spatial-representations-of-the-world-are-compressed-in-blind-humans
#17
Andrew J Kolarik, Shahina Pardhan, Silvia Cirstea, Brian C J Moore
Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals...
November 11, 2016: Experimental Brain Research
https://www.readbyqxmd.com/read/27833037/atypical-neural-synchronization-to-speech-envelope-modulations-in-dyslexia
#18
Astrid De Vos, Sophie Vanvooren, Jolijn Vanderauwera, Pol Ghesquière, Jan Wouters
A fundamental deficit in the synchronization of neural oscillations to temporal information in speech could underlie phonological processing problems in dyslexia. In this study, the hypothesis of a neural synchronization impairment is investigated more specifically as a function of different neural oscillatory bands and temporal information rates in speech. Auditory steady-state responses to 4, 10, 20 and 40 Hz modulations were recorded in normal-reading and dyslexic adolescents to measure neural synchronization of theta, alpha, beta and low-gamma oscillations to syllabic- and phonemic-rate information...
November 7, 2016: Brain and Language
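Auditory steady-state response strength at a given modulation rate is commonly read out as the spectral amplitude at exactly that frequency, taken as an index of how strongly neural oscillations synchronize to that temporal rate. The synthetic "EEG" below, a 20 Hz component buried in noise, and the 33 Hz control frequency are illustrative assumptions, not the study's recordings:

```python
import numpy as np

def assr_amplitude(eeg, fs, mod_hz):
    """Spectral amplitude at the modulation frequency: a simple index of
    neural synchronization to that temporal rate."""
    spec = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - mod_hz))]

fs = 250
t = np.arange(0, 4.0, 1.0 / fs)                 # 4 s of synthetic 'EEG'
rng = np.random.default_rng(1)
eeg = 0.5 * np.sin(2 * np.pi * 20 * t) + rng.normal(scale=1.0, size=t.size)

at_20 = assr_amplitude(eeg, fs, 20.0)  # entrained modulation rate
at_33 = assr_amplitude(eeg, fs, 33.0)  # control frequency with no driver
```

A reliable response shows a clear peak at the entrained rate relative to neighboring, non-driven frequencies; group comparisons like the dyslexia study then contrast these amplitudes across bands and reading groups.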
https://www.readbyqxmd.com/read/27826235/acoustic-detail-but-not-predictability-of-task-irrelevant-speech-disrupts-working-memory
#19
Malte Wöstmann, Jonas Obleser
Attended speech is comprehended better not only if more acoustic detail is available, but also if it is semantically highly predictable. But can more acoustic detail or higher predictability turn into disadvantages and distract a listener if the speech signal is to be ignored? Also, does the degree of distraction increase for older listeners who typically show a decline in attentional control ability? Adopting the irrelevant-speech paradigm, we tested whether younger (age 23-33 years) and older (60-78 years) listeners' working memory for the serial order of spoken digits would be disrupted by the presentation of task-irrelevant speech varying in its acoustic detail (using noise-vocoding) and its semantic predictability (of sentence endings)...
2016: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/27810647/investigating-the-feasibility-of-using-transcranial-direct-current-stimulation-to-enhance-fluency-in-people-who-stutter
#20
Jennifer Chesters, Kate E Watkins, Riikka Möttönen
Developmental stuttering is a disorder of speech fluency affecting 1% of the adult population. Long-term reductions in stuttering are difficult for adults to achieve with behavioural therapies. We investigated whether a single session of transcranial direct current stimulation (TDCS) could improve fluency in people who stutter (PWS). In separate sessions, either anodal TDCS (1 mA for 20 min) or sham stimulation was applied over the left inferior frontal cortex while PWS read sentences aloud. Fluency was induced during the stimulation period by using choral speech, that is, participants read in unison with another speaker...
October 31, 2016: Brain and Language