Read by QxMD

Scene hearing aids

Jing Xia, Buye Xu, Shareka Pentony, Jingjing Xu, Jayaganesh Swaminathan
Many hearing-aid wearers have difficulty understanding speech in reverberant, noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and in hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, the speech intelligibility of aided hearing-impaired listeners was poorer than that of their normal-hearing counterparts...
March 2018: Journal of the Acoustical Society of America
Andrew J Oxenham
Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights...
January 4, 2018: Annual Review of Psychology
Marzieh Haghighi, Mohammad Moghadamfalahi, Murat Akcakaya, Barbara G Shinn-Cunningham, Deniz Erdogmus
Recent findings indicate that brain interfaces have the potential to enable attention-guided auditory scene analysis and manipulation in applications such as hearing aids and augmented/virtual environments. Specifically, noninvasively acquired electroencephalography (EEG) signals have been demonstrated to carry some evidence regarding which of multiple synchronous speech waveforms the subject attends to. In this paper, we demonstrate that: 1) using data- and model-driven cross-correlation features yields competitive binary auditory attention classification results with at most 20 s of EEG from 16 channels, or even a single well-positioned channel; 2) a model calibrated using equal-energy speech waveforms competing for attention can perform well at estimating attention in closed-loop unbalanced-energy speech waveform situations, where the speech amplitudes are modulated by the estimated attention posterior probability distribution; 3) such a model performs even better if it is corrected (linearly, in this instance) for the dependence of the EEG evidence on the speech weights in the mixture; and 4) calibrating a model on population EEG can yield acceptable performance for new individuals/users; therefore, EEG-based auditory attention classifiers may generalize across individuals, leading to reduced or eliminated calibration time and effort...
November 2017: IEEE Transactions on Neural Systems and Rehabilitation Engineering
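The cross-correlation features described above can be illustrated with a small sketch. Everything here is synthetic and illustrative, not taken from the paper: the "EEG" is just a delayed, noisy copy of one speech envelope, and the hypothetical `classify_attention` helper labels the attended stream as the one whose envelope shows the larger peak correlation with the EEG.

```python
import numpy as np

def peak_xcorr(eeg, envelope, max_lag=50):
    """Peak absolute normalized cross-correlation between one EEG channel and a
    speech envelope, over lags where the EEG trails the stimulus."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    envelope = (envelope - envelope.mean()) / envelope.std()
    n = len(eeg)
    corrs = [np.dot(eeg[lag:], envelope[:n - lag]) / (n - lag)
             for lag in range(max_lag)]
    return max(abs(c) for c in corrs)

def classify_attention(eeg, env_a, env_b):
    """Binary attention decision: the attended stream is the one whose
    envelope correlates more strongly with the EEG."""
    return 'A' if peak_xcorr(eeg, env_a) > peak_xcorr(eeg, env_b) else 'B'

# Toy demo: the "EEG" is a delayed, noisy copy of stream A's envelope.
rng = np.random.default_rng(0)
env_a = rng.standard_normal(2000)
env_b = rng.standard_normal(2000)
eeg = np.roll(env_a, 10) + 0.5 * rng.standard_normal(2000)
print(classify_attention(eeg, env_a, env_b))  # 'A'
```

Real decoders operate on band-limited EEG and Hilbert envelopes of the speech rather than raw samples, but the decision rule has the same shape: correlate, then pick the larger peak.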
Joshua G W Bernstein, Gerald I Schuchman, Arnaldo L Rivera
BACKGROUND: Cochlear implants (CIs) can improve speech-in-noise performance for listeners with unilateral sensorineural deafness. However, these benefits are modest and in most cases limited to head-shadow advantages, with little evidence of binaural squelch. HYPOTHESIS: The goal of the investigation was to determine whether CI listeners with normal hearing or moderate hearing loss in the contralateral ear would receive a larger head-shadow benefit for target speech and noise originating from opposite sides of the head, and whether listeners would experience binaural squelch in the free field in a test involving interfering talkers...
August 2017: Otology & Neurotology
Sarah Meehan, Elizabeth A Hough, Gemma Crundwell, Rachel Knappett, Mark Smith, David M Baguley
BACKGROUND: Many of the world's population have hearing loss in one ear; current statistics indicate that up to 10% of the population may be affected. Although the detrimental impact of bilateral hearing loss, hearing aids, and cochlear implants upon music appreciation is well recognized, studies on the influence of single-sided deafness (SSD) are sparse. PURPOSE: We sought to investigate whether a single-sided hearing loss can cause problems with music appreciation, despite normal hearing in the other ear...
May 2017: Journal of the American Academy of Audiology
Jace Wolfe, Mila Duke, Erin Schafer, Christine Jones, Lori Rakita
BACKGROUND: Children with hearing loss experience significant difficulty understanding speech in noisy and reverberant situations. Adaptive noise management technologies, such as fully adaptive directional microphones and digital noise reduction, have the potential to improve communication in noise for children with hearing aids. However, there are no published studies evaluating the potential benefits children receive from the use of adaptive noise management technologies in simulated real-world environments as well as in daily situations...
May 2017: Journal of the American Academy of Audiology
Henrik Gert Hassager, Alan Wiinberg, Torsten Dau
This study investigated the effects of fast-acting hearing-aid compression on normal-hearing and hearing-impaired listeners' spatial perception in a reverberant environment. Three compression schemes-independent compression at each ear, linked compression between the two ears, and "spatially ideal" compression operating solely on the dry source signal-were considered using virtualized speech and noise bursts. Listeners indicated the location and extent of their perceived sound images on the horizontal plane...
April 2017: Journal of the Acoustical Society of America
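The contrast between independent and linked compression can be sketched in a few lines. This is a deliberately minimal model, assuming a single broadband compression channel with invented threshold and ratio values; it only shows the level arithmetic, namely why linking the two ears' gains preserves interaural level differences (ILDs) while independent compression shrinks them.

```python
def gain_db(level_db, threshold=50.0, ratio=3.0):
    """Compressive gain in dB: above threshold, output level grows by only
    1/ratio dB per input dB (threshold and ratio are illustrative)."""
    excess = max(level_db - threshold, 0.0)
    return -excess * (1.0 - 1.0 / ratio)

def compress(left_db, right_db, linked):
    """Apply compression per ear (independent) or with a shared gain (linked)."""
    if linked:
        g = gain_db(max(left_db, right_db))  # one gain drives both ears
        return left_db + g, right_db + g
    return left_db + gain_db(left_db), right_db + gain_db(right_db)

# A source on the left: 70 dB at the left ear, 60 dB at the right (10 dB ILD).
ind = compress(70, 60, linked=False)
lnk = compress(70, 60, linked=True)
print(ind[0] - ind[1])  # independent compression shrinks the ILD below 10 dB
print(lnk[0] - lnk[1])  # linked compression keeps the full 10 dB ILD
```

The distorted ILDs under independent compression are one candidate explanation for the broadened or mislocated sound images such studies report.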
Giso Grimm, Birger Kollmeier, Volker Hohmann
BACKGROUND: Field tests and guided walks in real environments show that the benefit from hearing aid (HA) signal processing in real-life situations is typically lower than the predicted benefit found in laboratory studies. This suggests that laboratory test outcome measures are poor predictors of real-life HA benefits. However, a systematic evaluation of algorithms in the field is difficult due to the lack of reproducibility and control of the test conditions. Virtual acoustic environments that simulate real-life situations may allow for a systematic and reproducible evaluation of HAs under more realistic conditions, thus providing a better estimate of real-life benefit than established laboratory tests...
July 2016: Journal of the American Academy of Audiology
Chris Oreinos, Jörg M Buchholz
BACKGROUND: Assessments of hearing aid (HA) benefits in the laboratory often do not accurately reflect real-life experience. This may be improved by employing loudspeaker-based virtual sound environments (VSEs) that provide more realistic acoustic scenarios. It is unclear how far the limited accuracy of these VSEs influences measures of subjective performance. PURPOSE: Verify two common methods for creating VSEs that are to be used for assessing HA outcomes. RESEARCH DESIGN: A cocktail-party scene was created inside a meeting room and then reproduced with a 41-channel loudspeaker array inside an anechoic chamber...
July 2016: Journal of the American Academy of Audiology
Brent Edwards
Hearing loss and cognitive function interact in both a bottom-up and top-down relationship. Listening effort is tied to these interactions, and models have been developed to explain their relationship. The Ease of Language Understanding model in particular has gained considerable attention in its explanation of the effect of signal distortion on speech understanding. Signal distortion can also affect auditory scene analysis ability, however, resulting in a distorted auditory scene that can affect cognitive function, listening effort, and the allocation of cognitive resources...
July 2016: Ear and Hearing
Brian C J Moore, Thomas Baer, D Timothy Ives, Josephine Marriage, Marina Salorio-Corbetto
OBJECTIVE: To compare loudness and tone-quality ratings for sounds processed via a simulated five-channel compression hearing aid fitted using NAL-NL2 or using a modification of the fitting designed to be appropriate for the type of listening situation: speech in quiet, speech in noise, music, and noise alone. DESIGN: Ratings of loudness and tone quality were obtained for stimuli presented via a loudspeaker in front of the participant. For normal-hearing participants, levels of 50, 65, and 80 dB SPL were used...
July 2016: Ear and Hearing
Renata Coelho Borges, Marcio Holsbach Costa
This work presents a theoretical analysis of the prediction-error-method-based adaptive feedback canceller in hearing-aid applications. The studied scenario takes into account the occlusion effect caused by partial or complete closing of the ventilation opening, a situation that may arise in high-gain applications to avoid undesired whistling. Deterministic recursive equations and steady-state conditions were derived for the mean weight behaviour of the predictor and the adaptive filter. The theoretical predictions were compared to Monte Carlo simulations, showing close agreement...
August 2015: Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
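The closed loop that such a canceller works in can be sketched with a plain NLMS adaptive filter. The prediction-error method itself adds a prewhitening predictor to remove the bias that a correlated input (speech) causes; the sketch below sidesteps that by using a white input, for which unbiased identification already holds. The feedback path, forward gain, and step size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
fb_true = np.array([0.2, -0.1, 0.05, 0.02])  # invented acoustic feedback path
w = np.zeros_like(fb_true)                   # adaptive estimate of that path
gain, mu = 2.0, 0.01                         # forward gain, NLMS step size
x_in = 0.1 * rng.standard_normal(20000)      # incoming signal (white here)
out_hist = np.zeros(len(fb_true))            # recent loudspeaker samples

for x in x_in:
    feedback = fb_true @ out_hist            # sound leaking back into the mic
    mic = x + feedback
    err = mic - w @ out_hist                 # mic minus estimated feedback
    # NLMS update of the feedback-path estimate
    w += mu * err * out_hist / (out_hist @ out_hist + 1e-6)
    out = gain * err                         # amplified, "cleaned" output
    out_hist = np.roll(out_hist, 1)
    out_hist[0] = out

print(np.round(w, 2))  # estimate has converged toward fb_true
```

With speech-like (correlated) input this plain scheme converges to a biased solution; inserting the prediction-error prewhitening filters before the update is exactly the refinement the paper analyzes.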
Konstantinos Kostarakos, Heiner Römer
Communication is fundamental for our understanding of behavior. In the acoustic modality, natural scenes for communication in humans and animals are often very noisy, decreasing the chances for signal detection and discrimination. We investigated the mechanisms enabling selective hearing under natural noisy conditions for auditory receptors and interneurons of an insect. In the studied katydid Mecopoda elongata, species-specific calling songs (chirps) are strongly masked by the signals of another species, with both species communicating in sympatry...
July 22, 2015: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
Chris Oreinos, Jörg M Buchholz
There has recently been increasing interest in evaluating hearing aids (HAs) inside controlled yet realistic sound environments. A promising candidate that employs loudspeakers for realizing such sound environments is the listener-centered method of higher-order ambisonics (HOA). Although the accuracy of HOA has been widely studied, it remains unclear to what extent the results can be generalized when (1) a listener wearing HAs that may feature multi-microphone directional algorithms is considered inside the reconstructed sound field and (2) reverberant scenes are recorded and reconstructed...
June 2015: Journal of the Acoustical Society of America
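The core of HOA reproduction is an encode/decode pair, and a 2D (horizontal-only) version fits in a short sketch. The helper names `encode_2d` and `decode_gains` are hypothetical, and real systems use 3D spherical harmonics and more elaborate decoders; this mode-matching toy only shows how a plane wave's circular-harmonic coefficients turn into loudspeaker gains.

```python
import numpy as np

def encode_2d(order, azimuth):
    """Circular-harmonic (2D ambisonic) coefficients of a plane wave."""
    coeffs = [1.0]
    for m in range(1, order + 1):
        coeffs += [np.cos(m * azimuth), np.sin(m * azimuth)]
    return np.array(coeffs)

def decode_gains(order, speaker_azimuths, source_azimuth):
    """Basic mode-matching decoder for a circular loudspeaker array."""
    Y = np.stack([encode_2d(order, a) for a in speaker_azimuths])  # L x (2N+1)
    b = encode_2d(order, source_azimuth)
    # Minimum-norm speaker gains that reproduce the harmonic coefficients.
    return np.linalg.pinv(Y.T) @ b

spk = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # 8 uniform loudspeakers
g = decode_gains(order=3, speaker_azimuths=spk, source_azimuth=np.pi / 4)
# A source aligned with the 45-degree loudspeaker is dominated by that speaker:
print(np.argmax(g))  # 1
```

The order bounds the spatial resolution: an order-N 2D scene needs at least 2N + 1 loudspeakers, which is why the studies above use dense arrays (e.g., 41 channels) to keep the reconstruction accurate over a listener-sized region.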
Douglas S Brungart, Julie Cohen, Mary Cord, Danielle Zion, Sridhar Kalluri
In the real world, listeners often need to track multiple simultaneous sources in order to maintain awareness of the relevant sounds in their environments. Thus, there is reason to believe that simple single-source sound localization tasks may not accurately capture the impact that a listening device such as a hearing aid might have on a listener's level of auditory awareness. In this experiment, 10 normal-hearing listeners and 20 hearing-impaired listeners were tested in a task that required them to identify and localize sound sources in three different listening tasks of increasing complexity: a single-source localization task, where listeners identified and localized a single sound source presented in isolation; an added-source task, where listeners identified and localized a source that was added to an existing auditory scene; and a removed-source task, where listeners identified and localized a source that was removed from an existing auditory scene...
October 2014: Journal of the Acoustical Society of America
Clifford A Franklin, Letitia J White, Thomas C Franklin, Laura Smith-Olinde
BACKGROUND: The acceptable noise level (ANL) indicates how much background noise a listener is willing to accept while listening to speech. Clinically, the ANL measure is applied as a predictor of hearing-aid use. The ANL may also correlate with the percentage of time spent in different listening environments (i.e., quiet, noisy, noisy with speech present, etc.). Information retrieved from data logging could confirm this relationship. Data logging, using sound scene analysis, is a method of monitoring the different characteristics of the listening environments that a hearing-aid user experiences during a period...
June 2014: Journal of the American Academy of Audiology
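The ANL itself is a simple difference of two measured levels: the listener's most comfortable listening level (MCL) for running speech, minus the highest background noise level (BNL) they will accept while following that speech. A tiny sketch with invented example levels:

```python
def acceptable_noise_level(mcl_db, bnl_db):
    """ANL (dB) = most comfortable listening level (MCL) minus the highest
    background noise level (BNL) accepted while following speech."""
    return mcl_db - bnl_db

# Invented example: speech is most comfortable at 63 dB HL, and the listener
# tolerates babble up to 55 dB HL while still following the story.
print(acceptable_noise_level(63, 55))  # 8
```

A small ANL means the listener tolerates noise close to their comfortable speech level, which is the profile typically associated with successful full-time hearing-aid use.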
Giles Hamilton-Fletcher, Jamie Ward
Visual sensory substitution devices (SSDs) allow visually deprived individuals to navigate and recognise the 'visual world'; SSDs also provide opportunities for psychologists to study modality-independent theories of perception. At present, most research has focused on encoding greyscale vision. However, at the low spatial resolutions received by SSD users, colour information enhances object-ground segmentation and provides more stable cues for scene and object recognition. Many attempts have been made to encode colour information in tactile or auditory modalities, but many of these studies exist in isolation...
2013: Multisensory Research
Meital Avivi-Reich, Meredyth Daneman, Bruce A Schneider
Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs...
2014: Frontiers in Systems Neuroscience
Dorea R Ruggles, Andrew J Oxenham
The challenges of daily communication require listeners to integrate both independent and complementary auditory information to form holistic auditory scenes. As part of this process listeners are thought to fill in missing information to create continuous perceptual streams, even when parts of messages are masked or obscured. One example of this filling-in process-the auditory continuity illusion-has been studied primarily using stimuli presented in isolation, leaving it unclear whether the illusion occurs in more complex situations with higher perceptual and attentional demands...
June 2014: Journal of Experimental Psychology. Human Perception and Performance
John Berketa, Helen James, Neil Langlois, Lindsay Richards
PURPOSE: Decedents who are severely decomposed, skeletonized or incinerated present challenges for identification. Cochlear implants aid hearing and bear unique serial numbers that can be used to assist with identification of bodies that are not visually identifiable. The purpose of this paper was to highlight companies that have or had manufactured cochlear type implants and demonstrate the appearance of the implants to assist crime scene investigators, pathologists, anthropologists and odontologists...
September 2013: Forensic Science, Medicine, and Pathology