Read by QxMD
Search: auditory scene
https://www.readbyqxmd.com/read/28821680/cortical-representations-of-speech-in-a-multi-talker-auditory-scene
#1
Krishna C Puvvada, Jonathan Z Simon
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources from peripheral, tonotopically based representations in the auditory nerve into perceptually distinct, auditory-object-based representations in auditory cortex. Here, using magnetoencephalography (MEG) recordings from human subjects, both men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of auditory cortex...
August 18, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28813033/a-vision-based-wayfinding-system-for-visually-impaired-people-using-situation-awareness-and-activity-based-instructions
#2
Eunjeong Ko, Eun Yi Kim
A significant challenge faced by visually impaired people is 'wayfinding', which is the ability to find one's way to a destination in an unfamiliar environment. This study develops a novel wayfinding system for smartphones that can automatically recognize the situation and scene objects in real time. Through analyzing streaming images, the proposed system first classifies the current situation of a user in terms of their location. Next, based on the current situation, only the necessary context objects are found and interpreted using computer vision techniques...
August 16, 2017: Sensors
https://www.readbyqxmd.com/read/28811257/auditory-conflict-and-congruence-in-frontotemporal-dementia
#3
Camilla N Clark, Jennifer M Nicholas, Jennifer L Agustus, Christopher J D Hardy, Lucy L Russell, Emilie V Brotherhood, Katrina M Dick, Charles R Marshall, Catherine J Mummery, Jonathan D Rohrer, Jason D Warren
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n=19) and semantic dementia (SD; n=10) relative to healthy older individuals (n=20). We created auditory scenes in which the semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence...
August 12, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28792518/rendering-visual-events-as-sounds-spatial-attention-capture-by-auditory-augmented-reality
#4
Scott A Stone, Matthew S Tata
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be combined to create a stable percept of the stimulus. Having access to related, coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has an analogous auditory percept; consider viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events...
2017: PloS One
https://www.readbyqxmd.com/read/28764452/modeling-speech-localization-talker-identification-and-word-recognition-in-a-multi-talker-setting
#5
Angela Josupeit, Volker Hohmann
This study introduces a model for solving three different auditory tasks in a multi-talker setting: target localization, target identification, and word recognition. The model was used to simulate psychoacoustic data from a call-sign-based listening test involving multiple spatially separated talkers [Brungart and Simpson (2007). Percept. Psychophys. 69(1), 79-91]. The main characteristics of the model are (i) the extraction of salient auditory features ("glimpses") from the multi-talker signal and (ii) the use of a classification method that finds the best target hypothesis by comparing feature templates from clean target signals to the glimpses derived from the multi-talker mixture...
July 2017: Journal of the Acoustical Society of America
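The two-stage idea in the abstract above — extract salient "glimpses" from the mixture, then score clean-target templates only at the glimpsed positions — can be caricatured in a few lines. This is an illustrative toy (1-D feature vectors, a fixed saliency threshold, squared-error scoring), not the authors' model; all names and numbers here are assumptions.

```python
def extract_glimpses(mixture, threshold):
    """Indices where the mixture feature vector is salient (above threshold).

    Toy stand-in for the paper's auditory 'glimpse' extraction.
    """
    return [i for i, v in enumerate(mixture) if v >= threshold]

def classify(mixture, templates, threshold):
    """Pick the clean-target template closest to the mixture, scored only
    at glimpsed (i.e. reliable) positions. `templates` maps name -> vector."""
    glimpses = extract_glimpses(mixture, threshold)

    def score(template):
        # squared error restricted to the glimpsed positions
        return sum((mixture[i] - template[i]) ** 2 for i in glimpses)

    return min(templates, key=lambda name: score(templates[name]))
```

For example, a noisy version of template "a" (`[0.9, 0.4, 1.1, 0.3]` against `[1, 0, 1, 0]`) is still classified as "a" because the non-glimpsed positions are ignored.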
https://www.readbyqxmd.com/read/28757195/selective-entrainment-of-brain-oscillations-drives-auditory-perceptual-organization
#6
Jordi Costa-Faidella, Elyse S Sussman, Carles Escera
Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving a single, unambiguous auditory pattern, despite hearing all sounds in a scene, are poorly understood. Here we investigated how competing sound organizations are simultaneously represented by specific brain activity patterns, and how attention and task demands prime the internal model generating the current percept...
July 27, 2017: NeuroImage
https://www.readbyqxmd.com/read/28736736/feedback-driven-sensory-mapping-adaptation-for-robust-speech-activity-detection
#7
Ashwin Bellur, Mounya Elhilali
Parsing natural acoustic scenes using computational methodologies poses many challenges. Given the rich and complex nature of the acoustic environment, data mismatch between training and test conditions is a major hurdle in data-driven audio processing systems. In contrast, the brain exhibits a remarkable ability to segment acoustic scenes with relative ease. When tackling the challenging listening conditions often faced in everyday life, the biological system relies on a number of principles that allow it to effortlessly parse its rich soundscape...
March 2017: IEEE/ACM Transactions on Audio, Speech, and Language Processing
https://www.readbyqxmd.com/read/28680950/change-deafness-for-real-spatialized-environmental-scenes
#8
Jeremy Gaston, Kelly Dickerson, Daniel Hipp, Peter Gerhardstein
The everyday auditory environment is complex and dynamic; often, multiple sounds co-occur and compete for a listener's cognitive resources. 'Change deafness', framed as the auditory analog to the well-documented phenomenon of 'change blindness', describes the finding that changes presented within complex environments are often missed. The present study examines a number of stimulus factors that may influence change deafness under real-world listening conditions. Specifically, an AX (same-different) discrimination task was used to examine the effects of both spatial separation over a loudspeaker array and the type of change (sound source additions and removals) on discrimination of changes embedded in complex backgrounds...
2017: Cognitive Research: Principles and Implications
https://www.readbyqxmd.com/read/28666215/just-look-away-gaze-aversions-as-an-overt-attentional-disengagement-mechanism
#9
Dekel Abeles, Shlomit Yuval-Greenberg
During visual exploration of a scene, eye gaze tends to be directed toward more salient, more informative image locations. However, while performing non-visual tasks, such information-seeking behavior could be detrimental to performance, as the perception of irrelevant but salient visual input may unnecessarily increase cognitive load. It would therefore be beneficial if, during non-visual tasks, eye gaze were governed by a drive to reduce saliency rather than maximize it. The current study examined the phenomenon of gaze aversion during non-visual tasks, which is hypothesized to act as an active avoidance mechanism...
June 27, 2017: Cognition
https://www.readbyqxmd.com/read/28650721/photographic-memory-the-effects-of-volitional-photo-taking-on-memory-for-visual-and-auditory-aspects-of-an-experience
#10
Alixandra Barasch, Kristin Diehl, Jackie Silverman, Gal Zauberman
How does volitional photo taking affect unaided memory for visual and auditory aspects of experiences? Across one field and three lab studies, we found that, even without revisiting any photos, participants who could freely take photographs during an experience recognized more of what they saw and less of what they heard, compared with those who could not take any photographs. Further, merely taking mental photos had similar effects on memory. These results provide support for the idea that photo taking induces a shift in attention toward visual aspects and away from auditory aspects of an experience...
August 2017: Psychological Science
https://www.readbyqxmd.com/read/28600256/a-graphical-model-for-online-auditory-scene-modulation-using-eeg-evidence-for-attention
#11
Marzieh Haghighi, Mohammad Moghadamfalahi, Murat Akcakaya, Deniz Erdogmus
Recent findings indicate that brain interfaces have the potential to enable attention-guided auditory scene analysis and manipulation in applications such as hearing aids and augmented/virtual environments. Specifically, noninvasively acquired electroencephalography (EEG) signals have been demonstrated to carry some evidence regarding which of multiple synchronous speech waveforms the subject attends to. In this paper we demonstrate that: (1) using data- and model-driven cross-correlation features yields competitive binary auditory attention classification results with at most 20 seconds of EEG from 16 channels, or even a single well-positioned channel; (2) a model calibrated using equal-energy speech waveforms competing for attention can perform well at estimating attention in closed-loop, unbalanced-energy speech waveform situations, where the speech amplitudes are modulated by the estimated attention posterior probability distribution; (3) such a model performs even better if it is corrected (linearly, in this instance) for the dependency of the EEG evidence on the speech weights in the mixture; (4) calibrating a model on population EEG can yield acceptable performance for new individuals/users; therefore EEG-based auditory attention classifiers may generalize across individuals, leading to reduced or eliminated calibration time and effort...
June 6, 2017: IEEE Transactions on Neural Systems and Rehabilitation Engineering
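Point (1) above — cross-correlation features relating EEG to competing speech waveforms — can be illustrated with a toy decoder that picks whichever speech envelope best correlates with an EEG channel over a small range of neural lags. This is a hedged sketch, not the paper's classifier; the lag range, normalization, and all data are invented for illustration.

```python
def xcorr_at_lag(eeg, env, lag):
    """Correlation-like score between an EEG channel and a speech envelope,
    with the EEG delayed by `lag` samples relative to the speech."""
    n = min(len(eeg) - lag, len(env))
    return sum(eeg[i + lag] * env[i] for i in range(n)) / n

def attended_talker(eeg, envelopes, max_lag=3):
    """Binary (or n-ary) attention decision: return the index of the
    envelope whose best cross-correlation with the EEG is largest."""
    scores = [max(xcorr_at_lag(eeg, env, lag) for lag in range(max_lag + 1))
              for env in envelopes]
    return scores.index(max(scores))
```

With an EEG trace that is simply a two-sample-delayed copy of one envelope, the decoder recovers that envelope's index; a real system would of course use regularized regression on many channels rather than raw lagged products.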
https://www.readbyqxmd.com/read/28591213/flight-of-the-bumble-bee-buzzes-predict-pollination-services
#12
Nicole E Miller-Struttmann, David Heise, Johannes Schul, Jennifer C Geib, Candace Galen
Multiple interacting factors drive recent declines in wild and managed bees, threatening their pollination services. Widespread and intensive monitoring could lead to more effective management of wild and managed bees. However, tracking their dynamic populations is costly. We tested the effectiveness of an inexpensive, noninvasive and passive acoustic survey technique for monitoring bumble bee behavior and pollination services. First, we assessed the relationship between the first harmonic of the flight buzz (characteristic frequency) and pollinator functional traits that influence pollination success using flight cage experiments and a literature search...
2017: PloS One
https://www.readbyqxmd.com/read/28582535/perceived-synchrony-of-frog-multimodal-signal-components-is%C3%A2-influenced-by-content-and-order
#13
Ryan C Taylor, Rachel A Page, Barrett A Klein, Michael J Ryan, Kimberly L Hunter
Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed) or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male tĂșngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component)...
June 5, 2017: Integrative and Comparative Biology
https://www.readbyqxmd.com/read/28570414/head-shadow-and-binaural-squelch-for-unilaterally-deaf-cochlear-implantees
#14
Joshua G W Bernstein, Gerald I Schuchman, Arnaldo L Rivera
BACKGROUND: Cochlear implants (CIs) can improve speech-in-noise performance for listeners with unilateral sensorineural deafness. But these benefits are modest and in most cases are limited to head-shadow advantages, with little evidence of binaural squelch. HYPOTHESIS: The goal of the investigation was to determine whether CI listeners with normal hearing or moderate hearing loss in the contralateral ear would receive a larger head-shadow benefit for target speech and noise originating from opposite sides of the head, and whether listeners would experience binaural squelch in the free field in a test involving interfering talkers...
August 2017: Otology & Neurotology
https://www.readbyqxmd.com/read/28559384/a-role-for-auditory-corticothalamic-feedback-in-the-perception-of-complex-sounds
#15
Natsumi Y Homma, Max F K Happel, Fernando R Nodal, Frank W Ohl, Andrew J King, Victoria M Bajo
Feedback signals from the primary auditory cortex (A1) can shape the receptive field properties of neurons in the ventral division of the medial geniculate body (MGBv). However, the behavioral significance of corticothalamic modulation is unknown. The aim of this study was to elucidate the role of this descending pathway in the perception of complex sounds. We tested the ability of adult female ferrets to detect the presence of a mistuned harmonic in a complex tone using a positive conditioned go/no-go behavioral paradigm before and after the input from layer VI in A1 to MGBv was bilaterally and selectively eliminated using chromophore-targeted laser photolysis...
June 21, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28534734/the-impact-of-single-sided-deafness-upon-music-appreciation
#16
Sarah Meehan, Elizabeth A Hough, Gemma Crundwell, Rachel Knappett, Mark Smith, David M Baguley
BACKGROUND: Many of the world's population have hearing loss in one ear; current statistics indicate that up to 10% of the population may be affected. Although the detrimental impact of bilateral hearing loss, hearing aids, and cochlear implants upon music appreciation is well recognized, studies on the influence of single-sided deafness (SSD) are sparse. PURPOSE: We sought to investigate whether a single-sided hearing loss can cause problems with music appreciation, despite normal hearing in the other ear...
May 2017: Journal of the American Academy of Audiology
https://www.readbyqxmd.com/read/28507127/development-of-visual-category-selectivity-in-ventral-visual-cortex-does-not-require-visual-experience
#17
Job van den Hurk, Marc Van Baelen, Hans P Op de Beeck
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls...
May 30, 2017: Proceedings of the National Academy of Sciences of the United States of America
https://www.readbyqxmd.com/read/28481270/smartphone-based-escalator-recognition-for-the-visually-impaired
#18
Daiki Nakamura, Hotaka Takizawa, Mayumi Aoyagi, Nobuo Ezaki, Shinji Mizuno
It is difficult for visually impaired individuals to recognize escalators in everyday environments. If the individuals ride on escalators in the wrong direction, they will stumble on the steps. This paper proposes a novel method to assist visually impaired individuals in finding available escalators by the use of smartphone cameras. Escalators are recognized by analyzing optical flows in video frames captured by the cameras, and auditory feedback is provided to the individuals. The proposed method was implemented on an Android smartphone and applied to actual escalator scenes...
May 6, 2017: Sensors
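The abstract above infers escalator direction from optical flow in smartphone video. A far simpler, assumption-laden caricature of that idea: collapse each frame to a per-row intensity profile and find the vertical shift that best aligns consecutive profiles (positive shift = content moving down, negative = up). This pure-Python sketch is not the proposed method, which analyzes real optical flow fields.

```python
def row_profile(frame):
    """Collapse a 2-D frame (a list of pixel rows) to per-row intensity sums."""
    return [sum(row) for row in frame]

def vertical_shift(frame_a, frame_b, max_shift=3):
    """Vertical shift (in rows) that best aligns frame_b with frame_a.

    Tries every candidate shift and keeps the one with the smallest mean
    squared mismatch between the overlapping parts of the two profiles.
    """
    pa, pb = row_profile(frame_a), row_profile(frame_b)

    def mismatch(s):
        pairs = [(pa[i], pb[i + s]) for i in range(len(pa))
                 if 0 <= i + s < len(pb)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)

    return min(range(-max_shift, max_shift + 1), key=mismatch)
```

For a bright band that moves two rows down between frames, the estimate is +2; thresholding the sign of this shift over many frame pairs would give an up/down decision in the spirit of the paper's flow analysis.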
https://www.readbyqxmd.com/read/28479403/normal-aging-slows-spontaneous-switching-in-auditory-and-visual-bistability
#19
Hirohito M Kondo, Takanori Kochiyama
Age-related changes in auditory and visual perception have an impact on the quality of life. It has been debated how perceptual organization is influenced by advancing age. From the neurochemical perspective, we investigated age effects on auditory and visual bistability. In perceptual bistability, a sequence of sensory inputs induces spontaneous switching between different perceptual objects. We used different modality tasks of auditory streaming and visual plaids. Young and middle-aged participants (20-60 years) were instructed to indicate by a button press whenever their perception changed from one stable state to the other...
May 4, 2017: Neuroscience
https://www.readbyqxmd.com/read/28464690/how-many-images-are-in-an-auditory-scene
#20
Xuan Zhong, William A Yost
If an auditory scene consists of many spatially separated sound sources, how many sound sources can be processed by the auditory system? Experiment I determined how many speech sources could be localized simultaneously on the azimuth plane. Different words were played from multiple loudspeakers, and listeners reported the total number of sound sources and their individual locations. In experiment II the accuracy of localizing one speech source in a mixture of multiple speech sources was determined. An extra sound source was added to an existing set of sound sources, and the task was to localize that extra source...
April 2017: Journal of the Acoustical Society of America
