Read by QxMD
Search results for "auditory scene"

https://www.readbyqxmd.com/read/29049599/auditory-scene-analysis-an-attention-perspective
#1
Elyse S Sussman
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception: from passive processes that organize unattended input to attention effects that act at different levels of the system...
October 17, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29035691/how-we-hear-the-perception-and-neural-coding-of-sound
#2
Andrew J Oxenham
Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights...
October 16, 2017: Annual Review of Psychology
https://www.readbyqxmd.com/read/28982139/a-bayesian-computational-basis-for-auditory-selective-attention-using-head-rotation-and-the-interaural-time-difference-cue
#3
Dillon A Hambrook, Marko Ilievski, Mohamad Mosadeghzad, Matthew Tata
The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis and it remains a challenging task for computational systems. It is well-known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited...
2017: PLOS ONE
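The binaural-localization principle this entry builds on (interaural arrival-time differences mapping to azimuth) can be sketched with a spherical-head (Woodworth) model. This is a minimal illustration, not the authors' Bayesian model; the head radius and the bisection-based inversion are assumptions made here for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, air at roughly 20 degrees C
HEAD_RADIUS = 0.0875     # m, a typical adult head radius (assumed value)

def itd_from_azimuth(azimuth_rad):
    """Woodworth spherical-head model: ITD for a far-field source."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + np.sin(azimuth_rad))

def azimuth_from_itd(itd_s):
    """Invert the (monotone) Woodworth model by bisection on [-pi/2, pi/2]."""
    lo, hi = -np.pi / 2, np.pi / 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if itd_from_azimuth(mid) < itd_s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A source 30 degrees to the right yields an ITD of roughly 0.26 ms;
# inverting the model recovers the azimuth.
itd = itd_from_azimuth(np.deg2rad(30.0))
print(round(np.rad2deg(azimuth_from_itd(itd)), 1))  # → 30.0
```

As the abstract notes, such a purely azimuthal cue localizes sources but does not by itself separate sources that overlap in space and frequency.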
https://www.readbyqxmd.com/read/28964316/speech-processor-data-logging-helps-in-predicting-early-linguistic-outcomes-in-implanted-children
#4
Letizia Guerzoni, Domenico Cuda
OBJECTIVE: To analyse the value of listening-data logged in the speech processor on the prediction of the early auditory and linguistic skills in children who received a cochlear implant in their first 2 years of life. STUDY DESIGN: Prospective observational non-randomized study. METHODS: Ten children with profound congenital sensorineural hearing loss were included in the study. The mean age at CI activation was 16.9 months (SD ± 7.2; range 10-24)...
October 2017: International Journal of Pediatric Otorhinolaryngology
https://www.readbyqxmd.com/read/28954867/a-crucial-test-of-the-population-separation-model-of-auditory-stream-segregation-in-macaque-primary-auditory-cortex
#5
Yonatan I Fishman, Mimi Kim, Mitchell Steinschneider
An important aspect of auditory scene analysis is auditory stream segregation: the organization of sound sequences into perceptual streams reflecting different sound sources in the environment. Several models have been proposed to account for stream segregation. According to the 'population separation' (PS) model, alternating 'ABAB' tone sequences are perceived as a single stream or as two separate streams when 'A' and 'B' tones activate the same or distinct frequency-tuned neuronal populations in primary auditory cortex (A1), respectively...
September 27, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28942322/adaptation-facilitates-spatial-discrimination-for-deviant-locations-in-the-thalamic-reticular-nucleus-of-the-rat
#6
Xin-Xiu Xu, Yu-Ying Zhai, Xiao-Kai Kou, Xiongjie Yu
The capacity to identify unanticipated abnormal cues in a natural scene is vital for animal survival. Stimulus-specific adaptation (SSA) has been considered the neuronal correlate for deviance detection. There have been comprehensive assessments of SSA in the frequency domain along the ascending auditory pathway, but little attention has been given to deviance detection in the spatial domain. We found that thalamic reticular nucleus (TRN) neurons exhibited stronger responses to a tone when it was presented rarely as opposed to frequently at a certain spatial location...
September 20, 2017: Neuroscience
https://www.readbyqxmd.com/read/28922512/interaction-of-spatial-and-non-spatial-cues-in-auditory-stream-segregation-in-the-european-starling
#7
Naoya Itatani, Georg M Klump
Integrating sounds from the same source and segregating sounds from different sources in an acoustic scene is an essential function of the auditory system. Naturally, the auditory system simultaneously makes use of multiple cues. Here, we investigate the interaction between spatial cues and frequency cues in stream segregation of European starlings (Sturnus vulgaris) using an objective measure of perception. Neural responses to streaming sounds were recorded while the bird was performing a behavioral task that results in a higher sensitivity during a one-stream than a two-stream percept...
September 18, 2017: European Journal of Neuroscience
https://www.readbyqxmd.com/read/28870702/automatic-frequency-shift-detection-in-the-auditory-system-a-review-of-psychophysical-findings
#8
Laurent Demany, Catherine Semal
The human brain has the task of binding successive sounds produced by the same acoustic source into a coherent perceptual stream, and binding must be selective when several sources are concurrently active. Binding appears to obey a principle of spectral proximity: pure tones close in frequency are more likely to be bound than pure tones with remote frequencies. It has been hypothesized that the binding process is realized by automatic "frequency-shift detectors" (FSDs), comparable to the detectors of spatial motion in the visual system...
September 1, 2017: Neuroscience
https://www.readbyqxmd.com/read/28863557/identification-of-perceptually-relevant-methods-of-inter-aural-time-difference-estimation
#9
Areti Andreopoulou, Brian F G Katz
The inter-aural time difference (ITD) is a fundamental cue for human sound localization. Over the past decades several methods have been proposed for its estimation from measured head-related impulse response (HRIR) data. Nevertheless, inter-method variations in ITD calculation have been found to exceed the known just noticeable differences (JNDs), leading to possible perceptible artifacts in virtual binaural auditory scenes, when personalized HRIRs are being used. In the absence of an objective means for validating ITD estimations, this paper examines which methods lead to the most perceptually relevant results...
August 2017: Journal of the Acoustical Society of America
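One family of estimators compared in this literature derives the ITD as the lag that maximizes the cross-correlation between the left- and right-ear head-related impulse responses. A minimal sketch of that method follows; the function name and the synthetic single-impulse HRIRs are illustrative, not taken from the paper.

```python
import numpy as np

def itd_cross_correlation(hrir_left, hrir_right, fs):
    """Estimate the ITD (in seconds) as the lag maximizing the
    cross-correlation of the two HRIRs. Positive means the right
    ear's response arrives later than the left ear's."""
    n = len(hrir_left)
    xcorr = np.correlate(hrir_right, hrir_left, mode="full")
    lag = np.argmax(xcorr) - (n - 1)   # convert index to signed lag
    return lag / fs

# Synthetic check: identical impulses offset by 10 samples at 48 kHz,
# i.e. a lag of 10 samples, roughly 208 microseconds.
fs = 48_000
left = np.zeros(256)
left[50] = 1.0
right = np.zeros(256)
right[60] = 1.0
print(itd_cross_correlation(left, right, fs) * 1e6)
```

Other methods in use (onset-threshold detection, phase-derived estimates from minimum-phase decompositions) can disagree with this one by more than the just-noticeable difference, which is precisely the problem the paper examines.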
https://www.readbyqxmd.com/read/28861034/playing-music-may-improve-the-gait-pattern-in-patients-with-bilateral-caloric-areflexia-wearing-a-cochlear-implant-results-from-a-pilot-study
#10
Ann Hallemans, Griet Mertens, Paul Van de Heyning, Vincent Van Rompaey
HYPOTHESIS: Auditory information through an active cochlear implant (CI) influences gait parameters in adults with bilateral caloric areflexia and profound sensorineural hearing loss. BACKGROUND: Patients with bilateral caloric areflexia suffer from imbalance, resulting in an increased risk of falling. In case of simultaneous deafness, the lack of auditory feedback results in less awareness of the auditory scene. This combination might produce significant challenges while walking and navigating...
2017: Frontiers in Neurology
https://www.readbyqxmd.com/read/28856615/effects-of-capacity-limits-memory-loss-and-sound-type-in-change-deafness
#11
Melissa K Gregg, Vanessa C Irsik, Joel S Snyder
Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task...
August 30, 2017: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/28821680/cortical-representations-of-speech-in-a-multitalker-auditory-scene
#12
Krishna C Puvvada, Jonathan Z Simon
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex...
September 20, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28813033/a-vision-based-wayfinding-system-for-visually-impaired-people-using-situation-awareness-and-activity-based-instructions
#13
Eunjeong Ko, Eun Yi Kim
A significant challenge faced by visually impaired people is 'wayfinding', which is the ability to find one's way to a destination in an unfamiliar environment. This study develops a novel wayfinding system for smartphones that can automatically recognize the situation and scene objects in real time. Through analyzing streaming images, the proposed system first classifies the current situation of a user in terms of their location. Next, based on the current situation, only the necessary context objects are found and interpreted using computer vision techniques...
August 16, 2017: Sensors
https://www.readbyqxmd.com/read/28811257/auditory-conflict-and-congruence-in-frontotemporal-dementia
#14
Camilla N Clark, Jennifer M Nicholas, Jennifer L Agustus, Christopher J D Hardy, Lucy L Russell, Emilie V Brotherhood, Katrina M Dick, Charles R Marshall, Catherine J Mummery, Jonathan D Rohrer, Jason D Warren
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence...
September 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28792518/rendering-visual-events-as-sounds-spatial-attention-capture-by-auditory-augmented-reality
#15
Scott A Stone, Matthew S Tata
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events...
2017: PLOS ONE
https://www.readbyqxmd.com/read/28764452/modeling-speech-localization-talker-identification-and-word-recognition-in-a-multi-talker-setting
#16
Angela Josupeit, Volker Hohmann
This study introduces a model for solving three different auditory tasks in a multi-talker setting: target localization, target identification, and word recognition. The model was used to simulate psychoacoustic data from a call-sign-based listening test involving multiple spatially separated talkers [Brungart and Simpson (2007). Percept. Psychophys. 69(1), 79-91]. The main characteristics of the model are (i) the extraction of salient auditory features ("glimpses") from the multi-talker signal and (ii) the use of a classification method that finds the best target hypothesis by comparing feature templates from clean target signals to the glimpses derived from the multi-talker mixture...
July 2017: Journal of the Acoustical Society of America
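The two model components named in this abstract, extracting salient "glimpses" and matching them against clean-target templates, can be sketched minimally. The thresholding rule and the masked Euclidean distance below are illustrative stand-ins, not the authors' actual features or classifier.

```python
import numpy as np

def extract_glimpses(spectrogram, threshold=0.5):
    """Keep only salient time-frequency cells: those exceeding each
    frame's median energy by a margin (a toy 'glimpse' criterion)."""
    ref = np.median(spectrogram, axis=0, keepdims=True)
    return np.where(spectrogram > ref + threshold, spectrogram, 0.0)

def classify_by_template(glimpses, templates):
    """Pick the clean-signal template that best matches the glimpsed
    cells (Euclidean distance restricted to the glimpse mask)."""
    mask = glimpses > 0
    best_label, best_dist = None, np.inf
    for label, template in templates.items():
        dist = np.sum((glimpses[mask] - template[mask]) ** 2)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Two clean 'talker' templates; the mixture is talker A plus weak noise.
rng = np.random.default_rng(0)
talker_a = rng.normal(size=(8, 10))
talker_b = rng.normal(size=(8, 10))
mixture = talker_a + 0.1 * rng.normal(size=(8, 10))
glimpses = extract_glimpses(mixture)
print(classify_by_template(glimpses, {"A": talker_a, "B": talker_b}))
```

The design point mirrors the abstract: classification uses only the salient cells of the mixture, so masking by competing talkers degrades the match gracefully rather than catastrophically.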
https://www.readbyqxmd.com/read/28757195/selective-entrainment-of-brain-oscillations-drives-auditory-perceptual-organization
#17
Jordi Costa-Faidella, Elyse S Sussman, Carles Escera
Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept...
July 27, 2017: NeuroImage
https://www.readbyqxmd.com/read/28736736/feedback-driven-sensory-mapping-adaptation-for-robust-speech-activity-detection
#18
Ashwin Bellur, Mounya Elhilali
Parsing natural acoustic scenes using computational methodologies poses many challenges. Given the rich and complex nature of the acoustic environment, data mismatch between train and test conditions is a major hurdle in data-driven audio processing systems. In contrast, the brain exhibits a remarkable ability at segmenting acoustic scenes with relative ease. When tackling challenging listening conditions that are often faced in everyday life, the biological system relies on a number of principles that allow it to effortlessly parse its rich soundscape...
March 2017: IEEE/ACM Transactions on Audio, Speech, and Language Processing
https://www.readbyqxmd.com/read/28680950/change-deafness-for-real-spatialized-environmental-scenes
#19
Jeremy Gaston, Kelly Dickerson, Daniel Hipp, Peter Gerhardstein
The everyday auditory environment is complex and dynamic; often, multiple sounds co-occur and compete for a listener's cognitive resources. 'Change deafness', framed as the auditory analog to the well-documented phenomenon of 'change blindness', describes the finding that changes presented within complex environments are often missed. The present study examines a number of stimulus factors that may influence change deafness under real-world listening conditions. Specifically, an AX (same-different) discrimination task was used to examine the effects of both spatial separation over a loudspeaker array and the type of change (sound source additions and removals) on discrimination of changes embedded in complex backgrounds...
2017: Cognitive Research: Principles and Implications
https://www.readbyqxmd.com/read/28666215/just-look-away-gaze-aversions-as-an-overt-attentional-disengagement-mechanism
#20
Dekel Abeles, Shlomit Yuval-Greenberg
During visual exploration of a scene, the eye-gaze tends to be directed toward more salient image-locations, containing more information. However, while performing non-visual tasks, such information-seeking behavior could be detrimental to performance, as the perception of irrelevant but salient visual input may unnecessarily increase the cognitive load. It would therefore be beneficial if, during non-visual tasks, eye-gaze were governed by a drive to reduce saliency rather than maximize it. The current study examined the phenomenon of gaze-aversion during non-visual tasks, which is hypothesized to act as an active avoidance mechanism...
November 2017: Cognition