Read by QxMD: search results for "auditory scene"
https://www.readbyqxmd.com/read/29214438/interactions-between-top-down-and-bottom-up-attention-in-barn-owls-tyto-alba
#1
Tidhar Lev-Ari, Yoram Gutfreund
Selective attention, the prioritization of behaviorally relevant stimuli for behavioral control, is commonly divided into two processes: bottom-up, stimulus-driven selection and top-down, task-driven selection. Here, we tested two barn owls in a visual search task that examines attentional capture of the top-down task by bottom-up mechanisms. We trained barn owls to search for a vertical Gabor patch embedded in a circular array of differently oriented Gabor distractors (top-down guided search). To track the point of gaze, a lightweight wireless video camera was mounted on the owl's head...
December 6, 2017: Animal Cognition
https://www.readbyqxmd.com/read/29213233/multisensory-and-modality-specific-influences-on-adaptation-to-optical-prisms
#2
Elena Calzolari, Federica Albini, Nadia Bolognini, Giuseppe Vallar
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target...
2017: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29205588/machine-learning-for-decoding-listeners-attention-from-eeg-evoked-by-continuous-speech
#3
Tobias de Taillez, Birger Kollmeier, Bernd T Meyer
Previous research has shown that it is possible to predict which speaker is attended in a multi-speaker scene by analyzing a listener's EEG activity. In this study, existing linear models that learn the mapping from neural activity to an attended speech envelope are replaced by a non-linear neural network. The proposed architecture takes into account the temporal context of the estimated envelope, and is evaluated using EEG data obtained from 20 normal-hearing listeners who focused on one speaker in a two-speaker setting...
December 4, 2017: European Journal of Neuroscience
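The linear baseline that this entry's neural-network approach replaces can be sketched in a few lines: a ridge-regression "backward model" that reconstructs the attended speech envelope from time-lagged EEG channels, then decodes attention by comparing reconstruction correlations against the two candidate envelopes. Everything below is a toy sketch on synthetic data; the channel count, lag window, noise level, and regularization are arbitrary choices, not taken from the study.

```python
# Toy sketch of linear stimulus-reconstruction ("backward model") decoding:
# learn a lagged linear map from multichannel EEG to the attended speech
# envelope, then decode attention by comparing reconstruction correlations.
# All data here are synthetic; real studies use measured EEG and envelopes.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 2000, 16, 10

# Two competing speech envelopes; the simulated EEG tracks the attended one.
env_attended = rng.standard_normal(n_samples)
env_ignored = rng.standard_normal(n_samples)
mixing = rng.standard_normal(n_channels)
eeg = np.outer(env_attended, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of every EEG channel into one design matrix."""
    rows = eeg.shape[0] - n_lags + 1
    return np.hstack([eeg[lag:lag + rows] for lag in range(n_lags)])

X = lagged_design(eeg, n_lags)
y = env_attended[:X.shape[0]]

# Ridge regression: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
recon = X @ w

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_att = corr(recon, env_attended[:len(recon)])
r_ign = corr(recon, env_ignored[:len(recon)])
decoded = "attended" if r_att > r_ign else "ignored"
print(decoded)
```

The study's contribution is replacing this linear map with a non-linear network that also uses the temporal context of the estimated envelope; the decoding-by-correlation step is the same in both cases.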
https://www.readbyqxmd.com/read/29125987/contextual-modulation-of-sound-processing-in-the-auditory-cortex
#4
REVIEW
C Angeloni, M N Geffen
In everyday acoustic environments, we navigate through a maze of sounds that possess a complex spectrotemporal structure, spanning many frequencies and exhibiting temporal modulations that differ within frequency bands. Our auditory system needs to efficiently encode the same sounds in a variety of different contexts, while preserving the ability to separate complex sounds within an acoustic scene. Recent work in auditory neuroscience has made substantial progress in studying how sounds are represented in the auditory system under different contexts, demonstrating that auditory processing of seemingly simple acoustic features, such as frequency and time, is highly dependent on co-occurring acoustic and behavioral stimuli...
November 7, 2017: Current Opinion in Neurobiology
https://www.readbyqxmd.com/read/29108832/temporal-processing-in-audition-insights-from-music
#5
Vani G Rajendran, Sundeep Teki, Jan W H Schnupp
Music is a curious example of a temporally patterned acoustic stimulus, and a compelling pan-cultural phenomenon. This review strives to bring some insights from decades of music psychology and sensorimotor synchronization (SMS) literature into the mainstream auditory domain, arguing that musical rhythm perception is shaped in important ways by temporal processing mechanisms in the brain. The feature that unites these disparate disciplines is an appreciation of the central importance of timing, sequencing, and anticipation...
November 3, 2017: Neuroscience
https://www.readbyqxmd.com/read/29090640/listening-into-2030-workshop-an-experiment-in-envisioning-the-future-of-hearing-and-communication-science
#6
Simon Carlile, Gregory Ciccarelli, Jane Cockburn, Anna C Diedesch, Megan K Finnegan, Ervin Hafter, Simon Henin, Sridhar Kalluri, Alexander J E Kell, Erol J Ozmeral, Casey L Roark, Jessica E Sagers
Here we report the methods and output of a workshop examining possible futures of speech and hearing science out to 2030. Using a design thinking approach, a range of human-centered problems in communication were identified that could provide the motivation for a wide range of research. Nine main research programs were distilled and are summarized: (a) measuring brain and other physiological parameters, (b) auditory and multimodal displays of information, (c) auditory scene analysis, (d) enabling and understanding shared auditory virtual spaces, (e) holistic approaches to health management and hearing impairment, (f) universal access to evolving and individualized technologies, (g) biological intervention for hearing dysfunction, (h) understanding the psychosocial interactions with technology and other humans as mediated by technology, and (i) the impact of changing models of security and privacy...
January 2017: Trends in Hearing
https://www.readbyqxmd.com/read/29070441/reduced-auditory-segmentation-potentials-in-first-episode-schizophrenia
#7
Brian A Coffman, Sarah M Haigh, Timothy K Murphy, Justin Leiter-Mcbeth, Dean F Salisbury
Auditory scene analysis (ASA) dysfunction is likely an important component of the symptomatology of schizophrenia. Auditory object segmentation, the grouping of sequential acoustic elements into temporally-distinct auditory objects, can be assessed with electroencephalography through measurement of the auditory segmentation potential (ASP). Further, N2 responses to the initial and final elements of auditory objects are enhanced relative to medial elements, which may indicate auditory object edge detection (initiation and termination)...
October 22, 2017: Schizophrenia Research
https://www.readbyqxmd.com/read/29049599/auditory-scene-analysis-an-attention-perspective
#8
Elyse S Sussman
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception, from passive processes that organize unattended input to attention effects that act at different levels of the system...
October 17, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29035691/how-we-hear-the-perception-and-neural-coding-of-sound
#9
Andrew J Oxenham
Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights...
October 16, 2017: Annual Review of Psychology
https://www.readbyqxmd.com/read/28982139/a-bayesian-computational-basis-for-auditory-selective-attention-using-head-rotation-and-the-interaural-time-difference-cue
#10
Dillon A Hambrook, Marko Ilievski, Mohamad Mosadeghzad, Matthew Tata
The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis and it remains a challenging task for computational systems. It is well-known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited...
2017: PLoS ONE
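The binaural timing cue this entry builds on can be illustrated with a minimal free-field sketch: estimate the inter-aural delay by cross-correlating the two ear signals, then invert the far-field relation ITD = d·sin(θ)/c to recover azimuth. The ear spacing, sample rate, and the absence of head shadowing are simplifying assumptions, not details from the paper.

```python
# Minimal sketch of the classic ITD cue: estimate the inter-channel delay of a
# two-"ear" signal by cross-correlation, then invert a simple far-field model
# (ITD = d * sin(azimuth) / c) to recover the arrival angle. The head is
# idealized as two free-field microphones d metres apart; real binaural
# models also account for head shadowing and frequency dependence.
import numpy as np

fs = 44100.0          # sample rate (Hz)
d = 0.18              # assumed inter-ear spacing (m)
c = 343.0             # speed of sound (m/s)
true_az = np.deg2rad(30.0)

# Simulate a broadband source arriving from true_az: the right ear leads.
rng = np.random.default_rng(1)
src = rng.standard_normal(4096)
delay_samples = int(round(d * np.sin(true_az) / c * fs))
left = np.concatenate([np.zeros(delay_samples), src])
right = np.concatenate([src, np.zeros(delay_samples)])

# Cross-correlate and take the lag of the peak -> ITD in seconds.
xcorr = np.correlate(left, right, mode="full")
lag = int(np.argmax(xcorr)) - (len(right) - 1)
itd = lag / fs

# Invert the far-field model for azimuth (clip guards against |arg| > 1).
est_az = np.arcsin(np.clip(itd * c / d, -1.0, 1.0))
print(np.rad2deg(est_az))
```

As the abstract notes, this kind of azimuth-only localization is not by itself sufficient to un-mix a complex scene; sources must also be resolved in frequency.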
https://www.readbyqxmd.com/read/28964316/speech-processor-data-logging-helps-in-predicting-early-linguistic-outcomes-in-implanted-children
#11
Letizia Guerzoni, Domenico Cuda
OBJECTIVE: To analyse the value of listening-data logged in the speech processor on the prediction of the early auditory and linguistic skills in children who received a cochlear implant in their first 2 years of life. STUDY DESIGN: Prospective observational non-randomized study. METHODS: Ten children with profound congenital sensorineural hearing loss were included in the study. The mean age at CI activation was 16.9 months (SD ± 7.2; range 10-24)...
October 2017: International Journal of Pediatric Otorhinolaryngology
https://www.readbyqxmd.com/read/28954867/a-crucial-test-of-the-population-separation-model-of-auditory-stream-segregation-in-macaque-primary-auditory-cortex
#12
Yonatan I Fishman, Mimi Kim, Mitchell Steinschneider
An important aspect of auditory scene analysis is auditory stream segregation: the organization of sound sequences into perceptual streams reflecting different sound sources in the environment. Several models have been proposed to account for stream segregation. According to the "population separation" (PS) model, alternating ABAB tone sequences are perceived as a single stream or as two separate streams when "A" and "B" tones activate the same or distinct frequency-tuned neuronal populations in primary auditory cortex (A1), respectively...
November 1, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
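The population-separation idea described above can be caricatured with Gaussian frequency tuning: model A1 as a bank of tuned units and compare the population responses evoked by the "A" and "B" tones of an ABAB sequence. Small frequency separations excite heavily overlapping populations (one predicted stream); large separations excite distinct ones (two streams). The tuning bandwidth and overlap threshold below are arbitrary illustrative choices, not values from the paper.

```python
# Toy illustration of the "population separation" (PS) model: a bank of
# Gaussian frequency-tuned units, with stream segregation predicted from the
# overlap of the A- and B-evoked population responses.
import numpy as np

cfs = np.linspace(0.0, 4.0, 200)   # characteristic frequencies (octaves)
bw = 0.3                           # tuning bandwidth (octaves), assumed

def population_response(tone_freq):
    """Response of every tuned unit to a pure tone at tone_freq (octaves)."""
    return np.exp(-0.5 * ((cfs - tone_freq) / bw) ** 2)

def overlap(f_a, f_b):
    """Normalized overlap between the A- and B-evoked population responses."""
    ra, rb = population_response(f_a), population_response(f_b)
    return float(np.sum(np.minimum(ra, rb)) / np.sum(np.maximum(ra, rb)))

# Small vs. large A-B frequency separation (in octaves).
close = overlap(2.0, 2.1)   # heavy overlap -> one predicted stream
far = overlap(2.0, 3.0)     # little overlap -> two predicted streams
print(close, far)
```

The paper's "crucial test" probes exactly where this simple picture breaks down, so the sketch should be read as the hypothesis under test, not as the paper's conclusion.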
https://www.readbyqxmd.com/read/28942322/adaptation-facilitates-spatial-discrimination-for-deviant-locations-in-the-thalamic-reticular-nucleus-of-the-rat
#13
Xin-Xiu Xu, Yu-Ying Zhai, Xiao-Kai Kou, Xiongjie Yu
The capacity to identify unanticipated abnormal cues in a natural scene is vital for animal survival. Stimulus-specific adaptation (SSA) has been considered the neuronal correlate for deviance detection. There have been comprehensive assessments of SSA in the frequency domain along the ascending auditory pathway, but little attention has been given to deviance detection in the spatial domain. We found that thalamic reticular nucleus (TRN) neurons exhibited stronger responses to a tone when it was presented rarely as opposed to frequently at a certain spatial location...
December 4, 2017: Neuroscience
https://www.readbyqxmd.com/read/28922512/interaction-of-spatial-and-non-spatial-cues-in-auditory-stream-segregation-in-the-european-starling
#14
Naoya Itatani, Georg M Klump
Integrating sounds from the same source and segregating sounds from different sources in an acoustic scene is an essential function of the auditory system. Naturally, the auditory system simultaneously makes use of multiple cues. Here, we investigate the interaction between spatial cues and frequency cues in stream segregation of European starlings (Sturnus vulgaris) using an objective measure of perception. Neural responses to streaming sounds were recorded while the bird was performing a behavioral task that results in a higher sensitivity during a one-stream than a two-stream percept...
September 18, 2017: European Journal of Neuroscience
https://www.readbyqxmd.com/read/28870702/automatic-frequency-shift-detection-in-the-auditory-system-a-review-of-psychophysical-findings
#15
Laurent Demany, Catherine Semal
The human brain has the task of binding successive sounds produced by the same acoustic source into a coherent perceptual stream, and binding must be selective when several sources are concurrently active. Binding appears to obey a principle of spectral proximity: pure tones close in frequency are more likely to be bound than pure tones with remote frequencies. It has been hypothesized that the binding process is realized by automatic "frequency-shift detectors" (FSDs), comparable to the detectors of spatial motion in the visual system...
September 1, 2017: Neuroscience
https://www.readbyqxmd.com/read/28863557/identification-of-perceptually-relevant-methods-of-inter-aural-time-difference-estimation
#16
Areti Andreopoulou, Brian F G Katz
The inter-aural time difference (ITD) is a fundamental cue for human sound localization. Over the past decades several methods have been proposed for its estimation from measured head-related impulse response (HRIR) data. Nevertheless, inter-method variations in ITD calculation have been found to exceed the known just noticeable differences (JNDs), leading to possible perceptible artifacts in virtual binaural auditory scenes, when personalized HRIRs are being used. In the absence of an objective means for validating ITD estimations, this paper examines which methods lead to the most perceptually relevant results...
August 2017: Journal of the Acoustical Society of America
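Two of the ITD-estimation families this entry compares can be sketched on a synthetic HRIR pair: an onset-threshold method (first sample exceeding a fraction of the peak) and the lag of the inter-aural cross-correlation maximum. The HRIRs below are toy exponentially decaying impulses, so the two estimates happen to coincide; the paper's point is that on measured HRIRs such methods can disagree by more than the just noticeable difference.

```python
# Sketch of two common ITD-estimation methods on a synthetic HRIR pair:
# (1) onset threshold and (2) peak of the inter-aural cross-correlation.
# The toy HRIRs are delayed, exponentially decaying impulses, not measurements.
import numpy as np

fs = 48000.0
n = 256
delay_l, delay_r = 20, 12     # left ear lags: source is to the right

def toy_hrir(delay, n):
    """Delayed, exponentially decaying impulse response."""
    h = np.zeros(n)
    h[delay:] = np.exp(-np.arange(n - delay) / 8.0)
    return h

hl, hr = toy_hrir(delay_l, n), toy_hrir(delay_r, n)

# Method 1: onset threshold at 10% of each ear's peak magnitude.
def onset(h, frac=0.1):
    return int(np.argmax(np.abs(h) >= frac * np.abs(h).max()))

itd_threshold = (onset(hl) - onset(hr)) / fs

# Method 2: lag of the maximum of the inter-aural cross-correlation.
xcorr = np.correlate(hl, hr, mode="full")
itd_xcorr = (int(np.argmax(xcorr)) - (n - 1)) / fs

print(itd_threshold * 1e6, itd_xcorr * 1e6)  # both in microseconds
```

On real HRIRs, head shadowing smears the far ear's onset and shifts the cross-correlation peak, which is one source of the inter-method variation the paper quantifies.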
https://www.readbyqxmd.com/read/28861034/playing-music-may-improve-the-gait-pattern-in-patients-with-bilateral-caloric-areflexia-wearing-a-cochlear-implant-results-from-a-pilot-study
#17
Ann Hallemans, Griet Mertens, Paul Van de Heyning, Vincent Van Rompaey
HYPOTHESIS: Auditory information through an active cochlear implant (CI) influences gait parameters in adults with bilateral caloric areflexia and profound sensorineural hearing loss. BACKGROUND: Patients with bilateral caloric areflexia suffer from imbalance, resulting in an increased risk of falling. In case of simultaneous deafness, the lack of auditory feedback results in less awareness of the auditory scene. This combination might produce significant challenges while walking and navigating...
2017: Frontiers in Neurology
https://www.readbyqxmd.com/read/28856615/effects-of-capacity-limits-memory-loss-and-sound-type-in-change-deafness
#18
Melissa K Gregg, Vanessa C Irsik, Joel S Snyder
Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task...
August 30, 2017: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/28821680/cortical-representations-of-speech-in-a-multitalker-auditory-scene
#19
Krishna C Puvvada, Jonathan Z Simon
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex...
September 20, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://www.readbyqxmd.com/read/28813033/a-vision-based-wayfinding-system-for-visually-impaired-people-using-situation-awareness-and-activity-based-instructions
#20
Eunjeong Ko, Eun Yi Kim
A significant challenge faced by visually impaired people is 'wayfinding', which is the ability to find one's way to a destination in an unfamiliar environment. This study develops a novel wayfinding system for smartphones that can automatically recognize the situation and scene objects in real time. Through analyzing streaming images, the proposed system first classifies the current situation of a user in terms of their location. Next, based on the current situation, only the necessary context objects are found and interpreted using computer vision techniques...
August 16, 2017: Sensors
Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators:

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"