auditory scene

https://www.readbyqxmd.com/read/29578754/auditory-task-irrelevance-a-basis-for-inattentional-deafness
#1
Menja Scheer, Heinrich H Bülthoff, Lewis L Chuang
Objective: This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background: Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one's capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel...
March 1, 2018: Human Factors
https://www.readbyqxmd.com/read/29563861/assessing-top-down-and-bottom-up-contributions-to-auditory-stream-segregation-and-integration-with-polyphonic-music
#2
Niels R Disbergen, Giancarlo Valente, Elia Formisano, Robert J Zatorre
Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes; however, real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments...
2018: Frontiers in Neuroscience
https://www.readbyqxmd.com/read/29543178/diffraction-kernels-for-interactive-sound-propagation-in-dynamic-environments
#3
Atul Rungta, Carl Schissler, Nicholas Rewkowski, Ravish Mehra, Dinesh Manocha
We present a novel method to generate plausible diffraction effects for interactive sound propagation in dynamic scenes. Our approach precomputes a diffraction kernel for each dynamic object in the scene and combines them with interactive ray tracing algorithms at runtime. A diffraction kernel encapsulates the sound interaction behavior of individual objects in the free field and we present a new source placement algorithm to significantly accelerate the precomputation. Our overall propagation algorithm can handle highly-tessellated or smooth objects undergoing rigid motion...
April 2018: IEEE Transactions on Visualization and Computer Graphics
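The abstract above outlines a precompute-then-lookup design: a diffraction kernel is computed offline for each dynamic object and combined with interactive ray tracing at runtime. The Python sketch below illustrates only that general pattern under loose assumptions; the kernel here is a hypothetical angle-to-attenuation table, not the frequency-dependent kernels or the source placement algorithm described in the paper.

```python
# Minimal sketch of a precompute-then-lookup pattern for diffraction gains.
# Everything here (bin count, falloff shape) is a made-up placeholder; real
# diffraction kernels would be frequency dependent and derived from wave theory.
import math

def precompute_kernel(num_bins: int = 64) -> list:
    """Offline step: tabulate an attenuation factor per diffraction angle (0..pi)."""
    return [1.0 / (1.0 + 4.0 * (i / (num_bins - 1)) ** 2) for i in range(num_bins)]

def runtime_attenuation(kernel: list, bend_angle: float) -> float:
    """Runtime step: a ray bending around an object looks up its precomputed kernel."""
    bend_angle = max(0.0, min(math.pi, bend_angle))
    idx = int(bend_angle / math.pi * (len(kernel) - 1))
    return kernel[idx]

kernel = precompute_kernel()                            # once per dynamic object
gain = runtime_attenuation(kernel, math.radians(30.0))  # per diffracted ray, per frame
print(f"diffracted gain: {gain:.3f}")
```

Splitting the work this way keeps the expensive computation offline, so the per-frame runtime cost reduces to table lookups along traced ray paths.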
https://www.readbyqxmd.com/read/29531082/psychophysical-evidence-for-auditory-motion-parallax
#4
Daria Genzel, Michael Schutte, W Owen Brimijoin, Paul R MacNeilage, Lutz Wiegrebe
Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources...
March 12, 2018: Proceedings of the National Academy of Sciences of the United States of America
https://www.readbyqxmd.com/read/29530816/structural-and-functional-brain-network-of-human-retrosplenial-cortex
#5
Panlong Li, Han Shan, Shengxiang Liang, Binbin Nie, Shaofeng Duan, Qi Huang, Tianhao Zhang, Xi Sun, Ting Feng, Lin Ma, Baoci Shan, Demin Li, Hua Liu
Retrosplenial cortex (RSC) plays a key role in various cognitive functions. The fiber connectivity of RSC has been reported in rodent and primate studies using tracer injection methods. To explore the structural and functional connectivity of two sub-regions of RSC, Brodmann area (BA)29 and BA30, we constructed fiber connectivity networks of the two sub-regions by diffusion tensor imaging (DTI) tractography based on diffusion magnetic resonance imaging (MRI), and functional connectivity networks by resting-state functional MRI...
March 9, 2018: Neuroscience Letters
https://www.readbyqxmd.com/read/29526594/memory-consolidation-is-linked-to-spindle-mediated-information-processing-during-sleep
#6
Scott A Cairney, Anna Á Váli Guttesen, Nicole El Marj, Bernhard P Staresina
How are brief encounters transformed into lasting memories? Previous research has established the role of non-rapid eye movement (NREM) sleep, along with its electrophysiological signatures of slow oscillations (SOs) and spindles, for memory consolidation [1-4]. In related work, experimental manipulations have demonstrated that NREM sleep provides a window of opportunity to selectively strengthen particular memory traces via the delivery of auditory cues [5-10], a procedure known as targeted memory reactivation (TMR)...
March 19, 2018: Current Biology: CB
https://www.readbyqxmd.com/read/29518569/acoustic-and-higher-level-representations-of-naturalistic-auditory-scenes-in-human-auditory-and-frontal-cortex
#7
Lars Hausfeld, Lars Riecke, Elia Formisano
Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the one most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories...
March 6, 2018: NeuroImage
https://www.readbyqxmd.com/read/29508954/perception-of-scenes-in-different-sensory-modalities-a-result-of-modal-completion
#8
Ronald R Gruber, Richard A Block
Dynamic perception includes amodal and modal completion, along with apparent movement. It fills temporal gaps for single objects. In 2 experiments, using 6 stimulus presentation conditions involving 3 sensory modalities, participants experienced 8-10 sequential stimuli (200 ms each) with interstimulus intervals (ISIs) of 0.25-7.0 s. The experiments focused on spatiotemporal completion (walking), featural completion (object changing), auditory completion (falling bomb), and haptic changes (insect crawling). After each trial, participants judged whether they experienced the process of "happening" or whether they simply knew that the process must have occurred...
April 2017: American Journal of Psychology
https://www.readbyqxmd.com/read/29467495/representations-of-naturalistic-stimulus-complexity-in-early-and-associative-visual-and-auditory-cortices
#9
Yağmur Güçlütürk, Umut Güçlü, Marcel van Gerven, Rob van Lier
The complexity of sensory stimuli has an important role in perception and cognition. However, its neural representation is not well understood. Here, we characterize the representations of naturalistic visual and auditory stimulus complexity in early and associative visual and auditory cortices. This is realized by means of encoding and decoding analyses of two fMRI datasets in the visual and auditory modalities. Our results implicate most early and some associative sensory areas in representing the complexity of naturalistic sensory stimuli...
February 21, 2018: Scientific Reports
https://www.readbyqxmd.com/read/29398142/low-and-high-frequency-cortical-brain-oscillations-reflect-dissociable-mechanisms-of-concurrent-speech-segregation-in-noise
#10
Anusha Yellamsetty, Gavin M Bidelman
Parsing simultaneous speech requires that listeners use pitch-guided segregation, which can be affected by the signal-to-noise ratio (SNR) in the auditory scene. The interaction of these two cues may occur at multiple levels within the cortex. The aims of the current study were to assess the correspondence between oscillatory brain rhythms and determine how listeners exploit pitch and SNR cues to successfully segregate concurrent speech. We recorded electrical brain activity while participants heard double-vowel stimuli whose fundamental frequencies (F0s) differed by zero or four semitones (STs) presented in either clean or noise-degraded (+5 dB SNR) conditions...
February 2, 2018: Hearing Research
https://www.readbyqxmd.com/read/29395914/integration-of-visual-information-in-auditory-cortex-promotes-auditory-scene-analysis-through-multisensory-binding
#11
Huriye Atilgan, Stephen M Town, Katherine C Wood, Gareth P Jones, Ross K Maddox, Adrian K C Lee, Jennifer K Bizley
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound...
January 24, 2018: Neuron
https://www.readbyqxmd.com/read/29390738/how-does-the-perceptual-organization-of-a-multi-tone-mixture-interact-with-partial-and-global-loudness-judgments
#12
Michaël Vannier, Nicolas Misdariis, Patrick Susini, Nicolas Grimault
Two experiments were conducted to investigate how the perceptual organization of a multi-tone mixture interacts with global and partial loudness judgments. Grouping (single-object) and segregating (two-object) conditions were created using frequency modulation by applying the same or different modulation frequencies to the odd- and even-rank harmonics. While in Experiment 1 (Exp. 1) the two objects had the same loudness, in Experiment 2 (Exp. 2), loudness level differences (LLD) were introduced (LLD = 6, 12, 18, or 24 phons)...
January 2018: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/29380283/sound-changes-that-lead-to-seeing-longer-lasting-shapes
#13
Arthur G Samuel, Kavya Tangella
To survive, people must construct an accurate representation of the world around them. There is a body of research on visual scene analysis, and a largely separate literature on auditory scene analysis. The current study follows up research from the smaller literature on audiovisual scene analysis. Prior work demonstrated that when there is an abrupt size change to a moving object, observers tend to see two objects rather than one: the abrupt visual change enhances visible persistence of the briefly presented different-sized object...
January 29, 2018: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/29315468/when-do-trauma-patients-lose-temperature-a-prospective-observational-study
#14
S C Eidstuen, O Uleberg, G Vangberg, E Skogvoll
BACKGROUND: The prevalence of hypothermia in trauma patients is high and rapid recognition is important to prevent further heat loss. Hypothermia is associated with poor patient outcomes and is an independent predictor of increased mortality. The aim of this study was to analyze the changes in core body temperature of trauma patients during different treatment phases in the pre-hospital and early in-hospital settings. METHODS: A prospective observational cohort study in severely injured patients...
March 2018: Acta Anaesthesiologica Scandinavica
https://www.readbyqxmd.com/read/29289075/masking-release-by-combined-spatial-and-masker-fluctuation-effects-in-the-open-sound-field
#15
John C Middlebrooks
In a complex auditory scene, signals of interest can be distinguished from masking sounds by differences in source location [spatial release from masking (SRM)] and by differences between masker-alone and masker-plus-signal envelopes. This study investigated interactions between those factors in release of masking of 700-Hz tones in an open sound field. Signal and masker sources were colocated in front of the listener, or the signal source was shifted 90° to the side. In Experiment 1, the masker contained a 25-Hz-wide on-signal band plus flanking bands having envelopes that were either mutually uncorrelated or were comodulated...
December 2017: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/29250827/predictive-coding-in-auditory-perception-challenges-and-unresolved-questions
#16
REVIEW
Susan L Denham, István Winkler
Predictive coding is arguably the currently dominant theoretical framework for the study of perception. It has been employed to explain important auditory perceptual phenomena, and it has inspired theoretical, experimental and computational modelling efforts aimed at describing how the auditory system parses the complex sound input into meaningful units (auditory scene analysis). These efforts have uncovered some vital questions; addressing them could help to further specify predictive coding and clarify some of its basic assumptions...
December 18, 2017: European Journal of Neuroscience
https://www.readbyqxmd.com/read/29247467/release-from-informational-masking-by-auditory-stream-segregation-perception-and-its-neural-correlate
#17
Lena-Vanessa Dolležal, Sandra Tolnai, Rainer Beutelmann, Georg M Klump
In the analysis of acoustic scenes, we easily miss sounds or are insensitive to sound features that are salient if presented in isolation. This insensitivity, which is not due to interference in the inner ear, is termed informational masking (IM). So far, the cellular mechanisms underlying IM have remained elusive. Here, we apply a sequential IM paradigm to humans and gerbils, using a sound level increment detection task to determine the sensitivity to target tones in a background of standard tones (same frequency) and distracting tones (varying in level and frequency)...
December 15, 2017: European Journal of Neuroscience
https://www.readbyqxmd.com/read/29214438/interactions-between-top-down-and-bottom-up-attention-in-barn-owls-tyto-alba
#18
Tidhar Lev-Ari, Yoram Gutfreund
Selective attention, the prioritization of behaviorally relevant stimuli for behavioral control, is commonly divided into two processes: bottom-up, stimulus-driven selection and top-down, task-driven selection. Here, we tested two barn owls in a visual search task that examines attentional capture of the top-down task by bottom-up mechanisms. We trained barn owls to search for a vertical Gabor patch embedded in a circular array of differently oriented Gabor distractors (top-down guided search). To track the point of gaze, a lightweight wireless video camera was mounted on the owl's head...
March 2018: Animal Cognition
https://www.readbyqxmd.com/read/29213233/multisensory-and-modality-specific-influences-on-adaptation-to-optical-prisms
#19
Elena Calzolari, Federica Albini, Nadia Bolognini, Giuseppe Vallar
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target...
2017: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29205588/machine-learning-for-decoding-listeners-attention-from-electroencephalography-evoked-by-continuous-speech
#20
Tobias de Taillez, Birger Kollmeier, Bernd T Meyer
Previous research has shown that it is possible to predict which speaker is attended in a multispeaker scene by analyzing a listener's electroencephalography (EEG) activity. In this study, existing linear models that learn the mapping from neural activity to an attended speech envelope are replaced by a non-linear neural network (NN). The proposed architecture takes into account the temporal context of the estimated envelope and is evaluated using EEG data obtained from 20 normal-hearing listeners who focused on one speaker in a two-speaker setting...
December 4, 2017: European Journal of Neuroscience
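The last abstract describes swapping linear EEG-to-envelope mapping models for a non-linear neural network and deciding which of two speakers is attended. The sketch below is only a hypothetical illustration of that decoding idea (untrained random weights, invented dimensions, plain NumPy), not the paper's architecture: reconstruct an envelope from a sliding EEG context window with a small non-linear model, then pick the speaker whose actual envelope correlates best with the reconstruction.

```python
# Hypothetical attention-decoding sketch (not the paper's architecture): regress
# the speech envelope from an EEG context window with a tiny non-linear model,
# then attribute attention to the speaker whose envelope matches best.
import numpy as np

rng = np.random.default_rng(0)
n_channels, context, n_samples = 64, 27, 1000          # made-up sizes

def mlp_forward(eeg_window, w1, b1, w2, b2):
    """One tanh hidden layer standing in for the non-linear network."""
    hidden = np.tanh(eeg_window.ravel() @ w1 + b1)
    return hidden @ w2 + b2                             # predicted envelope sample

# Untrained placeholder weights, only to show the data flow.
w1 = rng.normal(scale=0.01, size=(n_channels * context, 32))
b1 = np.zeros(32)
w2 = rng.normal(scale=0.1, size=32)
b2 = 0.0

eeg = rng.normal(size=(n_channels, n_samples))          # simulated EEG
env_a = rng.normal(size=n_samples)                      # speaker A envelope
env_b = rng.normal(size=n_samples)                      # speaker B envelope

pred = np.array([mlp_forward(eeg[:, t - context:t], w1, b1, w2, b2)
                 for t in range(context, n_samples)])
corr_a = np.corrcoef(pred, env_a[context:])[0, 1]
corr_b = np.corrcoef(pred, env_b[context:])[0, 1]
print("decoded attended speaker:", "A" if corr_a > corr_b else "B")
```

With real data the network would first be trained on segments with a known attended speaker; here the weights are random, so the printed decision is meaningless and only the shape of the pipeline is the point.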