Keyword: auditory scene
https://www.readbyqxmd.com/read/29398142/low-and-high-frequency-cortical-brain-oscillations-reflect-dissociable-mechanisms-of-concurrent-speech-segregation-in-noise
#1
Anusha Yellamsetty, Gavin M Bidelman
Parsing simultaneous speech requires that listeners use pitch-guided segregation, which can be affected by the signal-to-noise ratio (SNR) in the auditory scene. The interaction of these two cues may occur at multiple levels within the cortex. The aims of the current study were to assess the corresponding oscillatory brain rhythms and to determine how listeners exploit pitch and SNR cues to successfully segregate concurrent speech. We recorded electrical brain activity while participants heard double-vowel stimuli whose fundamental frequencies (F0s) differed by zero or four semitones (STs) presented in either clean or noise-degraded (+5 dB SNR) conditions...
February 2, 2018: Hearing Research
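For #1 above, a minimal sketch of the stimulus manipulation the abstract describes: shifting a fundamental frequency by four semitones and mixing the double-vowel pair with noise at +5 dB SNR. The harmonic-complex "vowel" and all parameter values are illustrative assumptions; the study used actual synthesized vowels.

```python
import numpy as np

FS = 16000          # sample rate (Hz); assumed, not taken from the paper
DUR = 0.2           # stimulus duration (s); assumed

def semitone_shift(f0, semitones):
    """Shift a fundamental frequency by a number of semitones."""
    return f0 * 2 ** (semitones / 12)

def harmonic_complex(f0, dur=DUR, fs=FS, n_harmonics=10):
    """Crude stand-in for a vowel: a sum of harmonics of f0."""
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, n_harmonics + 1))

def add_noise(signal, snr_db):
    """Mix white noise into `signal` at the requested SNR in dB."""
    noise = np.random.randn(len(signal))
    scale = np.sqrt(np.mean(signal ** 2)
                    / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    return signal + scale * noise

f0_a = 100.0                                     # Hz; illustrative value
for st in (0, 4):                                # 0-ST vs. 4-ST F0 separation
    f0_b = semitone_shift(f0_a, st)
    pair = harmonic_complex(f0_a) + harmonic_complex(f0_b)
    clean, degraded = pair, add_noise(pair, snr_db=5)   # clean vs. +5 dB SNR
```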
https://www.readbyqxmd.com/read/29395914/integration-of-visual-information-in-auditory-cortex-promotes-auditory-scene-analysis-through-multisensory-binding
#2
Huriye Atilgan, Stephen M Town, Katherine C Wood, Gareth P Jones, Ross K Maddox, Adrian K C Lee, Jennifer K Bizley
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound...
January 24, 2018: Neuron
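For #2, a toy illustration of the temporal-coherence idea: two independent slow envelopes stand in for two sounds in a mixture, and a luminance signal tracks one of them. The correlation at the end simply shows which sound the visual stream is coherent with; the sampling rate and 7-Hz cutoff are assumptions, not values taken from the paper.

```python
import numpy as np

fs = 100                              # envelope sample rate (Hz); assumed
t = np.arange(0, 5, 1 / fs)           # 5 s of "stimulus"
rng = np.random.default_rng(0)

def slow_envelope(cutoff_hz=7.0):
    """Random envelope low-passed below `cutoff_hz`, standing in for the
    slow amplitude fluctuations of one sound in the mixture."""
    spec = np.fft.rfft(rng.standard_normal(len(t)))
    spec[np.fft.rfftfreq(len(t), 1 / fs) > cutoff_hz] = 0
    env = np.fft.irfft(spec, n=len(t))
    return (env - env.min()) / (env.max() - env.min())

env_a, env_b = slow_envelope(), slow_envelope()   # two sounds in the mixture
luminance = env_a                                 # visual stimulus tracks sound A

# Coherence proxy: correlation of the luminance with each sound's envelope.
print(np.corrcoef(luminance, env_a)[0, 1])   # ~1.0 -> coherent (bound) sound
print(np.corrcoef(luminance, env_b)[0, 1])   # ~0.0 -> independent sound
```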
https://www.readbyqxmd.com/read/29390738/how-does-the-perceptual-organization-of-a-multi-tone-mixture-interact-with-partial-and-global-loudness-judgments
#3
Michaël Vannier, Nicolas Misdariis, Patrick Susini, Nicolas Grimault
Two experiments were conducted to investigate how the perceptual organization of a multi-tone mixture interacts with global and partial loudness judgments. Grouping (single-object) and segregating (two-object) conditions were created using frequency modulation by applying the same or different modulation frequencies to the odd- and even-rank harmonics. While in Experiment 1 (Exp. 1) the two objects had the same loudness, in Experiment 2 (Exp. 2), loudness level differences (LLD) were introduced (LLD = 6, 12, 18, or 24 phons)...
January 2018: Journal of the Acoustical Society of America
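For #3, a sketch of the grouping manipulation: the same or different frequency-modulation rates are applied to odd- and even-rank harmonics of a multi-tone complex, yielding single-object versus two-object conditions. F0, FM rates, and modulation depth are assumed values for illustration only.

```python
import numpy as np

fs, dur = 44100, 1.0                 # assumed sample rate and duration
t = np.arange(int(fs * dur)) / fs
f0 = 200.0                           # illustrative fundamental (Hz)

def fm_partial(k, fm_rate, depth=0.02):
    """Harmonic k of f0 with sinusoidal frequency modulation (depth is the
    fractional frequency excursion)."""
    carrier = k * f0
    # Phase is the integral of the instantaneous frequency.
    phase = 2 * np.pi * (carrier * t
                         + depth * carrier / (2 * np.pi * fm_rate)
                         * np.sin(2 * np.pi * fm_rate * t))
    return np.sin(phase)

def multitone(odd_fm, even_fm, n_harmonics=8):
    """Apply one FM rate to odd-rank and another to even-rank harmonics."""
    return sum(fm_partial(k, odd_fm if k % 2 else even_fm)
               for k in range(1, n_harmonics + 1))

grouped    = multitone(odd_fm=5.0, even_fm=5.0)   # same FM -> one object
segregated = multitone(odd_fm=5.0, even_fm=7.0)   # different FM -> two objects
```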
https://www.readbyqxmd.com/read/29380283/sound-changes-that-lead-to-seeing-longer-lasting-shapes
#4
Arthur G Samuel, Kavya Tangella
To survive, people must construct an accurate representation of the world around them. There is a body of research on visual scene analysis, and a largely separate literature on auditory scene analysis. The current study follows up research from the smaller literature on audiovisual scene analysis. Prior work demonstrated that when there is an abrupt size change to a moving object, observers tend to see two objects rather than one: the abrupt visual change enhances visible persistence of the briefly presented different-sized object...
January 29, 2018: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/29315468/when-do-trauma-patients-lose-temperature-a-prospective-observational-study
#5
S C Eidstuen, O Uleberg, G Vangberg, E Skogvoll
BACKGROUND: The prevalence of hypothermia in trauma patients is high and rapid recognition is important to prevent further heat loss. Hypothermia is associated with poor patient outcomes and is an independent predictor of increased mortality. The aim of this study was to analyze the changes in core body temperature of trauma patients during different treatment phases in the pre-hospital and early in-hospital settings. METHODS: A prospective observational cohort study in severely injured patients...
March 2018: Acta Anaesthesiologica Scandinavica
https://www.readbyqxmd.com/read/29289075/masking-release-by-combined-spatial-and-masker-fluctuation-effects-in-the-open-sound-field
#6
John C Middlebrooks
In a complex auditory scene, signals of interest can be distinguished from masking sounds by differences in source location [spatial release from masking (SRM)] and by differences between masker-alone and masker-plus-signal envelopes. This study investigated interactions between those factors in release of masking of 700-Hz tones in an open sound field. Signal and masker sources were colocated in front of the listener, or the signal source was shifted 90° to the side. In Experiment 1, the masker contained a 25-Hz-wide on-signal band plus flanking bands having envelopes that were either mutually uncorrelated or were comodulated...
December 2017: Journal of the Acoustical Society of America
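For #6, a rough sketch of the masker construction: an on-signal band at 700 Hz plus flanking bands built as envelope-modulated tones, with either a shared (comodulated) or independent (uncorrelated) slow envelope per band. Band centers, envelope cutoff, and durations are assumptions, not the study's exact parameters.

```python
import numpy as np

fs, dur = 48000, 0.5                 # assumed values, not from the study
n = int(fs * dur)
t = np.arange(n) / fs
rng = np.random.default_rng(1)

def slow_envelope(cutoff_hz=12.5):
    """Nonnegative random envelope low-passed at ~12.5 Hz, giving roughly a
    25-Hz-wide band when it modulates a tone."""
    spec = np.fft.rfft(rng.standard_normal(n))
    spec[np.fft.rfftfreq(n, 1 / fs) > cutoff_hz] = 0
    env = np.fft.irfft(spec, n=n)
    env -= env.min()
    return env / env.max()

def masker(centers_hz, comodulated):
    """On-signal plus flanking bands as envelope-modulated tones."""
    shared = slow_envelope()
    bands = [(shared if comodulated else slow_envelope())
             * np.cos(2 * np.pi * fc * t) for fc in centers_hz]
    return np.sum(bands, axis=0)

centers = [500, 700, 900]            # on-signal band at 700 Hz plus flankers
comod = masker(centers, comodulated=True)     # shared envelope across bands
uncorr = masker(centers, comodulated=False)   # independent envelopes
```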
https://www.readbyqxmd.com/read/29250827/predictive-coding-in-auditory-perception-challenges-and-unresolved-questions
#7
Susan L Denham, István Winkler
Predictive coding is arguably the currently dominant theoretical framework for the study of perception. It has been employed to explain important auditory perceptual phenomena, and it has inspired theoretical, experimental, and computational modelling efforts aimed at describing how the auditory system parses the complex sound input into meaningful units (auditory scene analysis). These efforts have uncovered some vital questions, addressing which could help to further specify predictive coding and clarify some of its basic assumptions...
December 18, 2017: European Journal of Neuroscience
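For #7, a toy single-level predictive-coding loop, included only to make the framework concrete: an internal estimate is updated in proportion to the prediction error between incoming input and the current prediction. This is a generic illustration, not a model taken from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

true_cause = 1000.0        # hidden cause, e.g. a tone frequency in Hz (assumed)
noise_sd = 50.0            # sensory noise (assumed)
estimate = 800.0           # the system's current prediction
learning_rate = 0.1        # stands in for precision weighting

for _ in range(50):
    observation = true_cause + rng.normal(0, noise_sd)
    prediction_error = observation - estimate       # "surprise" signal
    estimate += learning_rate * prediction_error    # belief update

print(round(estimate, 1))  # converges toward ~1000
```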
https://www.readbyqxmd.com/read/29247467/release-from-informational-masking-by-auditory-stream-segregation-perception-and-its-neural-correlate
#8
Lena-Vanessa Dolležal, Sandra Tolnai, Rainer Beutelmann, Georg M Klump
In the analysis of acoustic scenes we easily miss sounds or are insensitive to sound features that are salient if presented in isolation. This insensitivity, which is not due to interference in the inner ear, is termed informational masking (IM). So far, the cellular mechanisms underlying IM have remained elusive. Here, we apply a sequential IM paradigm to humans and gerbils using a sound-level-increment-detection task determining the sensitivity to target tones in a background of standard (same frequency) and distracting tones (varying in level and frequency)...
December 15, 2017: European Journal of Neuroscience
https://www.readbyqxmd.com/read/29214438/interactions-between-top-down-and-bottom-up-attention-in-barn-owls-tyto-alba
#9
Tidhar Lev-Ari, Yoram Gutfreund
Selective attention, the prioritization of behaviorally relevant stimuli for behavioral control, is commonly divided into two processes: bottom-up, stimulus-driven selection and top-down, task-driven selection. Here, we tested two barn owls in a visual search task that examines attentional capture of the top-down task by bottom-up mechanisms. We trained barn owls to search for a vertical Gabor patch embedded in a circular array of differently oriented Gabor distractors (top-down guided search). To track the point of gaze, a lightweight wireless video camera was mounted on the owl's head...
December 6, 2017: Animal Cognition
https://www.readbyqxmd.com/read/29213233/multisensory-and-modality-specific-influences-on-adaptation-to-optical-prisms
#10
Elena Calzolari, Federica Albini, Nadia Bolognini, Giuseppe Vallar
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target...
2017: Frontiers in Human Neuroscience
https://www.readbyqxmd.com/read/29205588/machine-learning-for-decoding-listeners-attention-from-eeg-evoked-by-continuous-speech
#11
Tobias de Taillez, Birger Kollmeier, Bernd T Meyer
Previous research has shown that it is possible to predict which speaker is attended in a multi-speaker scene by analyzing a listener's EEG activity. In this study, existing linear models that learn the mapping from neural activity to an attended speech envelope are replaced by a non-linear neural network. The proposed architecture takes into account the temporal context of the estimated envelope, and is evaluated using EEG data obtained from 20 normal-hearing listeners who focused on one speaker in a two-speaker setting...
December 4, 2017: European Journal of Neuroscience
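For #11, a minimal sketch of the attended-speaker decoding principle: reconstruct a speech envelope from time-lagged multichannel EEG features and label the attended talker as the one whose envelope correlates best with the reconstruction. A plain least-squares mapping is used here for brevity and the "EEG" is simulated; the paper's contribution is replacing this linear step with a non-linear neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 2000, 16, 10    # illustrative sizes

# Simulated data: the "EEG" weakly reflects the attended speech envelope.
eeg = rng.standard_normal((n_samples, n_channels))
env_attended = rng.standard_normal(n_samples)
env_ignored = rng.standard_normal(n_samples)
eeg[:, 0] += 0.5 * env_attended

def lagged(x, n_lags):
    """Stack time-lagged copies of the EEG as temporal-context features
    (edge wrap-around ignored for brevity)."""
    return np.concatenate([np.roll(x, lag, axis=0) for lag in range(n_lags)],
                          axis=1)

X = lagged(eeg, n_lags)
# Fit and evaluate on the same data purely for brevity; a real analysis
# would use separate training and test segments.
w, *_ = np.linalg.lstsq(X, env_attended, rcond=None)
reconstruction = X @ w

corr_att = np.corrcoef(reconstruction, env_attended)[0, 1]
corr_ign = np.corrcoef(reconstruction, env_ignored)[0, 1]
print("decoded attended talker correctly:", corr_att > corr_ign)
```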
https://www.readbyqxmd.com/read/29125987/contextual-modulation-of-sound-processing-in-the-auditory-cortex
#12
REVIEW
C Angeloni, M N Geffen
In everyday acoustic environments, we navigate through a maze of sounds that possess a complex spectrotemporal structure, spanning many frequencies and exhibiting temporal modulations that differ within frequency bands. Our auditory system needs to efficiently encode the same sounds in a variety of different contexts, while preserving the ability to separate complex sounds within an acoustic scene. Recent work in auditory neuroscience has made substantial progress in studying how sounds are represented in the auditory system under different contexts, demonstrating that auditory processing of seemingly simple acoustic features, such as frequency and time, is highly dependent on co-occurring acoustic and behavioral stimuli...
November 7, 2017: Current Opinion in Neurobiology
https://www.readbyqxmd.com/read/29108832/temporal-processing-in-audition-insights-from-music
#13
Vani G Rajendran, Sundeep Teki, Jan W H Schnupp
Music is a curious example of a temporally patterned acoustic stimulus, and a compelling pan-cultural phenomenon. This review strives to bring some insights from decades of music psychology and sensorimotor synchronization (SMS) literature into the mainstream auditory domain, arguing that musical rhythm perception is shaped in important ways by temporal processing mechanisms in the brain. The feature that unites these disparate disciplines is an appreciation of the central importance of timing, sequencing, and anticipation...
November 3, 2017: Neuroscience
https://www.readbyqxmd.com/read/29090640/listening-into-2030-workshop-an-experiment-in-envisioning-the-future-of-hearing-and-communication-science
#14
Simon Carlile, Gregory Ciccarelli, Jane Cockburn, Anna C Diedesch, Megan K Finnegan, Ervin Hafter, Simon Henin, Sridhar Kalluri, Alexander J E Kell, Erol J Ozmeral, Casey L Roark, Jessica E Sagers
Here we report the methods and output of a workshop examining possible futures of speech and hearing science out to 2030. Using a design thinking approach, a range of human-centered problems in communication were identified that could provide the motivation for a wide range of research. Nine main research programs were distilled and are summarized: (a) measuring brain and other physiological parameters, (b) auditory and multimodal displays of information, (c) auditory scene analysis, (d) enabling and understanding shared auditory virtual spaces, (e) holistic approaches to health management and hearing impairment, (f) universal access to evolving and individualized technologies, (g) biological intervention for hearing dysfunction, (h) understanding the psychosocial interactions with technology and other humans as mediated by technology, and (i) the impact of changing models of security and privacy...
January 2017: Trends in Hearing
https://www.readbyqxmd.com/read/29070441/reduced-auditory-segmentation-potentials-in-first-episode-schizophrenia
#15
Brian A Coffman, Sarah M Haigh, Timothy K Murphy, Justin Leiter-Mcbeth, Dean F Salisbury
Auditory scene analysis (ASA) dysfunction is likely an important component of the symptomatology of schizophrenia. Auditory object segmentation, the grouping of sequential acoustic elements into temporally-distinct auditory objects, can be assessed with electroencephalography through measurement of the auditory segmentation potential (ASP). Further, N2 responses to the initial and final elements of auditory objects are enhanced relative to medial elements, which may indicate auditory object edge detection (initiation and termination)...
October 22, 2017: Schizophrenia Research
https://www.readbyqxmd.com/read/29049599/auditory-scene-analysis-an-attention-perspective
#16
Elyse S Sussman
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception-from passive processes that organize unattended input to attention effects that act at different levels of the system...
October 17, 2017: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/29035691/how-we-hear-the-perception-and-neural-coding-of-sound
#17
Andrew J Oxenham
Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights...
October 16, 2017: Annual Review of Psychology
https://www.readbyqxmd.com/read/28982139/a-bayesian-computational-basis-for-auditory-selective-attention-using-head-rotation-and-the-interaural-time-difference-cue
#18
Dillon A Hambrook, Marko Ilievski, Mohamad Mosadeghzad, Matthew Tata
The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis, and it remains a challenging task for computational systems. It is well known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited...
2017: PloS One
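For #18, a toy grid-based Bayesian update over source azimuth from a noisy ITD observation, with a head rotation resolving the front-back ambiguity of the ITD cue. The ITD model ITD(az) = (d/c)·sin(az) and all constants are standard textbook approximations assumed for illustration, not the paper's implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0        # m/s
HEAD_DIAMETER = 0.18          # m; approximate inter-ear distance (assumed)
ITD_NOISE_SD = 50e-6          # s; assumed measurement noise

azimuths = np.deg2rad(np.arange(-180, 180))               # candidate angles
posterior = np.full(azimuths.shape, 1.0 / azimuths.size)  # flat prior

def expected_itd(az):
    """Simple spherical-head approximation of the ITD for azimuth `az`."""
    return HEAD_DIAMETER / SPEED_OF_SOUND * np.sin(az)

def update(posterior, observed_itd, head_rotation=0.0):
    """Bayes update: weight each candidate azimuth by the likelihood of the
    observed ITD, given the current head orientation."""
    rel_az = azimuths - head_rotation
    likelihood = np.exp(-0.5 * ((observed_itd - expected_itd(rel_az))
                                / ITD_NOISE_SD) ** 2)
    posterior = posterior * likelihood
    return posterior / posterior.sum()

true_az = np.deg2rad(40.0)
# First observation: sin() is front-back ambiguous, so two peaks remain.
posterior = update(posterior, expected_itd(true_az))
# Rotate the head by 30 degrees and observe again: the ambiguity resolves.
rot = np.deg2rad(30.0)
posterior = update(posterior, expected_itd(true_az - rot), head_rotation=rot)
print(np.rad2deg(azimuths[np.argmax(posterior)]))   # ~40 degrees
```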
https://www.readbyqxmd.com/read/28964316/speech-processor-data-logging-helps-in-predicting-early-linguistic-outcomes-in-implanted-children
#19
Letizia Guerzoni, Domenico Cuda
OBJECTIVE: To analyse the value of listening data logged in the speech processor for predicting early auditory and linguistic skills in children who received a cochlear implant in their first 2 years of life. STUDY DESIGN: Prospective observational non-randomized study. METHODS: Ten children with profound congenital sensorineural hearing loss were included in the study. The mean age at CI activation was 16.9 months (SD ± 7.2; range 10-24)...
October 2017: International Journal of Pediatric Otorhinolaryngology
https://www.readbyqxmd.com/read/28954867/a-crucial-test-of-the-population-separation-model-of-auditory-stream-segregation-in-macaque-primary-auditory-cortex
#20
Yonatan I Fishman, Mimi Kim, Mitchell Steinschneider
An important aspect of auditory scene analysis is auditory stream segregation-the organization of sound sequences into perceptual streams reflecting different sound sources in the environment. Several models have been proposed to account for stream segregation. According to the "population separation" (PS) model, alternating ABAB tone sequences are perceived as a single stream or as two separate streams when "A" and "B" tones activate the same or distinct frequency-tuned neuronal populations in primary auditory cortex (A1), respectively...
November 1, 2017: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
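For #20, a sketch of the classic ABAB streaming stimulus: alternating A and B tones whose frequency separation in semitones determines whether listeners hear one stream or two (and, per the population-separation model, whether distinct A1 populations are activated). Tone and gap durations, ramps, and the A-tone frequency are illustrative assumptions.

```python
import numpy as np

fs = 44100                              # assumed sample rate
tone_dur, gap_dur = 0.075, 0.025        # seconds; illustrative timing
f_a = 1000.0                            # A-tone frequency (Hz); assumed

def tone(freq):
    """Pure tone with 5-ms linear onset/offset ramps."""
    t = np.arange(int(fs * tone_dur)) / fs
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)
    return ramp * np.sin(2 * np.pi * freq * t)

def abab_sequence(delta_semitones, n_pairs=10):
    """Alternating A and B tones separated by `delta_semitones`."""
    f_b = f_a * 2 ** (delta_semitones / 12)
    gap = np.zeros(int(fs * gap_dur))
    pair = np.concatenate([tone(f_a), gap, tone(f_b), gap])
    return np.tile(pair, n_pairs)

small_df = abab_sequence(1)    # small separation: typically one stream
large_df = abab_sequence(9)    # large separation: typically two streams
```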