Auditory scene classification

Dana Barniv, Israel Nelken
When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases...
2015: PloS One
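The alternating two-tone stimulus described in the abstract above is straightforward to synthesize. The sketch below is a minimal illustration, not the authors' stimulus code; the tone frequencies, durations, inter-tone gap, and 48 kHz sample rate are arbitrary choices made here for demonstration.

```python
import numpy as np

def pure_tone(freq_hz, dur_s, fs=48000, ramp_s=0.005):
    """Pure tone with raised-cosine onset/offset ramps to avoid clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * fs)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    tone[:n_ramp] *= ramp
    tone[-n_ramp:] *= ramp[::-1]
    return tone

def alternating_sequence(f_low=500.0, f_high=800.0, tone_dur=0.1,
                         gap_dur=0.025, n_pairs=20, fs=48000):
    """Low/high/low/high ... sequence; a larger f_high - f_low separation
    makes segregation into two streams more likely."""
    gap = np.zeros(int(gap_dur * fs))
    pair = np.concatenate([pure_tone(f_low, tone_dur, fs), gap,
                           pure_tone(f_high, tone_dur, fs), gap])
    return np.tile(pair, n_pairs)

sequence = alternating_sequence()
print(f"{sequence.size / 48000:.2f} s of stimulus")
```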
Brian J Malone, Brian H Scott, Malcolm N Semple
The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how, and how well, the timing of onsets and offsets of gated tones is encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression...
April 1, 2015: Journal of Neurophysiology
Inyong Choi, Siddharth Rajaram, Lenny A Varghese, Barbara G Shinn-Cunningham
Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG)...
2013: Frontiers in Human Neuroscience
Kun Han, DeLiang Wang
A key problem in computational auditory scene analysis (CASA) is monaural speech segregation, which has proven to be very challenging. For monaural mixtures, one can only utilize the intrinsic properties of speech or interference to segregate target speech from background noise. The ideal binary mask (IBM) has been proposed as a main goal of sound segregation in CASA and has led to substantial improvements in human speech intelligibility in noise. This study proposes a classification approach to estimate the IBM and employs support vector machines to classify time-frequency units as either target- or interference-dominant...
November 2012: Journal of the Acoustical Society of America
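As a rough sketch of the classification idea described above (not the authors' features, classifier settings, or speech corpus; the synthetic target and interference signals, spectrogram parameters, and the 0 dB local-SNR labelling criterion are assumptions made here), the code below labels time-frequency units of a mixture by whether the target dominates the interference, then trains a support vector machine to predict those labels from the mixture's log power.

```python
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC

fs = 16000
rng = np.random.default_rng(0)

# Synthetic "target" (harmonic tone complex) and "interference" (white noise).
t = np.arange(fs) / fs
target = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660))
noise = rng.standard_normal(fs)

def tf_power(x):
    _, _, Z = stft(x, fs=fs, nperseg=512)
    return np.abs(Z) ** 2

p_target, p_noise = tf_power(target), tf_power(noise)

# Ideal binary mask: 1 where target power exceeds interference power
# (a 0 dB local-SNR criterion) in each time-frequency unit.
ibm = (p_target > p_noise).astype(int)

# Features: log power of the mixture in each T-F unit (one unit = one sample).
mixture_power = tf_power(target + noise)
X = np.log(mixture_power + 1e-10).reshape(-1, 1)
y = ibm.ravel()

# Train on a random subset of units to keep the SVM fit quick.
idx = rng.choice(X.shape[0], size=2000, replace=False)
clf = SVC(kernel="rbf").fit(X[idx], y[idx])
print("accuracy over all T-F units:", clf.score(X, y))
```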
Jackson C Liang, Anthony D Wagner, Alison R Preston
Current theories of medial temporal lobe (MTL) function focus on event content as an important organizational principle that differentiates MTL subregions. Perirhinal and parahippocampal cortices may play content-specific roles in memory, whereas hippocampal processing is alternately hypothesized to be content specific or content general. Despite anatomical evidence for content-specific MTL pathways, empirical data for content-based MTL subregional dissociations are mixed. Here, we combined functional magnetic resonance imaging with multiple statistical approaches to characterize MTL subregional responses to different classes of novel event content (faces, scenes, spoken words, sounds, visual words)...
January 2013: Cerebral Cortex
B S Kasper, E M Kasper, E Pauli, H Stefan
In partial epilepsy, a localized hypersynchronous neuronal discharge evolving into a partial seizure affecting a particular cortical region or cerebral subsystem can give rise to subjective symptoms, which are perceived by the affected person only, that is, ictal hallucinations, illusions, or delusions. When forming the beginning of a symptom sequence leading to impairment of consciousness and/or a classic generalized seizure, these phenomena are referred to as an epileptic aura, but they also occur in isolation...
May 2010: Epilepsy & Behavior: E&B
Guoning Hu, DeLiang Wang
Monaural speech segregation has proven to be extremely challenging. While efforts in computational auditory scene analysis have led to considerable progress in voiced speech segregation, little attention has been given to unvoiced speech, which lacks harmonic structure and has weaker energy, and is hence more susceptible to interference. This study proposes a new approach to the problem of segregating unvoiced speech from nonspeech interference. The study first addresses the question of how much speech is unvoiced...
August 2008: Journal of the Acoustical Society of America
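To make the voiced/unvoiced distinction concrete, the sketch below applies a generic frame-level heuristic based on short-time energy and zero-crossing rate, with arbitrary thresholds. It is not the segregation system proposed in the paper, only a common baseline way of telling the two classes of speech frames apart.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice a 1-D signal into overlapping frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def voiced_unvoiced(x, fs, frame_ms=25, hop_ms=10):
    """Crude per-frame voiced/unvoiced labels.

    Voiced speech (vowels, nasals) tends to have high energy and a low
    zero-crossing rate; unvoiced speech (fricatives like /s/, /f/) the
    opposite.  The thresholds here are illustrative, not tuned values.
    """
    frames = frame_signal(x, int(frame_ms * fs / 1000), int(hop_ms * fs / 1000))
    energy = (frames ** 2).mean(axis=1)
    zcr = np.diff(np.signbit(frames), axis=1).mean(axis=1)
    return (energy > 0.5 * energy.mean()) & (zcr < 0.15)

fs = 16000
t = np.arange(fs) / fs
vowel_like = np.sin(2 * np.pi * 150 * t[:fs // 2])            # periodic, "voiced"
fricative_like = 0.1 * np.random.default_rng(1).standard_normal(fs // 2)
labels = voiced_unvoiced(np.concatenate([vowel_like, fricative_like]), fs)
print("fraction of frames labelled voiced:", labels.mean())
```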
James A Simmons, Nicola Neretti, Nathan Intrator, Richard A Altes, Michael J Ferragamo, Mark I Sanderson
Big brown bats (Eptesicus fuscus) emit wideband, frequency-modulated biosonar sounds and perceive the distance to objects from the delay of echoes. Bats remember delays and patterns of delay from one broadcast to the next, and they may rely on delays to perceive target scenes. While emitting a series of broadcasts, they can detect very small changes in delay based on their estimates of delay for successive echoes, which are derived from an auditory time/frequency representation of frequency-modulated sounds...
March 9, 2004: Proceedings of the National Academy of Sciences of the United States of America
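Echo-delay estimation from a frequency-modulated broadcast can be illustrated with a matched filter: cross-correlate the received signal with the emitted chirp and take the lag of the correlation peak. The sketch below is a textbook signal-processing example, not the auditory time/frequency representation studied in the paper; the sweep parameters, echo attenuation, noise level, and simulated 2 ms delay are all invented for illustration.

```python
import numpy as np
from scipy.signal import chirp

fs = 250_000                        # 250 kHz sampling, adequate for a bat-like FM sweep
dur = 0.003                         # 3 ms downward sweep
t = np.arange(int(dur * fs)) / fs
broadcast = chirp(t, f0=100_000, f1=25_000, t1=dur, method="linear")

true_delay_s = 0.002                # simulated echo delay (roughly 34 cm target range)
delay_samples = int(true_delay_s * fs)
echo = np.zeros(delay_samples + broadcast.size)
echo[delay_samples:] += 0.3 * broadcast
echo += 0.05 * np.random.default_rng(2).standard_normal(echo.size)

# Matched filter: cross-correlate the received signal with the broadcast;
# the lag of the correlation peak estimates the echo delay.
corr = np.correlate(echo, broadcast, mode="valid")
estimated_delay_s = np.argmax(corr) / fs
print(f"true delay {true_delay_s * 1e3:.3f} ms, estimated {estimated_delay_s * 1e3:.3f} ms")
```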

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators to build more specific searches

(heart or cardiac or cardio*) AND arrest -"American Heart Association"
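The toy matcher below, written for this page and not QxMD's actual search implementation, shows one way the documented operators (word exclusion with a minus sign, quoted phrases, trailing-asterisk stems, and an implicit AND across terms) could be evaluated against a title. Parenthesised OR groups are omitted for brevity.

```python
import re

def term_matches(term: str, text: str) -> bool:
    """Evaluate one search term against a text, following the tips above.

    -word      excludes texts containing the word
    "a b c"    matches the exact phrase
    word*      matches any word starting with the given stem
    word       matches the whole word, case-insensitively
    """
    if term.startswith("-"):
        return not term_matches(term[1:], text)
    if term.startswith('"') and term.endswith('"'):
        return term[1:-1].lower() in text.lower()
    if term.endswith("*"):
        pattern = r"\b" + re.escape(term[:-1]) + r"\w*"
    else:
        pattern = r"\b" + re.escape(term) + r"\b"
    return re.search(pattern, text, flags=re.IGNORECASE) is not None

def matches_all(terms, text):
    """Implicit AND across terms (parenthesised OR groups are not handled here)."""
    return all(term_matches(t, text) for t in terms)

title = "Neurological outcomes after cardiac arrest: primary prevention strategies"
print(term_matches("Neuro*", title))                          # True: stem match
print(term_matches('"primary prevention of cancer"', title))  # False: exact phrase absent
print(matches_all(["cardiac", "arrest", "-diabetic"], title)) # True: both present, none excluded
```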