Auditory scene recognition

Harun Karimpur, Kai Hamburger
Spatial representations are a result of multisensory information integration. More recent findings suggest that the multisensory information processing of a scene can be facilitated when paired with a semantically congruent auditory signal. This congruency effect was taken as evidence that audio-visual integration occurs for complex scenes. As navigation in our environment consists of a seamless integration of complex sceneries, a fundamental question arises: how is human landmark-based wayfinding affected by multimodality? In order to address this question, two experiments were conducted in a virtual environment...
2016: Frontiers in Psychology
Simon J Hazenberg, Rob van Lier
In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition without vision. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects that were placed on a round platform. Critically, in half of the trials, object-specific sounds were played when objects were touched (bimodal condition), while sounds were turned off in the other half of the trials (unimodal condition). After participants first explored the scene, two objects were swapped, and the task was to report which of the objects had swapped positions...
July 2016: I-Perception
Brigitta Tóth, Zsuzsanna Kocsis, Gábor P Háden, Ágnes Szerafin, Barbara G Shinn-Cunningham, István Winkler
In everyday acoustic scenes, figure-ground segregation typically requires grouping sound elements over both time and frequency. The electroencephalogram was recorded while listeners detected repeating tonal complexes, each composed of a random set of pure tones, embedded in stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure against the randomly changing background. Detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased and as the number of repeated complexes (duration) increased...
November 1, 2016: NeuroImage
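
The stochastic figure-ground stimulus Tóth and colleagues describe can be sketched in a few lines: a sequence of short chords of random pure tones, with a fixed "figure" complex of frequencies repeated across consecutive chords. Below is a minimal Python sketch; all parameter values (frequency pool, chord duration, figure coherence and duration) are illustrative assumptions, not the study's actual settings.

    # Minimal sketch (not the study's stimulus code): chords of random pure
    # tones with a repeating "figure" complex embedded in some of them.
    import numpy as np

    def chord(freqs, dur=0.05, sr=44100):
        """Sum of equal-amplitude pure tones at the given frequencies."""
        t = np.arange(int(dur * sr)) / sr
        return sum(np.sin(2 * np.pi * f * t) for f in freqs)

    rng = np.random.default_rng(0)
    pool = np.logspace(np.log10(200), np.log10(5000), 60)  # candidate frequencies

    coherence = 4                  # tones per repeated figure complex
    fig_duration = 6               # number of chords containing the figure
    figure = rng.choice(pool, size=coherence, replace=False)

    chords = []
    for i in range(12):            # 12 chords in total
        ground = rng.choice(pool, size=8, replace=False)  # random background
        freqs = np.concatenate([ground, figure]) if i < fig_duration else ground
        chords.append(chord(freqs))
    stimulus = np.concatenate(chords)  # listeners detect the repeating figure

Raising coherence or fig_duration makes the figure easier to hear out, which is the behavioural effect the study reports.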
Hanna Renvall, Noël Staeren, Claudia S Barz, Anke Ley, Elia Formisano
This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds and environmental sounds, which were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while the spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to...
2016: Frontiers in Neuroscience
Achille Pasqualotto, Tayfun Esenkaya
Visual-to-auditory sensory substitution is used to convey visual information through audition. Initially created to compensate for blindness, it consists of software that converts the visual images captured by a video camera into equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, whereas it has been widely used to investigate object recognition...
2016: Frontiers in Behavioral Neuroscience
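
The image-to-soundscape conversion such software performs can be illustrated with a vOICe-style mapping, a common design in this literature (an assumption here; the study's actual software may differ): image columns are scanned left to right over time, pixel row maps to frequency with the top of the image highest, and pixel brightness maps to amplitude.

    # Illustrative vOICe-style visual-to-auditory mapping (an assumption,
    # not necessarily the device used in the study).
    import numpy as np

    def image_to_soundscape(img, scan_time=1.0, sr=22050,
                            fmin=500.0, fmax=5000.0):
        """img: 2-D array of brightness values in [0, 1], shape (rows, cols).
        Assumes a non-blank image."""
        rows, cols = img.shape
        n = int(scan_time / cols * sr)           # samples per image column
        t = np.arange(n) / sr
        # one frequency per pixel row, highest at the top of the image
        freqs = np.logspace(np.log10(fmax), np.log10(fmin), rows)
        tones = np.sin(2 * np.pi * freqs[:, None] * t)     # (rows, n)
        out = [img[:, c] @ tones for c in range(cols)]     # brightness-weighted
        sound = np.concatenate(out)
        return sound / np.max(np.abs(sound))     # normalize to [-1, 1]

A listener hears each shape as a characteristic time-frequency pattern, which is what makes a spatial-learning task like the one in this study possible without vision.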
Iku Nemoto, Ryosuke Yuhara
Auditory scene analysis is essential in daily life for extracting necessary information from complex acoustic environments, as well as from the intricate development of musical compositions. Auditory illusions and ambiguity are important factors in auditory scene analysis and have been studied extensively. We here report a novel form of ambiguity involving two illusory melodies implied by a very simple stimulus consisting of two sustained tones of different frequencies and an intermittently repeated tone of a frequency between the sustained tones...
2015: Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
David J Brown, Andrew J R Simpson, Michael J Proulx
A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener...
2015: Frontiers in Psychology
Michael A Arbib
We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action and action recognition and opportunistic scheduling of macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); and then we (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c)...
March 2016: Physics of Life Reviews
Joel Myerson, Brent Spehar, Nancy Tye-Murray, Kristin Van Engen, Sandra Hale, Mitchell S Sommers
Whereas the energetic and informational masking effects of unintelligible babble on auditory speech recognition are well established, the present study is the first to investigate its effects on visual speech recognition. Young and older adults performed two lipreading tasks while simultaneously experiencing either quiet, speech-shaped noise, or 6-talker background babble. Both words at the end of uninformative carrier sentences and key words in everyday sentences were harder to lipread in the presence of babble than in the presence of speech-shaped noise or quiet...
January 2016: Attention, Perception & Psychophysics
Chetan Singh Thakur, Runchun M Wang, Saeed Afshar, Tara J Hamilton, Jonathan C Tapson, Shihab A Shamma, André van Schaik
The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA)...
2015: Frontiers in Neuroscience
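
The underlying idea of foreground/background segregation can be illustrated, in a much simplified form, with time-frequency masking. The sketch below is not the authors' neuromorphic FPGA algorithm; it applies an ideal binary mask, which requires a clean reference of the target and is therefore only a conceptual illustration of what a segregation system must approximate in real time.

    # Generic time-frequency masking illustration (not the paper's algorithm):
    # keep spectrogram cells where the target dominates the mixture.
    import numpy as np
    from scipy.signal import stft, istft

    def segregate(mixture, target_ref, sr=16000, nperseg=512):
        """Ideal-binary-mask segregation given a clean target reference."""
        _, _, M = stft(mixture, fs=sr, nperseg=nperseg)
        _, _, T = stft(target_ref, fs=sr, nperseg=nperseg)
        mask = np.abs(T) ** 2 > 0.5 * np.abs(M) ** 2  # target-dominant cells
        _, foreground = istft(M * mask, fs=sr, nperseg=nperseg)
        return foreground

A real-time system has no access to target_ref; estimating the mask from the mixture alone is precisely the hard problem a framework like the one described must solve.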
Meital Avivi-Reich, Agnes Jakubczyk, Meredyth Daneman, Bruce A Schneider
PURPOSE: We investigated how age and linguistic status affected listeners' ability to follow and comprehend 3-talker conversations, and the extent to which individual differences in language proficiency predict speech comprehension under difficult listening conditions. METHOD: Younger and older native (L1) listeners, as well as young non-native (L2) listeners, listened to 3-talker conversations, with or without spatial separation between the talkers, either in quiet or against a moderate- or high-level 12-talker babble background, and were asked to answer questions regarding the conversations' contents...
October 2015: Journal of Speech, Language, and Hearing Research: JSLHR
Marissa L Gamble, Marty G Woldorff
To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. Although it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff [Gamble, M. L., & Woldorff, M. G., The temporal cascade of neural processes underlying target detection and attentional processing during auditory search. Cerebral Cortex, 2014] recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented auditory scene...
September 2015: Journal of Cognitive Neuroscience
Gildas Brébion, Christian Stephan-Otto, Judith Usall, Elena Huerta-Ramos, Mireia Perez del Olmo, Jorge Cuevas-Esteban, Josep Maria Haro, Susana Ochoa
OBJECTIVE: A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. METHOD: B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity...
September 2015: Neuropsychology
Kayleigh Fawcett, John M Ratcliffe
We compared the influence of conspecifics and clutter on echolocation and flight speed in the bat Myotis daubentonii. In a large room, actual pairs of bats exhibited greater disparity in peak frequency (PF), minimum frequency (Fmin), and call period compared to virtual pairs of bats, each flying alone. Greater inter-individual disparity in PF and Fmin may reduce acoustic interference and/or increase signal self-recognition in the presence of conspecifics. Bats flying alone in a smaller flight room, simulating a more cluttered habitat than the large flight room, produced calls of shorter duration and call period and of lower intensity, and flew at lower speeds...
March 2015: Journal of Comparative Physiology. A, Neuroethology, Sensory, Neural, and Behavioral Physiology
Allison Ponzio, Mara Mather
Enhanced memory for emotional items often comes at the cost of memory for the background scenes. Because emotional foreground items both induce arousal and attract attention, it is not clear whether these emotion effects simply result from shifts in visual attention during encoding or whether arousal has effects beyond simple attention capture. In the current study, participants viewed a series of scenes, each of which either contained a foreground object or did not, and then, after each image, heard either an emotionally arousing negative sound or a neutral sound...
December 2014: Emotion
Yonatan I Fishman, Mitchell Steinschneider, Christophe Micheyl
The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood...
September 10, 2014: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
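
The stimulus class the study builds on is easy to make concrete: a harmonic complex tone is a sum of harmonics of a single F0, and mixing two HCTs with different F0s yields the pitch-based segregation cue described above. A minimal synthesis sketch, with illustrative F0 values and harmonic counts:

    # Two concurrent harmonic complex tones (HCTs) with different F0s.
    import numpy as np

    def hct(f0, n_harmonics=10, dur=0.5, sr=44100):
        """Harmonic complex tone: equal-amplitude harmonics of f0."""
        t = np.arange(int(dur * sr)) / sr
        return sum(np.sin(2 * np.pi * k * f0 * t)
                   for k in range(1, n_harmonics + 1))

    # a 4-semitone F0 separation between the simultaneous tones
    mixture = hct(220.0) + hct(220.0 * 2 ** (4 / 12))
    mixture /= np.max(np.abs(mixture))

Listeners typically hear such a mixture as two sources with distinct pitches once the F0 separation is sufficiently large.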
Melissa K Gregg, Vanessa C Irsik, Joel S Snyder
Change deafness is the failure to notice changes in an auditory scene. In this study, we sought to determine if change deafness is a perceptual error, rather than only a reflection of verbal memory limitations. We also examined how successful encoding of objects within a scene is related to successful detection of changes. Event-related potentials (ERPs) were recorded while listeners completed a change-detection and an object-encoding task with scenes composed of recognizable sounds or unrecognizable temporally scrambled versions of the recognizable sounds...
August 2014: Neuropsychologia
Markus Huff, Tino G K Meitz, Frank Papenmeier
Humans understand text and film by mentally representing their contents in situation models. These describe situations using dimensions like time, location, protagonist, and action. Changes in 1 or more dimensions (e.g., a new character enters the scene) cause discontinuities in the story line and are often perceived as boundaries between 2 meaningful units. Recent theoretical advances in event perception led to the assumption that situation models are represented in the form of event models in working memory...
September 2014: Journal of Experimental Psychology. Learning, Memory, and Cognition
Giles Hamilton-Fletcher, Jamie Ward
Visual sensory substitution devices (SSDs) allow visually deprived individuals to navigate and recognise the 'visual world'; SSDs also provide opportunities for psychologists to study modality-independent theories of perception. At present, most research has focused on encoding greyscale vision. However, at the low spatial resolutions received by SSD users, colour information enhances object-ground segmentation and provides more stable cues for scene and object recognition. Many attempts have been made to encode colour information in tactile or auditory modalities, but many of these studies exist in isolation...
2013: Multisensory Research
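
One colour-to-sound mapping of the kind this review surveys can be sketched as follows; the specific assignments (hue to pitch, saturation to timbre, lightness to loudness) are illustrative assumptions, not any particular device's encoding.

    # Illustrative colour sonification (hypothetical mapping, for intuition only).
    import colorsys
    import numpy as np

    def colour_to_tone(r, g, b, dur=0.3, sr=22050):
        """r, g, b in [0, 1] -> a short tone encoding the colour."""
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        f0 = 220.0 * 2 ** h                      # hue spans one octave of pitch
        t = np.arange(int(dur * sr)) / sr
        n_harm = 1 + int(round(s * 7))           # saturation adds harmonics
        tone = sum(np.sin(2 * np.pi * k * f0 * t) / k
                   for k in range(1, n_harm + 1))
        return l * tone / np.max(np.abs(tone))   # lightness sets loudness

A mapping like this must remain discriminable at low resolution without crowding out the spatial information the soundscape already carries.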
Ramanathan Subramanian, Divya Shankar, Nicu Sebe, David Melcher
A basic question in vision research concerns where people look in complex scenes and how this influences their performance in various tasks. Previous studies with static images have demonstrated a close link between where people look and what they remember. Here, we examined the pattern of eye movements while participants watched neutral and emotional clips from Hollywood-style movies. Participants answered multiple-choice memory questions concerning visual and auditory scene details immediately after viewing 1-min-long neutral or emotional movie clips...
2014: Journal of Vision