Auditory scene recognition

https://www.readbyqxmd.com/read/28481270/smartphone-based-escalator-recognition-for-the-visually-impaired
#1
Daiki Nakamura, Hotaka Takizawa, Mayumi Aoyagi, Nobuo Ezaki, Shinji Mizuno
It is difficult for visually impaired individuals to recognize escalators in everyday environments. If they ride an escalator in the wrong direction, they may stumble on the steps. This paper proposes a novel method to assist visually impaired individuals in finding available escalators by the use of smartphone cameras. Escalators are recognized by analyzing optical flows in video frames captured by the cameras, and auditory feedback is provided to the individuals. The proposed method was implemented on an Android smartphone and applied to actual escalator scenes...
May 6, 2017: Sensors
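The direction-classification step the abstract describes — deciding from optical-flow vectors whether an escalator is moving toward or away from the user — can be sketched roughly as follows. This is a minimal illustration under assumed conventions (image y grows downward; the function name and threshold are hypothetical), not the authors' implementation:

```python
def escalator_direction(flow_vectors, threshold=0.5):
    """Classify escalator motion from optical-flow displacement vectors.

    flow_vectors: list of (dx, dy) pixel displacements sampled from the
    escalator region across consecutive video frames. In image
    coordinates, y increases downward, so negative mean dy means the
    steps move upward in the frame.
    Returns 'ascending', 'descending', or 'stationary'.
    """
    if not flow_vectors:
        return "stationary"
    mean_dy = sum(dy for _, dy in flow_vectors) / len(flow_vectors)
    if mean_dy < -threshold:   # steps drift up in the image
        return "ascending"
    if mean_dy > threshold:    # steps drift down in the image
        return "descending"
    return "stationary"
```

In a full pipeline, dense flow would be estimated per frame pair (e.g. with an optical-flow routine from a vision library) and the resulting label spoken to the user via text-to-speech, as the paper's auditory-feedback stage suggests.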
https://www.readbyqxmd.com/read/28343958/simultanagnosia-does-not-affect-processes-of-auditory-gestalt-perception
#2
Johannes Rennig, Anna Lena Bleyer, Hans-Otto Karnath
Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure to recognize a global visual Gestalt, such as a visual scene or a complex object consisting of local elements. In this study we investigated to what extent this deficit is specific to the visual domain or whether it reflects defective Gestalt processing per se. To examine whether simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia...
March 23, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28238657/frogs-exploit-statistical-regularities-in-noisy-acoustic-scenes-to-solve-cocktail-party-like-problems
#3
Norman Lee, Jessica L Ward, Alejandro Vélez, Christophe Micheyl, Mark A Bee
Noise is a ubiquitous source of errors in all forms of communication [1]. Noise-induced errors in speech communication, for example, make it difficult for humans to converse in noisy social settings, a challenge aptly named the "cocktail party problem" [2]. Many nonhuman animals also communicate acoustically in noisy social groups and thus face biologically analogous problems [3]. However, we know little about how the perceptual systems of receivers are evolutionarily adapted to avoid the costs of noise-induced errors in communication...
March 6, 2017: Current Biology: CB
https://www.readbyqxmd.com/read/28044013/contributions-of-low-and-high-level-properties-to-neural-processing-of-visual-scenes-in-the-human-brain
#4
REVIEW
Iris I A Groen, Edward H Silson, Chris I Baker
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition...
February 19, 2017: Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences
https://www.readbyqxmd.com/read/28040021/spatially-separating-language-masker-from-target-results-in-spatial-and-linguistic-masking-release
#5
Navin Viswanathan, Kostas Kokkinakis, Brittany T Williams
Several studies demonstrate that in complex auditory scenes, speech recognition is improved when the competing background and target speech differ linguistically. However, such studies typically utilize spatially co-located speech sources which may not fully capture typical listening conditions. Furthermore, co-located presentation may overestimate the observed benefit of linguistic dissimilarity. The current study examines the effect of spatial separation on linguistic release from masking. Results demonstrate that linguistic release from masking does extend to spatially separated sources...
December 2016: Journal of the Acoustical Society of America
https://www.readbyqxmd.com/read/27802226/music-perception-in-dementia
#6
Hannah L Golden, Camilla N Clark, Jennifer M Nicholas, Miriam H Cohen, Catherine F Slattery, Ross W Paterson, Alexander J M Foulkes, Jonathan M Schott, Catherine J Mummery, Sebastian J Crutch, Jason D Warren
Despite much recent interest in music and dementia, music perception has not been widely studied across dementia syndromes using an information processing approach. Here we addressed this issue in a cohort of 30 patients representing major dementia syndromes of typical Alzheimer's disease (AD, n = 16), logopenic aphasia (LPA, an Alzheimer variant syndrome; n = 5), and progressive nonfluent aphasia (PNFA; n = 9) in relation to 19 healthy age-matched individuals. We designed a novel neuropsychological battery to assess perception of musical patterns in the dimensions of pitch and temporal information (requiring detection of notes that deviated from the established pattern based on local or global sequence features) and musical scene analysis (requiring detection of a familiar tune within polyphonic harmony)...
2017: Journal of Alzheimer's Disease: JAD
https://www.readbyqxmd.com/read/27708608/multimodal-integration-of-spatial-information-the-influence-of-object-related-factors-and-self-reported-strategies
#7
Harun Karimpur, Kai Hamburger
Spatial representations are a result of multisensory information integration. More recent findings suggest that the multisensory information processing of a scene can be facilitated when paired with a semantically congruent auditory signal. This congruency effect was taken as evidence that audio-visual integration occurs for complex scenes. As navigation in our environment consists of a seamless integration of complex sceneries, a fundamental question arises: how is human landmark-based wayfinding affected by multimodality? In order to address this question, two experiments were conducted in a virtual environment...
2016: Frontiers in Psychology
https://www.readbyqxmd.com/read/27698985/touching-and-hearing-unseen-objects-multisensory-effects-on-scene-recognition
#8
Simon J Hazenberg, Rob van Lier
In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition without vision. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects that were placed on a round platform. Critically, in half of the trials, object-specific sounds were played when objects were touched (bimodal condition), while sounds were turned off in the other half of the trials (unimodal condition). After first exploring the scene, two objects were swapped and the task was to report, which of the objects swapped positions...
July 2016: I-Perception
https://www.readbyqxmd.com/read/27421185/eeg-signatures-accompanying-auditory-figure-ground-segregation
#9
Brigitta Tóth, Zsuzsanna Kocsis, Gábor P Háden, Ágnes Szerafin, Barbara G Shinn-Cunningham, István Winkler
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i...
November 1, 2016: NeuroImage
https://www.readbyqxmd.com/read/27375416/attention-modulates-the-auditory-cortical-processing-of-spatial-and-category-cues-in-naturalistic-auditory-scenes
#10
Hanna Renvall, Noël Staeren, Claudia S Barz, Anke Ley, Elia Formisano
This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to...
2016: Frontiers in Neuroscience
https://www.readbyqxmd.com/read/27148000/sensory-substitution-the-spatial-updating-of-auditory-scenes-mimics-the-spatial-updating-of-visual-scenes
#11
Achille Pasqualotto, Tayfun Esenkaya
Visual-to-auditory sensory substitution is used to convey visual information through audition. Initially created to compensate for blindness, it consists of software that converts visual images captured by a video camera into equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition...
2016: Frontiers in Behavioral Neuroscience
https://www.readbyqxmd.com/read/26737828/ambiguity-involving-two-illusory-melodies-induced-by-a-simple-configuration-of-tones
#12
Iku Nemoto, Ryosuke Yuhara
Auditory scene analysis is essential in daily life for extracting necessary information from complex acoustic environments, as well as from the intricate development of musical compositions. Auditory illusions and ambiguity are important factors in auditory scene analysis and have been studied extensively. We here report a novel form of ambiguity involving two illusory melodies implied by a very simple stimulus consisting of two sustained tones of different frequencies and an intermittently repeated tone of a frequency between the sustained tones...
2015: Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
https://www.readbyqxmd.com/read/26528202/auditory-scene-analysis-and-sonified-visual-images-does-consonance-negatively-impact-on-object-formation-when-using-complex-sonified-stimuli
#13
David J Brown, Andrew J R Simpson, Michael J Proulx
A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener...
2015: Frontiers in Psychology
https://www.readbyqxmd.com/read/26482863/towards-a-computational-comparative-neuroprimatology-framing-the-language-ready-brain
#14
REVIEW
Michael A Arbib
We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action and action recognition and opportunistic scheduling of macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); and then we (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c)...
March 2016: Physics of Life Reviews
https://www.readbyqxmd.com/read/26474981/cross-modal-informational-masking-of-lipreading-by-babble
#15
Joel Myerson, Brent Spehar, Nancy Tye-Murray, Kristin Van Engen, Sandra Hale, Mitchell S Sommers
Whereas the energetic and informational masking effects of unintelligible babble on auditory speech recognition are well established, the present study is the first to investigate its effects on visual speech recognition. Young and older adults performed two lipreading tasks while simultaneously experiencing either quiet, speech-shaped noise, or 6-talker background babble. Both words at the end of uninformative carrier sentences and key words in everyday sentences were harder to lipread in the presence of babble than in the presence of speech-shaped noise or quiet...
January 2016: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/26388721/sound-stream-segregation-a-neuromorphic-approach-to-solve-the-cocktail-party-problem-in-real-time
#16
Chetan Singh Thakur, Runchun M Wang, Saeed Afshar, Tara J Hamilton, Jonathan C Tapson, Shihab A Shamma, André van Schaik
The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA)...
2015: Frontiers in Neuroscience
https://www.readbyqxmd.com/read/26161679/how-age-linguistic-status-and-the-nature-of-the-auditory-scene-alter-the-manner-in-which-listening-comprehension-is-achieved-in-multitalker-conversations
#17
Meital Avivi-Reich, Agnes Jakubczyk, Meredyth Daneman, Bruce A Schneider
PURPOSE: We investigated how age and linguistic status affected listeners' ability to follow and comprehend 3-talker conversations, and the extent to which individual differences in language proficiency predict speech comprehension under difficult listening conditions. METHOD: Younger and older L1s as well as young L2s listened to 3-talker conversations, with or without spatial separation between talkers, in either quiet or against moderate or high 12-talker babble background, and were asked to answer questions regarding their contents...
October 2015: Journal of Speech, Language, and Hearing Research: JSLHR
https://www.readbyqxmd.com/read/25848684/rapid-context-based-identification-of-target-sounds-in-an-auditory-scene
#18
Marissa L Gamble, Marty G Woldorff
To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. Although it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff [Gamble, M. L., & Woldorff, M. G. The temporal cascade of neural processes underlying target detection and attentional processing during auditory search. Cerebral Cortex (New York, N.Y.: 1991), 2014] recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented, auditory scene...
September 2015: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/25621537/association-of-auditory-verbal-and-visual-hallucinations-with-impaired-and-improved-recognition-of-colored-pictures
#19
Gildas Brébion, Christian Stephan-Otto, Judith Usall, Elena Huerta-Ramos, Mireia Perez del Olmo, Jorge Cuevas-Esteban, Josep Maria Haro, Susana Ochoa
OBJECTIVE: A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. METHOD: B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity...
September 2015: Neuropsychology
https://www.readbyqxmd.com/read/25552318/clutter-and-conspecifics-a-comparison-of-their-influence-on-echolocation-and-flight-behaviour-in-daubenton-s-bat-myotis-daubentonii
#20
COMPARATIVE STUDY
Kayleigh Fawcett, John M Ratcliffe
We compared the influence of conspecifics and clutter on echolocation and flight speed in the bat Myotis daubentonii. In a large room, actual pairs of bats exhibited greater disparity in peak frequency (PF), minimum frequency (Fmin) and call period compared to virtual pairs of bats, each flying alone. Greater inter-individual disparity in PF and Fmin may reduce acoustic interference and/or increase signal self-recognition in the presence of conspecifics. Bats flying alone in a smaller flight room, to simulate a more cluttered habitat as compared to the large flight room, produced calls of shorter duration and call period, lower intensity, and flew at lower speeds...
March 2015: Journal of Comparative Physiology. A, Neuroethology, Sensory, Neural, and Behavioral Physiology