Read by QxMD

Automatic visual word recognition

A R Weighall, L M Henderson, D J Barr, S A Cairney, M G Gaskell
Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e...
August 22, 2016: Brain and Language
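In the visual world paradigm mentioned above, the time course of lexical competition is typically measured as the proportion of trials fixating each object within successive time bins after word onset. A minimal sketch of that computation (the bin size, labels, and data here are illustrative, not from the study):

```python
from collections import Counter

def fixation_proportions(fixations, bin_ms=50, trial_ms=100):
    """Per-bin proportion of fixation samples landing on each object.

    fixations: list of (time_ms, object_label) samples pooled over trials.
    Returns {object_label: [proportion per time bin]}.
    """
    n_bins = trial_ms // bin_ms
    counts = [Counter() for _ in range(n_bins)]   # per-bin object tallies
    totals = [0] * n_bins                          # per-bin sample counts
    for t, obj in fixations:
        b = min(int(t // bin_ms), n_bins - 1)
        counts[b][obj] += 1
        totals[b] += 1
    objects = {obj for _, obj in fixations}
    return {obj: [counts[b][obj] / totals[b] if totals[b] else 0.0
                  for b in range(n_bins)]
            for obj in objects}

# Four hypothetical samples: competition early, target wins late
samples = [(10, "target"), (30, "novel"), (60, "target"), (70, "target")]
props = fixation_proportions(samples, bin_ms=50, trial_ms=100)
```

Competition between a novel word and an existing word would show up as elevated novel-object proportions in the early bins.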
Alejandro Rituerto, Henrik Andreasson, Ana C Murillo, Achim Lilienthal, José Jesús Guerrero
Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision-based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion when creating the visual words...
2016: Sensors
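The standard BoW step criticized above quantizes each local image descriptor to its nearest "visual word" in a learned vocabulary (typically built by k-means) and summarizes the image as a word-count histogram. A minimal hard-assignment sketch, with a toy two-word vocabulary rather than a learned one:

```python
def bow_histogram(descriptors, vocabulary):
    """Quantize each local descriptor to its nearest visual word and
    return a normalized word-count histogram (hard-assignment BoW)."""
    counts = [0] * len(vocabulary)
    for d in descriptors:
        # squared Euclidean distance to every visual word
        dists = [sum((a - b) ** 2 for a, b in zip(d, w)) for w in vocabulary]
        counts[dists.index(min(dists))] += 1
    total = sum(counts)
    return [c / total for c in counts]

vocab = [[0.0, 0.0], [1.0, 1.0]]                      # 2 toy visual words
desc = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]]           # 3 local descriptors
hist = bow_histogram(desc, vocab)                     # [1/3, 2/3]
```

Note that nothing in this assignment rule looks at what the words depict, which is exactly the semantic blindness the paper targets.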
Javier García-Orza, Montserrat Comesaña, Ana Piñeiro, Ana Paula Soares, Manuel Perea
Recent research has shown that leet words (i.e., words in which some of the letters are replaced by visually similar digits; e.g., VIRTU4L) can be processed as their base words without much cost. However, it remains unclear whether the digits inserted in leet words are simply processed as letters or whether they are simultaneously processed as numbers (i.e., in terms of access to their quantity representation). To address this question, we conducted 2 experiments that examined the size congruity effect (i.e...
June 2016: Journal of Experimental Psychology. Learning, Memory, and Cognition
Adam Stone, Geo Kartheiser, Peter C Hauser, Laura-Ann Petitto, Thomas E Allen
Studies have shown that American Sign Language (ASL) fluency has a positive impact on deaf individuals' English reading, but the cognitive and cross-linguistic mechanisms permitting the mapping of a visual-manual language onto a sound-based language have yet to be elucidated. Fingerspelling, which represents English orthography with 26 distinct hand configurations, is an integral part of ASL and has been suggested to provide deaf bilinguals with important cross-linguistic links between sign language and orthography...
2015: PloS One
Otávio A B Penatti, Rafael de O Werneck, Waldir R de Almeida, Bernardo V Stein, Daniel V Pazinato, Pedro R Mendes Júnior, Ricardo da S Torres, Anderson Rocha
In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes...
November 1, 2015: Computers in Biology and Medicine
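The "image sampling with large regions" described above amounts to extracting descriptors over a coarse grid of big patches instead of many small ones, so far fewer regions need describing per frame. A sketch of such a sampling grid (the region and stride values are illustrative, not the paper's):

```python
def grid_regions(width, height, region, stride):
    """Top-left corners of square sampling regions on a regular grid.
    Larger `region`/`stride` values yield fewer regions to describe,
    which is what cuts the image characterization time."""
    return [(x, y)
            for y in range(0, height - region + 1, stride)
            for x in range(0, width - region + 1, stride)]

# 64x64 image, 32-pixel regions, stride 16 -> a 3x3 grid of 9 regions
corners = grid_regions(64, 64, region=32, stride=16)
```

Each region would then be described (e.g., by a local descriptor) and quantized into the visual-word histogram as usual.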
Stefan Mathe, Cristian Sminchisescu
Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both visual object and action recognition tasks in computer vision. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in 'saccade and fixate' regimes, the methodology and emphasis in the human and the computer vision communities remain sharply distinct. Here, we make three contributions aiming to bridge this gap...
July 2015: IEEE Transactions on Pattern Analysis and Machine Intelligence
Franck-Emmanuel Roux, Krasimir Miskin, Jean-Baptiste Durand, Oumar Sacko, Emilie Réhault, Rositsa Tanova, Jean-François Démonet
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks...
October 2015: Cortex; a Journal Devoted to the Study of the Nervous System and Behavior
Kandan Ramakrishnan, Steven Scholte, Victor Lamme, Arnold Smeulders, Sennay Ghebreab
Biologically inspired computational models replicate the hierarchical visual processing in the human ventral stream. One such recent model, the Convolutional Neural Network (CNN), has achieved state-of-the-art performance on automatic visual recognition tasks. The CNN architecture contains successive layers of convolution and pooling, and resembles the simple and complex cell hierarchy proposed by Hubel and Wiesel. This makes it a candidate model to test against the human brain. In this study we look at 1) where in the brain different layers of the CNN account for brain responses, and 2) how the CNN compares against existing and widely used hierarchical vision models such as Bag-of-Words (BoW) and HMAX...
2015: Journal of Vision
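The convolution-and-pooling pairing described above can be made concrete in a few lines: convolution plays the role of Hubel and Wiesel's simple cells (local feature matching), max pooling the role of complex cells (local invariance). A minimal single-channel sketch with a toy edge filter, not any particular trained network:

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as CNNs implement it)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    return [[max(fmap[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

img = [[1, 0, 2, 1],
       [0, 1, 3, 0],
       [2, 1, 0, 1],
       [1, 0, 1, 2]]
edge = [[1, -1]]                            # toy "simple cell" filter
pooled = max_pool(conv2d_valid(img, edge))  # "complex cell" stage
```

Stacking several such convolution/pooling stages, with learned filters, gives the layer hierarchy the study maps onto brain responses.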
Sarah Schuster, Stefan Hawelka, Fabio Richlan, Philipp Ludersdorfer, Florian Hutzler
The predominant finding of studies assessing the response of the left ventral occipito-temporal cortex (vOT) to familiar words and to unfamiliar, but pronounceable letter strings (pseudowords) is higher activation for pseudowords. One explanation for this finding is that readers automatically generate predictions about a letter string's identity - pseudowords mismatch these predictions and the higher vOT activation is interpreted as reflecting the resultant prediction errors. The majority of studies, however, administered tasks which imposed demands above and beyond the intrinsic requirements of visual word recognition...
2015: Scientific Reports
Yasmine Probst, Duc Thanh Nguyen, Minh Khoi Tran, Wanqing Li
Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including the scale-invariant feature transform (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment...
August 2015: Nutrients
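Of the features listed above, LBP is simple enough to sketch: each pixel's 8 neighbours are thresholded against the centre value and the comparisons are packed into one byte, and the image is then described by a histogram of these codes. A minimal per-pixel version (bit ordering conventions vary between implementations; this one is illustrative):

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code for pixel (y, x).
    Each neighbour >= centre contributes one bit, clockwise from top-left."""
    centre = img[y][x]
    neighbours = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                  img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                  img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= centre:
            code |= 1 << bit
    return code

patch = [[9, 1, 9],
         [1, 5, 1],
         [9, 1, 9]]
code = lbp_code(patch, 1, 1)   # corner bits 0,2,4,6 set -> 85
```

The resulting codes are texture descriptors, which is why LBP complements colour and SIFT for food images.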
Olfa Ben Ahmed, Maxim Mizotin, Jenny Benois-Pineau, Michèle Allard, Gwénaëlle Catheline, Chokri Ben Amar
Recently, several pattern recognition methods have been proposed to automatically discriminate between patients with and without Alzheimer's disease using different imaging modalities: sMRI, fMRI, PET and SPECT. Classical approaches in visual information retrieval have been successfully used for analysis of structural MRI brain images. In this paper, we use the visual indexing framework and pattern recognition analysis based on structural MRI data to discriminate three classes of subjects: normal controls (NC), mild cognitive impairment (MCI) and Alzheimer's disease (AD)...
September 2015: Computerized Medical Imaging and Graphics: the Official Journal of the Computerized Medical Imaging Society
Degao Li, Kejuan Gao, Xueyun Wu, Ying Xong, Xiaojun Chen, Weiwei He, Ling Li, Jingjia Huang
Two experiments investigated Chinese deaf and hard of hearing (DHH) adolescents' recognition of category names in an innovative task of semantic categorization. In each trial, the category-name target appeared briefly at the screen center followed by two words or two pictures for two basic-level exemplars of high or middle typicality, which appeared briefly approximately where the target had appeared. Participants' reaction times when they were deciding whether the target referred to living or nonliving things consistently revealed the typicality effect for the word, but a reversed-typicality effect for picture-presented exemplars...
2015: American Annals of the Deaf
Ricardo Lopes, Rita Gouveia Nunes, Mário Rodrigues Simões, Mário Forjaz Secca, Alberto Leal
Automatic recognition of words from letter strings is a critical processing step in reading that is lateralized to the left-hemisphere middle fusiform gyrus in the so-called Visual Word Form Area (VWFA). Surgical lesions in this location can lead to irreversible alexia. Very early left hemispheric lesions can lead to transfer of the VWFA to the nondominant hemisphere, but it is currently unknown if this capability is preserved in epilepsies developing after reading acquisition. In this study, we aimed to determine the lateralization of the VWFA in late-onset left inferior occipital lobe epilepsies and also the effect of surgical disconnection from the adjacent secondary visual areas...
May 2015: Epilepsy & Behavior: E&B
Yuanyuan Chen, Matthew H Davis, Friedemann Pulvermüller, Olaf Hauk
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision...
September 2015: Journal of Cognitive Neuroscience
Andru P Twinanda, Emre O Alkan, Afshin Gangi, Michel de Mathelin, Nicolas Padoy
PURPOSE: Context-aware systems for the operating room (OR) provide the possibility to significantly improve surgical workflow through various applications such as efficient OR scheduling, context-sensitive user interfaces, and automatic transcription of medical procedures. Being an essential element of such a system, surgical action recognition is thus an important research area. In this paper, we tackle the problem of classifying surgical actions from video clips that capture the activities taking place in the OR...
June 2015: International Journal of Computer Assisted Radiology and Surgery
Kristof Strijkers, Daisy Bertrand, Jonathan Grainger
We investigated how linguistic intention affects the time course of visual word recognition by comparing the brain's electrophysiological response to a word's lexical frequency, a well-established psycholinguistic marker of lexical access, when participants actively retrieve the meaning of the written input (semantic categorization) versus a situation where no language processing is necessary (ink color categorization). In the semantic task, the ERPs elicited by high-frequency words started to diverge from those elicited by low-frequency words as early as 120 msec after stimulus onset...
August 2015: Journal of Cognitive Neuroscience
Elsa M Labuschagne, Derek Besner
It is widely accepted that the presentation of a printed word "automatically" triggers processing that ends with full semantic activation. This processing, among other characteristics, is held to occur without intention, and cannot be stopped. The results of the present experiment show that this account is problematic in the context of a variant of the Stroop paradigm. Subjects named the print color of words that were either neutral or semantically related to color. When the letters were all colored, all spatially cued, and the spaces between letters were filled with characters from the top of the keyboard (i...
2015: Frontiers in Psychology
Kandan Ramakrishnan, H Steven Scholte, Iris I A Groen, Arnold W M Smeulders, Sennay Ghebreab
The human visual system is assumed to transform low level visual features to object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and Bag of Words (BoW) model from computer vision. Both these computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and the models have been proven effective in automatic object and scene recognition...
2014: Frontiers in Computational Neuroscience
Tomoka Kobayashi, Masumi Inagaki, Hiroko Yamazaki, Yosuke Kita, Makiko Kaga, Akira Oka
OBJECTIVE: Developmental dyslexia (DD) is a neurodevelopmental disorder that is characterized by difficulties with accurate and/or fluent word recognition and by poor spelling and decoding abilities. The magnocellular deficit theory is one of several hypotheses that have been proposed to explain the pathophysiology of DD. In this study, we investigated magnocellular system dysfunction in Japanese dyslexic children. METHODS: Subjects were 19 dyslexic children (DD group) and 19 age-matched healthy children (TD group)...
November 2014: No to Hattatsu. Brain and Development
Suzanne E Welcome, Adrian Pasquarella, Xi Chen, David R Olson, Marc F Joanisse
Previous functional imaging studies have highlighted the role of left ventral temporal cortex in processing written word forms. We explored activation and anatomical connectivity of this region in HE, a professional writer with alexia as a result of stroke affecting primarily white matter in the left inferior temporal lobe. We used a one-back visual recognition task and functional Magnetic Resonance Imaging to elicit automatic activation to various orthographic and non-orthographic stimuli. Surprisingly, HE showed cortical activation in the left mid-fusiform area during the presentation of words and word-like stimuli, suggesting that this region's role in processing visual words is intact despite his severely impaired reading...
December 2014: Neuropsychologia