Read by QxMD

Automatic visual word recognition

Lorie-Marlène Brault Foisy, Emmanuel Ahr, Steve Masson, Olivier Houdé, Grégoire Borst
Children tend to confuse reversible letters such as b and d when they start learning to read. According to some authors, mirror errors are a consequence of the mirror generalization (MG) process that allows one to recognize objects independently of their left-right orientation. Although MG is advantageous for the visual recognition of objects, it is detrimental for the visual recognition of reversible letters. Previous studies comparing novice and expert readers demonstrated that MG must be inhibited to discriminate reversible single letters...
June 19, 2017: Journal of Experimental Child Psychology
Fang Wang, Urs Maurer
Fast neural tuning to print has been found within the first 250ms of stimulus processing across different writing systems, indicated by larger N1 negativity in the ERP to words (or characters) compared to control stimuli, such as symbols. However, whether print tuning effects can be modulated by task demands at early stages of visual word recognition is still under debate. To further explore this issue, an ERP study in Chinese was conducted. Familiar, high-frequency, left/right-structured Chinese characters and unfamiliar, stroke number-matched symbols (Korean characters) were used as stimulus conditions...
May 30, 2017: Neuropsychologia
Beverly Wolf, Robert D Abbott, Virginia W Berninger
In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; the control group (N = 16 first graders, M = 7 years 1 month, 7 girls) received manuscript handwriting instruction not systematically related to the other literacy activities. ANOVA showed both groups improved on automatic alphabet writing from memory, but ANCOVA with the automatic alphabet writing task as covariate showed that the treatment group improved significantly more than the control group from the second to ninth month of first grade on dictated spelling and recognition of word-specific spellings among phonological foils...
February 2017: Reading and Writing
Chotiga Pattamadilok, Valérie Chanoine, Christophe Pallier, Jean-Luc Anton, Bruno Nazarian, Pascal Belin, Johannes C Ziegler
Reading involves activation of phonological and semantic knowledge. Yet, the automaticity of the activation of these representations remains subject to debate. The present study addressed this issue by examining how different brain areas involved in language processing responded to a manipulation of bottom-up (level of visibility) and top-down information (task demands) applied to written words. The analyses showed that the same brain areas were activated in response to written words whether the task was symbol detection, rime detection, or semantic judgment...
February 3, 2017: NeuroImage
Ryuichi Doizaki, Junji Watanabe, Maki Sakamoto
Several pairs of Japanese adjective words pertaining to a material's properties, such as roughness and hardness, have been used in Japanese studies to quantitatively evaluate variations in tactile sensations. This method asks observers to analyze their perceptual experiences one by one. An alternative notion is that human perceptual recognition is performed as a whole rather than by using fragmented factors. Based on this notion, we propose a system that can automatically estimate multidimensional ratings of touch from a single sound-symbolic word that has been spontaneously and intuitively expressed by a user...
October 18, 2016: IEEE Transactions on Haptics
Holger Mitterer, Eva Reinisch
Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker...
November 30, 2016: Attention, Perception & Psychophysics
Karinne Sauval, Laetitia Perre, Lynne G Duncan, Eva Marinus, Séverine Casalis
Previous masked priming research has shown automatic phonological activation during visual word recognition in monolingual skilled adult readers. Activation also occurs across languages in bilingual adult readers, suggesting that the activation of phonological representations is not language specific. Less is known about developing readers. First, it is unclear whether there is automatic phonological activation during visual word recognition among children in general. Second, no empirical data exist on whether the activation of phonological representations is language specific or not in bilingual children...
February 2017: Journal of Experimental Child Psychology
A R Weighall, L M Henderson, D J Barr, S A Cairney, M G Gaskell
Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e...
April 2017: Brain and Language
Alejandro Rituerto, Henrik Andreasson, Ana C Murillo, Achim Lilienthal, José Jesús Guerrero
Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision-based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words...
2016: Sensors
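The vocabulary representation described in the abstract above follows the standard Bag-of-Words recipe: cluster local descriptors into a visual vocabulary, then summarize each image as a normalized histogram of visual-word counts. A minimal sketch in Python (NumPy), using random stand-in descriptors and a toy k-means rather than any data or code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_vocabulary(descriptors, k=8, iters=10):
    """Toy k-means: cluster local descriptors into k visual words."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest current center
        dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, vocabulary):
    """Quantize descriptors to the nearest visual word and count occurrences."""
    dist = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    words = dist.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()  # normalized visual-word histogram

train_desc = rng.normal(size=(200, 16))  # stand-in local descriptors
vocab = build_vocabulary(train_desc)
image_desc = rng.normal(size=(50, 16))   # descriptors from one image
h = bow_histogram(image_desc, vocab)
print(h.shape, round(float(h.sum()), 6))
```

In a full pipeline such histograms would feed a classifier or retrieval index; here they only illustrate the quantization step the abstract refers to.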
Javier García-Orza, Montserrat Comesaña, Ana Piñeiro, Ana Paula Soares, Manuel Perea
Recent research has shown that leet words (i.e., words in which some of the letters are replaced by visually similar digits; e.g., VIRTU4L) can be processed as their base words without much cost. However, it remains unclear whether the digits inserted in leet words are simply processed as letters or whether they are simultaneously processed as numbers (i.e., in terms of access to their quantity representation). To address this question, we conducted 2 experiments that examined the size congruity effect (i.e...
June 2016: Journal of Experimental Psychology. Learning, Memory, and Cognition
Adam Stone, Geo Kartheiser, Peter C Hauser, Laura-Ann Petitto, Thomas E Allen
Studies have shown that American Sign Language (ASL) fluency has a positive impact on deaf individuals' English reading, but the cognitive and cross-linguistic mechanisms permitting the mapping of a visual-manual language onto a sound-based language have yet to be elucidated. Fingerspelling, which represents English orthography with 26 distinct hand configurations, is an integral part of ASL and has been suggested to provide deaf bilinguals with important cross-linguistic links between sign language and orthography...
2015: PloS One
Otávio A B Penatti, Rafael de O Werneck, Waldir R de Almeida, Bernardo V Stein, Daniel V Pazinato, Pedro R Mendes Júnior, Ricardo da S Torres, Anderson Rocha
In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes...
November 1, 2015: Computers in Biology and Medicine
Stefan Mathe, Cristian Sminchisescu
Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both visual object recognition and action recognition tasks in computer vision. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in 'saccade and fixate' regimes, the methodology and emphasis in the human and the computer vision communities remain sharply distinct. Here, we make three contributions aiming to bridge this gap...
July 2015: IEEE Transactions on Pattern Analysis and Machine Intelligence
Franck-Emmanuel Roux, Krasimir Miskin, Jean-Baptiste Durand, Oumar Sacko, Emilie Réhault, Rositsa Tanova, Jean-François Démonet
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks...
October 2015: Cortex; a Journal Devoted to the Study of the Nervous System and Behavior
Kandan Ramakrishnan, Steven Scholte, Victor Lamme, Arnold Smeulders, Sennay Ghebreab
Biologically inspired computational models replicate the hierarchical visual processing in the human ventral stream. One such recent model, the Convolutional Neural Network (CNN), has achieved state-of-the-art performance on automatic visual recognition tasks. The CNN architecture contains successive layers of convolution and pooling, and resembles the simple and complex cell hierarchy proposed by Hubel and Wiesel. This makes it a candidate model to test against the human brain. In this study we look at 1) where in the brain different layers of the CNN account for brain responses, and 2) how the CNN network compares against existing and widely used hierarchical vision models such as Bag-of-Words (BoW) and HMAX...
2015: Journal of Vision
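The convolution-and-pooling layer structure that the abstract attributes to the CNN can be illustrated with a minimal NumPy sketch; the filter values, shapes, and ReLU nonlinearity here are illustrative assumptions, not the authors' model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size  # trim so windows tile evenly
    fm = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return fm.max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
edge = np.array([[-1.0, 1.0]])                  # toy horizontal-gradient filter
fmap = np.maximum(conv2d(img, edge), 0)         # convolution + ReLU
pooled = max_pool(fmap)
print(fmap.shape, pooled.shape)
```

Stacking several such convolution and pooling stages, with learned filters, gives the hierarchy of increasingly abstract feature maps that the study compares layer by layer against brain responses.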
Sarah Schuster, Stefan Hawelka, Fabio Richlan, Philipp Ludersdorfer, Florian Hutzler
The predominant finding of studies assessing the response of the left ventral occipito-temporal cortex (vOT) to familiar words and to unfamiliar, but pronounceable letter strings (pseudowords) is higher activation for pseudowords. One explanation for this finding is that readers automatically generate predictions about a letter string's identity - pseudowords mismatch these predictions and the higher vOT activation is interpreted as reflecting the resultant prediction errors. The majority of studies, however, administered tasks which imposed demands above and beyond the intrinsic requirements of visual word recognition...
August 3, 2015: Scientific Reports
Yasmine Probst, Duc Thanh Nguyen, Minh Khoi Tran, Wanqing Li
Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment...
August 2015: Nutrients
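Of the visual features the abstract lists, local binary patterns (LBP) are simple enough to sketch directly: each pixel is coded by thresholding its 8 neighbours against the centre value. The following is the classic 3x3 formulation, not necessarily the paper's exact variant:

```python
import numpy as np

def lbp_codes(image):
    """8-bit LBP code for each interior pixel of a 2-D grayscale image."""
    c = image[1:-1, 1:-1]  # centre pixels
    # clockwise neighbour offsets, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = image[1 + dy:image.shape[0] - 1 + dy,
                   1 + dx:image.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit  # set bit if neighbour >= centre
    return code

img = np.array([[5, 5, 5, 5],
                [5, 9, 1, 5],
                [5, 5, 5, 5]], dtype=float)
codes = lbp_codes(img)
print(codes)  # one code per interior pixel
```

A histogram of these codes over an image region is the LBP texture descriptor, which can then be combined with SIFT and colour features inside the BoW model the abstract describes.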
Olfa Ben Ahmed, Maxim Mizotin, Jenny Benois-Pineau, Michèle Allard, Gwénaëlle Catheline, Chokri Ben Amar
Recently, several pattern recognition methods have been proposed to automatically discriminate between patients with and without Alzheimer's disease using different imaging modalities: sMRI, fMRI, PET and SPECT. Classical approaches in visual information retrieval have been successfully used for analysis of structural MRI brain images. In this paper, we use the visual indexing framework and pattern recognition analysis based on structural MRI data to discriminate three classes of subjects: normal controls (NC), mild cognitive impairment (MCI) and Alzheimer's disease (AD)...
September 2015: Computerized Medical Imaging and Graphics: the Official Journal of the Computerized Medical Imaging Society
Degao Li, Kejuan Gao, Xueyun Wu, Ying Xong, Xiaojun Chen, Weiwei He, Ling Li, Jingjia Huang
Two experiments investigated Chinese deaf and hard of hearing (DHH) adolescents' recognition of category names in an innovative task of semantic categorization. In each trial, the category-name target appeared briefly at the screen center followed by two words or two pictures for two basic-level exemplars of high or middle typicality, which appeared briefly approximately where the target had appeared. Participants' reaction times when they were deciding whether the target referred to living or nonliving things consistently revealed the typicality effect for the word, but a reversed-typicality effect for picture-presented exemplars...
2015: American Annals of the Deaf