Read by QxMD search results: "Automatic visual word recognition"

https://www.readbyqxmd.com/read/27905070/visual-speech-influences-speech-perception-immediately-but-not-automatically
#1. Visual speech influences speech perception immediately but not automatically
Holger Mitterer, Eva Reinisch
Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker...
November 30, 2016: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/27835754/automatic-phonological-activation-during-visual-word-recognition-in-bilingual-children-a-cross-language-masked-priming-study-in-grades-3-and-5
#2. Automatic phonological activation during visual word recognition in bilingual children: A cross-language masked priming study in Grades 3 and 5
Karinne Sauval, Laetitia Perre, Lynne G Duncan, Eva Marinus, Séverine Casalis
Previous masked priming research has shown automatic phonological activation during visual word recognition in monolingual skilled adult readers. Activation also occurs across languages in bilingual adult readers, suggesting that the activation of phonological representations is not language specific. Less is known about developing readers. First, it is unclear whether there is automatic phonological activation during visual word recognition among children in general. Second, no empirical data exist on whether the activation of phonological representations is language specific or not in bilingual children...
November 8, 2016: Journal of Experimental Child Psychology
https://www.readbyqxmd.com/read/27775536/automatic-estimation-of-multidimensional-ratings-from-a-single-sound-symbolic-word-and-word-based-visualization-of-tactile-perceptual-space
#3. Automatic estimation of multidimensional ratings from a single sound-symbolic word and word-based visualization of tactile perceptual space
Ryuichi Doizaki, Junji Watanabe, Maki Sakamoto
Several pairs of Japanese adjective words pertaining to a material's properties, such as roughness and hardness, have been used in Japanese studies to quantitatively evaluate variations in tactile sensations. This method asks observers to analyze their perceptual experiences one by one. An alternative notion is that human perceptual recognition is performed as a whole rather than by using fragmented factors. Based on this notion, we propose a system that can automatically estimate multidimensional ratings of touch from a single sound-symbolic word that has been spontaneously and intuitively expressed by a user...
October 18, 2016: IEEE Transactions on Haptics
https://www.readbyqxmd.com/read/27562102/eye-tracking-the-time-course-of-novel-word-learning-and-lexical-competition-in-adults-and-children
#4. Eye-tracking the time course of novel word learning and lexical competition in adults and children
A R Weighall, L M Henderson, D J Barr, S A Cairney, M G Gaskell
Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e...
August 22, 2016: Brain and Language
https://www.readbyqxmd.com/read/27070607/building-an-enhanced-vocabulary-of-the-robot-environment-with-a-ceiling-pointing-camera
#5. Building an enhanced vocabulary of the robot environment with a ceiling pointing camera
Alejandro Rituerto, Henrik Andreasson, Ana C Murillo, Achim Lilienthal, José Jesús Guerrero
Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision-based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words...
2016: Sensors
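Several entries in this list (#5, #8, #13) build on the same Bag-of-Words idea: cluster local image descriptors into a vocabulary of "visual words", then describe an image by its histogram over those words. A minimal sketch, assuming plain k-means over precomputed descriptors (the paper's actual contribution, adding semantic criteria to word creation, is not reproduced here; all names and sizes are illustrative):

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Toy k-means clustering of local descriptors into k visual words.

    Plain k-means is the standard (non-semantic) baseline; `k`, `iters`,
    and the initialization scheme are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center (visual word).
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize one image's descriptors and return its normalized word histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what downstream classifiers consume, regardless of how many raw descriptors each image produced.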
https://www.readbyqxmd.com/read/26641447/is-virtu4l-larger-than-vir7ual-automatic-processing-of-number-quantity-and-lexical-representations-in-leet-words
#6. Is VIRTU4L larger than VIR7UAL? Automatic processing of number quantity and lexical representations in leet words
Javier García-Orza, Montserrat Comesaña, Ana Piñeiro, Ana Paula Soares, Manuel Perea
Recent research has shown that leet words (i.e., words in which some of the letters are replaced by visually similar digits; e.g., VIRTU4L) can be processed as their base words without much cost. However, it remains unclear whether the digits inserted in leet words are simply processed as letters or whether they are simultaneously processed as numbers (i.e., in terms of access to their quantity representation). To address this question, we conducted 2 experiments that examined the size congruity effect (i.e...
June 2016: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/26427062/fingerspelling-as-a-novel-gateway-into-reading-fluency-in-deaf-bilinguals
#7. Fingerspelling as a novel gateway into reading fluency in deaf bilinguals
Adam Stone, Geo Kartheiser, Peter C Hauser, Laura-Ann Petitto, Thomas E Allen
Studies have shown that American Sign Language (ASL) fluency has a positive impact on deaf individuals' English reading, but the cognitive and cross-linguistic mechanisms permitting the mapping of a visual-manual language onto a sound-based language have yet to be elucidated. Fingerspelling, which represents English orthography with 26 distinct hand configurations, is an integral part of ASL and has been suggested to provide deaf bilinguals with important cross-linguistic links between sign language and orthography...
2015: PloS One
https://www.readbyqxmd.com/read/26386547/mid-level-image-representations-for-real-time-heart-view-plane-classification-of-echocardiograms
#8. Mid-level image representations for real-time heart view plane classification of echocardiograms
REVIEW
Otávio A B Penatti, Rafael de O Werneck, Waldir R de Almeida, Bernardo V Stein, Daniel V Pazinato, Pedro R Mendes Júnior, Ricardo da S Torres, Anderson Rocha
In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is image sampling with large regions, which drastically reduces the execution time of the image characterization procedure. In an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes...
November 1, 2015: Computers in Biology and Medicine
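The efficiency claim here (sampling with large regions drastically reduces characterization time) comes down to how many patches a dense grid yields. A toy count, with illustrative image and patch sizes rather than the authors' settings:

```python
def dense_grid_patches(width, height, patch, stride=None):
    """Count the top-left positions of a dense patch grid over an image.

    With stride tied to patch size (non-overlapping grid), larger patches
    mean far fewer regions to describe, hence a faster characterization
    step. All parameter values are illustrative.
    """
    stride = stride or patch  # non-overlapping by default
    nx = (width - patch) // stride + 1
    ny = (height - patch) // stride + 1
    return nx * ny

small = dense_grid_patches(640, 480, patch=16)  # many small regions
large = dense_grid_patches(640, 480, patch=96)  # few large regions
```

For a 640x480 frame this is 1200 small patches versus 30 large ones, a 40x reduction in descriptor computations before any classifier runs.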
https://www.readbyqxmd.com/read/26352449/actions-in-the-eye-dynamic-gaze-datasets-and-learnt-saliency-models-for-visual-recognition
#9. Actions in the eye: Dynamic gaze datasets and learnt saliency models for visual recognition
Stefan Mathe, Cristian Sminchisescu
Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both visual object and action recognition tasks in computer vision. While the sparse, interest-point-based approach to recognition is not inconsistent with visual processing in biological systems that operate in `saccade and fixate' regimes, the methodology and emphasis in the human and computer vision communities remain sharply distinct. Here, we make three contributions aiming to bridge this gap...
July 2015: IEEE Transactions on Pattern Analysis and Machine Intelligence
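Gaze datasets like these are commonly turned into empirical saliency maps by accumulating fixation points on a grid and smoothing. A generic sketch of that step (grid size, sigma, and the normalization are illustrative choices, not the paper's protocol):

```python
import numpy as np

def fixation_density(fixations, shape, sigma=10.0):
    """Empirical saliency map from gaze data.

    Accumulates (x, y) fixation points on an h-by-w grid and smooths with
    an isotropic Gaussian (sigma in pixels), then rescales to a max of 1.
    """
    h, w = shape
    grid = np.zeros((h, w))
    for x, y in fixations:
        grid[int(y), int(x)] += 1.0
    # Separable Gaussian smoothing via two passes of 1-D convolution.
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smoothed = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, grid)
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, smoothed)
    return smoothed / smoothed.max()
```

Maps like this serve either as prediction targets for learnt saliency models or as spatial weights when pooling features for recognition.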
https://www.readbyqxmd.com/read/26332785/electrostimulation-mapping-of-comprehension-of-auditory-and-visual-words
#10. Electrostimulation mapping of comprehension of auditory and visual words
Franck-Emmanuel Roux, Krasimir Miskin, Jean-Baptiste Durand, Oumar Sacko, Emilie Réhault, Rositsa Tanova, Jean-François Démonet
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks...
October 2015: Cortex; a Journal Devoted to the Study of the Nervous System and Behavior
https://www.readbyqxmd.com/read/26326059/convolutional-neural-networks-in-the-brain-an-fmri-study
#11. Convolutional neural networks in the brain: an fMRI study
Kandan Ramakrishnan, Steven Scholte, Victor Lamme, Arnold Smeulders, Sennay Ghebreab
Biologically inspired computational models replicate the hierarchical visual processing of the human ventral stream. One such recent model, the Convolutional Neural Network (CNN), has achieved state-of-the-art performance on automatic visual recognition tasks. The CNN architecture contains successive layers of convolution and pooling, and resembles the simple- and complex-cell hierarchy proposed by Hubel and Wiesel. This makes it a candidate model to test against the human brain. In this study we examine 1) where in the brain different layers of the CNN account for brain responses, and 2) how the CNN compares against existing and widely used hierarchical vision models such as Bag-of-Words (BoW) and HMAX...
2015: Journal of Vision
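The convolution-and-pooling stage the abstract describes can be sketched minimally in NumPy. The filter, the ReLU nonlinearity, and the sizes below are illustrative stand-ins, not the network evaluated in the study:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature, size=2):
    """Non-overlapping max pooling; halves spatial resolution for size=2."""
    h, w = feature.shape
    h, w = h - h % size, w - w % size
    f = feature[:h, :w].reshape(h // size, size, w // size, size)
    return f.max(axis=(1, 3))

# One convolution + pooling stage, analogous to a simple/complex cell pair.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # toy oriented-edge filter
fmap = np.maximum(conv2d(img, edge), 0.0)    # ReLU nonlinearity
pooled = max_pool(fmap)
```

Stacking several such stages is what produces the layer hierarchy whose responses the study compares against fMRI data.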
https://www.readbyqxmd.com/read/26235228/eyes-on-words-a-fixation-related-fmri-study-of-the-left-occipito-temporal-cortex-during-self-paced-silent-reading-of-words-and-pseudowords
#12. Eyes on words: A fixation-related fMRI study of the left occipito-temporal cortex during self-paced silent reading of words and pseudowords
Sarah Schuster, Stefan Hawelka, Fabio Richlan, Philipp Ludersdorfer, Florian Hutzler
The predominant finding of studies assessing the response of the left ventral occipito-temporal cortex (vOT) to familiar words and to unfamiliar, but pronounceable letter strings (pseudowords) is higher activation for pseudowords. One explanation for this finding is that readers automatically generate predictions about a letter string's identity - pseudowords mismatch these predictions and the higher vOT activation is interpreted as reflecting the resultant prediction errors. The majority of studies, however, administered tasks which imposed demands above and beyond the intrinsic requirements of visual word recognition...
August 3, 2015: Scientific Reports
https://www.readbyqxmd.com/read/26225994/dietary-assessment-on-a-mobile-phone-using-image-processing-and-pattern-recognition-techniques-algorithm-design-and-system-prototyping
#13. Dietary assessment on a mobile phone using image processing and pattern recognition techniques: Algorithm design and system prototyping
Yasmine Probst, Duc Thanh Nguyen, Minh Khoi Tran, Wanqing Li
Dietary assessment, while traditionally based on pen and paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features, including the scale-invariant feature transform (SIFT), local binary patterns (LBP), and colour, are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment...
August 2015: Nutrients
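Of the visual features listed, local binary patterns (LBP) are simple enough to sketch: each pixel receives a code whose bits record comparisons with its 8 neighbours. A minimal version for a single 3x3 patch, with a bit ordering chosen here for illustration (implementations differ):

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour local binary pattern code for a 3x3 patch's center.

    Each neighbour contributes one bit: 1 if it is >= the center pixel.
    Neighbours are read clockwise from the top-left corner; this ordering
    is a convention chosen for this sketch.
    """
    assert patch.shape == (3, 3)
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code
```

Sliding this over an image and histogramming the resulting 0-255 codes yields a texture descriptor that is cheap to compute and robust to monotonic lighting changes, which is why it suits phone-based food photos.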
https://www.readbyqxmd.com/read/26069906/alzheimer-s-disease-diagnosis-on-structural-mr-images-using-circular-harmonic-functions-descriptors-on-hippocampus-and-posterior-cingulate-cortex
#14. Alzheimer's disease diagnosis on structural MR images using circular harmonic functions descriptors on hippocampus and posterior cingulate cortex
Olfa Ben Ahmed, Maxim Mizotin, Jenny Benois-Pineau, Michèle Allard, Gwénaëlle Catheline, Chokri Ben Amar
Recently, several pattern recognition methods have been proposed to automatically discriminate between patients with and without Alzheimer's disease using different imaging modalities: sMRI, fMRI, PET and SPECT. Classical approaches in visual information retrieval have been successfully used for analysis of structural MRI brain images. In this paper, we use the visual indexing framework and pattern recognition analysis based on structural MRI data to discriminate three classes of subjects: normal controls (NC), mild cognitive impairment (MCI) and Alzheimer's disease (AD)...
September 2015: Computerized Medical Imaging and Graphics: the Official Journal of the Computerized Medical Imaging Society
https://www.readbyqxmd.com/read/26004975/a-reversed-typicality-effect-in-pictures-but-not-in-written-words-in-deaf-and-hard-of-hearing-adolescents
#15. A reversed typicality effect in pictures but not in written words in deaf and hard of hearing adolescents
Degao Li, Kejuan Gao, Xueyun Wu, Ying Xong, Xiaojun Chen, Weiwei He, Ling Li, Jingjia Huang
Two experiments investigated Chinese deaf and hard of hearing (DHH) adolescents' recognition of category names in an innovative task of semantic categorization. In each trial, the category-name target appeared briefly at the screen center followed by two words or two pictures for two basic-level exemplars of high or middle typicality, which appeared briefly approximately where the target had appeared. Participants' reaction times when they were deciding whether the target referred to living or nonliving things consistently revealed the typicality effect for the word, but a reversed-typicality effect for picture-presented exemplars...
2015: American Annals of the Deaf
https://www.readbyqxmd.com/read/25940105/the-visual-word-form-area-remains-in-the-dominant-hemisphere-for-language-in-late-onset-left-occipital-lobe-epilepsies-a-postsurgery-analysis-of-two-cases
#16. The visual word form area remains in the dominant hemisphere for language in late-onset left occipital lobe epilepsies: A postsurgery analysis of two cases
Ricardo Lopes, Rita Gouveia Nunes, Mário Rodrigues Simões, Mário Forjaz Secca, Alberto Leal
Automatic recognition of words from letter strings is a critical processing step in reading that is lateralized to the left-hemisphere middle fusiform gyrus in the so-called Visual Word Form Area (VWFA). Surgical lesions in this location can lead to irreversible alexia. Very early left hemispheric lesions can lead to transfer of the VWFA to the nondominant hemisphere, but it is currently unknown if this capability is preserved in epilepsies developing after reading acquisition. In this study, we aimed to determine the lateralization of the VWFA in late-onset left inferior occipital lobe epilepsies and also the effect of surgical disconnection from the adjacent secondary visual areas...
May 2015: Epilepsy & Behavior: E&B
https://www.readbyqxmd.com/read/25848683/early-visual-word-processing-is-flexible-evidence-from-spatiotemporal-brain-dynamics
#17. Early visual word processing is flexible: Evidence from spatiotemporal brain dynamics
Yuanyuan Chen, Matthew H Davis, Friedemann Pulvermüller, Olaf Hauk
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision...
September 2015: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/25847670/data-driven-spatio-temporal-rgbd-feature-encoding-for-action-recognition-in-operating-rooms
#18. Data-driven spatio-temporal RGBD feature encoding for action recognition in operating rooms
Andru P Twinanda, Emre O Alkan, Afshin Gangi, Michel de Mathelin, Nicolas Padoy
PURPOSE: Context-aware systems for the operating room (OR) provide the possibility to significantly improve surgical workflow through various applications such as efficient OR scheduling, context-sensitive user interfaces, and automatic transcription of medical procedures. Being an essential element of such a system, surgical action recognition is thus an important research area. In this paper, we tackle the problem of classifying surgical actions from video clips that capture the activities taking place in the OR...
June 2015: International Journal of Computer Assisted Radiology and Surgery
https://www.readbyqxmd.com/read/25761003/seeing-the-same-words-differently-the-time-course-of-automaticity-and-top-down-intention-in-reading
#19. Seeing the same words differently: The time course of automaticity and top-down intention in reading
Kristof Strijkers, Daisy Bertrand, Jonathan Grainger
We investigated how linguistic intention affects the time course of visual word recognition by comparing the brain's electrophysiological response to a word's lexical frequency, a well-established psycholinguistic marker of lexical access, when participants actively retrieve the meaning of the written input (semantic categorization) versus a situation where no language processing is necessary (ink color categorization). In the semantic task, the ERPs elicited by high-frequency words started to diverge from those elicited by low-frequency words as early as 120 msec after stimulus onset...
August 2015: Journal of Cognitive Neuroscience
https://www.readbyqxmd.com/read/25713553/automaticity-revisited-when-print-doesn-t-activate-semantics
#20. Automaticity revisited: When print doesn't activate semantics
Elsa M Labuschagne, Derek Besner
It is widely accepted that the presentation of a printed word "automatically" triggers processing that ends with full semantic activation. This processing, among other characteristics, is held to occur without intention, and cannot be stopped. The results of the present experiment show that this account is problematic in the context of a variant of the Stroop paradigm. Subjects named the print color of words that were either neutral or semantically related to color. When the letters were all colored, all spatially cued, and the spaces between letters were filled with characters from the top of the keyboard (i...
2015: Frontiers in Psychology