Read by QxMD
Automatic visual word recognition

https://www.readbyqxmd.com/read/28987908/early-use-of-phonological-codes-in-deaf-readers-an-erp-study
#1
Eva Gutierrez-Sigut, Marta Vergara-Martínez, Manuel Perea
Previous studies suggest that deaf readers use phonological information of words when it is explicitly demanded by the task itself. However, whether phonological encoding is automatic remains controversial. The present experiment examined whether adult congenitally deaf readers show evidence of automatic use of phonological information during visual word recognition. In an ERP masked priming lexical decision experiment, deaf participants responded to target words preceded by a pseudohomophone (koral - CORAL) or an orthographic control prime (toral - CORAL)...
October 5, 2017: Neuropsychologia
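The dependent measure in such a masked priming design is typically the priming effect: mean reaction time to targets after orthographic-control primes minus mean reaction time after pseudohomophone primes. A minimal sketch with hypothetical data (the numbers are illustrative, not the study's):

```python
import statistics

# Hypothetical reaction times (ms); condition labels follow the
# koral-CORAL (pseudohomophone) vs. toral-CORAL (orthographic control)
# example above. These values are invented for illustration.
rts = {
    "pseudohomophone": [512, 498, 530, 505, 521],
    "orthographic_control": [540, 528, 551, 533, 546],
}

def priming_effect(rts):
    """Phonological priming effect: control mean RT minus pseudohomophone mean RT."""
    return (statistics.mean(rts["orthographic_control"])
            - statistics.mean(rts["pseudohomophone"]))
```

A positive effect (here about 26.4 ms) would indicate faster lexical decisions after phonologically related primes, the signature of automatic phonological encoding.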
https://www.readbyqxmd.com/read/28956945/morphological-effects-in-visual-word-recognition-children-adolescents-and-adults
#2
Nicola Dawson, Kathleen Rastle, Jessie Ricketts
The process by which morphologically complex words are recognized and stored is a matter of ongoing debate. A large body of evidence indicates that complex words are automatically decomposed during visual word recognition in adult readers. Research with developing readers is limited and findings are mixed. This study aimed to investigate morphological decomposition in visual word recognition using cross-sectional data. Participants (33 adults, 36 older adolescents [16 to 17 years], 37 younger adolescents [12 to 13 years], and 50 children [7 to 9 years]) completed a timed lexical-decision task comprising 120 items (60 nonwords and 60 real word fillers)...
September 28, 2017: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/28865283/the-supramarginal-and-angular-gyri-underlie-orthographic-competence-in-spanish-language
#3
Andrés Antonio González-Garrido, Fernando Alejandro Barrios, Fabiola Reveca Gómez-Velázquez, Daniel Zarabozo-Hurtado
Orthographic competence allows automatic word recognition and reading fluency. To elucidate how orthographic competence in Spanish-speaking adults might affect the neurofunctional mechanisms of visual word recognition, 32 young adults equally divided into two groups (HSS: High Spelling Skills, and LSS: Low Spelling Skills) were evaluated using fMRI methods while they performed an orthographic recognition task involving pseudohomophones. HSS achieved significantly more correct responses and shorter reaction times than LSS...
August 30, 2017: Brain and Language
https://www.readbyqxmd.com/read/28840361/visual-word-recognition-and-vowelization-in-arabic-new-evidence-from-lexical-decision-task-performances
#4
Haitham Taha, Hanan Azaizah-Seh
The effect of vowelization signs on the process of visual word recognition in Arabic was investigated among 41 native Arab skilled readers with a mean age of 30.66 ± 9.09 years. The participants performed a lexical decision task using three types of words and pseudowords: fully, partially, and non-vowelized. The results showed that for both words and pseudowords, response times were shorter and accuracy levels were higher for the non-vowelized condition compared to the other conditions of vowelization. The results reinforce the argument that automatic lexical processes during word recognition in Arabic orthography might be disturbed by supplementary information such as vowelization...
November 2017: Cognitive Processing
https://www.readbyqxmd.com/read/28803218/do-you-hear-feather-when-listening-to-rain-lexical-tone-activation-during-unconscious-translation-evidence-from-mandarin-english-bilinguals
#5
Xin Wang, Juan Wang, Jeffrey G Malins
Although lexical tone is a highly prevalent phonetic cue in human languages, its role in bilingual spoken word recognition is not well understood. The present study investigates whether and how adult bilinguals, who use pitch contours to disambiguate lexical items in one language but not the other, access a tonal L1 when exclusively processing a non-tonal L2. Using the visual world paradigm, we show that Mandarin-English listeners automatically activated Mandarin translation equivalents of English target words such as 'rain' (Mandarin 'yu3'), and consequently were distracted by competitors whose segments and tones overlapped with the translations of English target words ('feather', also 'yu3' in Mandarin)...
December 2017: Cognition
https://www.readbyqxmd.com/read/28749365/discriminative-joint-feature-topic-model-with-dual-constraints-for-wce-classification
#6
Yixuan Yuan, Xiwen Yao, Junwei Han, Lei Guo, Max Q-H Meng
Wireless capsule endoscopy (WCE) enables clinicians to examine the digestive tract without any surgical operations, at the cost of a large amount of images to be analyzed. The main challenge for automatic computer-aided diagnosis arises from the difficulty of robust characterization of these images. To tackle this problem, a novel discriminative joint-feature topic model (DJTM) with dual constraints is proposed to classify multiple abnormalities in WCE images. We first propose a joint-feature probabilistic latent semantic analysis (PLSA) model, where color and texture descriptors extracted from the same image patches are jointly modeled with their conditional distributions...
July 25, 2017: IEEE Transactions on Cybernetics
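The PLSA component at the core of the DJTM factors a patch-descriptor count matrix into topic mixtures by EM. A minimal single-modality PLSA sketch in NumPy, assuming a plain document-by-word count matrix (the DJTM's joint color-texture modeling and dual constraints are omitted):

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit plain PLSA by EM on a (n_docs x n_words) count matrix.

    Returns P(word|topic), shape (n_topics, n_words), and
    P(topic|doc), shape (n_docs, n_topics)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior over topics for every (doc, word) pair
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]   # docs x topics x words
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate both distributions from expected counts
        expected = counts[:, None, :] * post            # docs x topics x words
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_z, p_z_d
```

On a count matrix whose rows fall into disjoint word blocks, the recovered topics tend to concentrate on the corresponding blocks, which is the behavior the joint-feature model exploits for abnormality classification.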
https://www.readbyqxmd.com/read/28641121/is-inhibitory-control-involved-in-discriminating-pseudowords-that-contain-the-reversible-letters-b-and-d
#7
Lorie-Marlène Brault Foisy, Emmanuel Ahr, Steve Masson, Olivier Houdé, Grégoire Borst
Children tend to confuse reversible letters such as b and d when they start learning to read. According to some authors, mirror errors are a consequence of the mirror generalization (MG) process that allows one to recognize objects independently of their left-right orientation. Although MG is advantageous for the visual recognition of objects, it is detrimental for the visual recognition of reversible letters. Previous studies comparing novice and expert readers demonstrated that MG must be inhibited to discriminate reversible single letters...
June 19, 2017: Journal of Experimental Child Psychology
https://www.readbyqxmd.com/read/28576569/top-down-modulation-of-early-print-tuned-neural-activity-in-reading
#8
Fang Wang, Urs Maurer
Fast neural tuning to print has been found within the first 250 ms of stimulus processing across different writing systems, indicated by larger N1 negativity in the ERP to words (or characters) compared to control stimuli, such as symbols. However, whether print tuning effects can be modulated by task demands at early stages of visual word recognition is still under debate. To further explore this issue, an ERP study in Chinese was conducted. Familiar, high-frequency, left/right-structured Chinese characters and unfamiliar, stroke number-matched symbols (Korean characters) were used as stimulus conditions...
May 30, 2017: Neuropsychologia
https://www.readbyqxmd.com/read/28190930/effective-beginning-handwriting-instruction-multi-modal-consistent-format-for-2-years-and-linked-to-spelling-and-composing
#9
Beverly Wolf, Robert D Abbott, Virginia W Berninger
In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N = 16 first graders, M = 7 years 1 month, 7 girls) received manuscript handwriting instruction not systematically related to the other literacy activities. ANOVA showed both groups improved on automatic alphabet writing from memory; but ANCOVA with the automatic alphabet writing task as covariate showed that the treatment group improved significantly more than the control group from the second to ninth month of first grade on dictated spelling and recognition of word-specific spellings among phonological foils...
February 2017: Reading and Writing
https://www.readbyqxmd.com/read/28163139/automaticity-of-phonological-and-semantic-processing-during-visual-word-recognition
#10
Chotiga Pattamadilok, Valérie Chanoine, Christophe Pallier, Jean-Luc Anton, Bruno Nazarian, Pascal Belin, Johannes C Ziegler
Reading involves activation of phonological and semantic knowledge. Yet, the automaticity of the activation of these representations remains subject to debate. The present study addressed this issue by examining how different brain areas involved in language processing responded to a manipulation of bottom-up (level of visibility) and top-down information (task demands) applied to written words. The analyses showed that the same brain areas were activated in response to written words whether the task was symbol detection, rime detection, or semantic judgment...
February 3, 2017: NeuroImage
https://www.readbyqxmd.com/read/28113407/automatic-estimation-of-multidimensional-ratings-from-a-single-sound-symbolic-word-and-word-based-visualization-of-tactile-perceptual-space
#11
Ryuichi Doizaki, Junji Watanabe, Maki Sakamoto
Several pairs of Japanese adjective words pertaining to a material's properties, such as roughness and hardness, have been used in Japanese studies to quantitatively evaluate variations in tactile sensations. This method asks observers to analyze their perceptual experiences one by one. An alternative notion is that human perceptual recognition is performed as a whole rather than by using fragmented factors. Based on this notion, we propose a system that can automatically estimate multidimensional ratings of touch from a single sound-symbolic word that has been spontaneously and intuitively expressed by a user...
April 2017: IEEE Transactions on Haptics
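The kind of word-to-ratings mapping such a system needs can be sketched as a linear regression from simple phoneme-class counts to the rating scales. All features, words, and numbers below are invented for illustration; the authors' actual system is far richer:

```python
import numpy as np

# Hypothetical training data: each sound-symbolic word is encoded as a
# bag of phoneme-class counts (columns are illustrative feature classes),
# paired with ratings on two tactile scales (e.g. rough-smooth, hard-soft).
features = np.array([
    [2., 0., 1.],   # e.g. a word like "zara-zara"
    [0., 2., 1.],   # e.g. a word like "sube-sube"
    [1., 1., 0.],
    [2., 1., 0.],
])
ratings = np.array([
    [0.9, 0.4],
    [0.1, 0.2],
    [0.5, 0.6],
    [0.8, 0.7],
])

# Fit a linear map W from phoneme features to the rating dimensions,
# then estimate multidimensional ratings for a new word from its
# features alone -- no per-scale questioning of the user required.
W, *_ = np.linalg.lstsq(features, ratings, rcond=None)
new_word_features = np.array([2., 0., 0.])
estimate = new_word_features @ W
```

The point of the sketch is the interface, not the model: a single spontaneously produced word yields estimates on every rating dimension at once.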
https://www.readbyqxmd.com/read/27905070/visual-speech-influences-speech-perception-immediately-but-not-automatically
#12
Holger Mitterer, Eva Reinisch
Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker...
February 2017: Attention, Perception & Psychophysics
https://www.readbyqxmd.com/read/27835754/automatic-phonological-activation-during-visual-word-recognition-in-bilingual-children-a-cross-language-masked-priming-study-in-grades-3-and-5
#13
Karinne Sauval, Laetitia Perre, Lynne G Duncan, Eva Marinus, Séverine Casalis
Previous masked priming research has shown automatic phonological activation during visual word recognition in monolingual skilled adult readers. Activation also occurs across languages in bilingual adult readers, suggesting that the activation of phonological representations is not language specific. Less is known about developing readers. First, it is unclear whether there is automatic phonological activation during visual word recognition among children in general. Second, no empirical data exist on whether the activation of phonological representations is language specific or not in bilingual children...
February 2017: Journal of Experimental Child Psychology
https://www.readbyqxmd.com/read/27562102/eye-tracking-the-time-course-of-novel-word-learning-and-lexical-competition-in-adults-and-children
#15
A R Weighall, L M Henderson, D J Barr, S A Cairney, M G Gaskell
Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e...
April 2017: Brain and Language
https://www.readbyqxmd.com/read/27070607/building-an-enhanced-vocabulary-of-the-robot-environment-with-a-ceiling-pointing-camera
#16
Alejandro Rituerto, Henrik Andreasson, Ana C Murillo, Achim Lilienthal, José Jesús Guerrero
Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words...
2016: Sensors
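The standard BoW pipeline the authors build on quantizes local descriptors against a learned codebook. A minimal NumPy sketch, assuming generic descriptor vectors (the paper's semantic enhancement of the vocabulary is not shown):

```python
import numpy as np

def build_vocabulary(descriptors, k, n_iter=20, seed=0):
    """Learn a k-word visual vocabulary with a tiny Lloyd's k-means."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assign each descriptor to its nearest codebook center
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize one image's descriptors into a normalized BoW histogram."""
    dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

Each image (or place observation) is then represented by its histogram over visual words, which is what recognition or place-modeling operates on.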
https://www.readbyqxmd.com/read/26641447/is-virtu4l-larger-than-vir7ual-automatic-processing-of-number-quantity-and-lexical-representations-in-leet-words
#17
Javier García-Orza, Montserrat Comesaña, Ana Piñeiro, Ana Paula Soares, Manuel Perea
Recent research has shown that leet words (i.e., words in which some of the letters are replaced by visually similar digits; e.g., VIRTU4L) can be processed as their base words without much cost. However, it remains unclear whether the digits inserted in leet words are simply processed as letters or whether they are simultaneously processed as numbers (i.e., in terms of access to their quantity representation). To address this question, we conducted 2 experiments that examined the size congruity effect (i.e...
June 2016: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://www.readbyqxmd.com/read/26427062/fingerspelling-as-a-novel-gateway-into-reading-fluency-in-deaf-bilinguals
#18
Adam Stone, Geo Kartheiser, Peter C Hauser, Laura-Ann Petitto, Thomas E Allen
Studies have shown that American Sign Language (ASL) fluency has a positive impact on deaf individuals' English reading, but the cognitive and cross-linguistic mechanisms permitting the mapping of a visual-manual language onto a sound-based language have yet to be elucidated. Fingerspelling, which represents English orthography with 26 distinct hand configurations, is an integral part of ASL and has been suggested to provide deaf bilinguals with important cross-linguistic links between sign language and orthography...
2015: PloS One
https://www.readbyqxmd.com/read/26386547/mid-level-image-representations-for-real-time-heart-view-plane-classification-of-echocardiograms
#19
REVIEW
Otávio A B Penatti, Rafael de O Werneck, Waldir R de Almeida, Bernardo V Stein, Daniel V Pazinato, Pedro R Mendes Júnior, Ricardo da S Torres, Anderson Rocha
In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Through an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes...
November 1, 2015: Computers in Biology and Medicine
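The sampling trick the paper highlights, extracting large regions on a coarse grid so far fewer patches need characterizing, can be sketched as follows; patch and stride values are illustrative, not the paper's settings:

```python
import numpy as np

def dense_large_patches(image, patch=64, stride=64):
    """Sample a coarse grid of large regions from an image.

    Fewer, bigger patches mean far fewer descriptor computations per
    image, which is what makes real-time BoW characterization feasible."""
    h, w = image.shape[:2]
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]
```

For a 128x128 frame with 64-pixel patches and stride, this yields only four regions per image, versus hundreds under typical small-patch dense sampling.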
https://www.readbyqxmd.com/read/26352449/actions-in-the-eye-dynamic-gaze-datasets-and-learnt-saliency-models-for-visual-recognition
#20
Stefan Mathe, Cristian Sminchisescu
Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both visual object and action recognition tasks in computer vision. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in `saccade and fixate' regimes, the methodology and emphasis in the human and the computer vision communities remain sharply distinct. Here, we make three contributions aiming to bridge this gap...
July 2015: IEEE Transactions on Pattern Analysis and Machine Intelligence