https://read.qxmd.com/read/35009612/lipreading-architecture-based-on-multiple-convolutional-neural-networks-for-sentence-level-visual-speech-recognition
#21
JOURNAL ARTICLE
Sanghun Jeon, Ahmed Elsharkawy, Mun Sang Kim
In visual speech recognition (VSR), speech is transcribed using only visual information to interpret tongue and teeth movements. Recently, deep learning has shown outstanding performance in VSR, with accuracy exceeding that of lipreaders on benchmark datasets. However, several problems still exist when using VSR systems. A major challenge is the distinction of words with similar pronunciation, called homophones; these lead to word ambiguity. Another technical limitation of traditional VSR systems is that visual information does not provide sufficient data for learning words such as "a", "an", "eight", and "bin" because their lengths are shorter than 0...
December 23, 2021: Sensors
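The entry above concerns sentence-level visual speech recognition built from multiple convolutional networks, but the truncated abstract gives no architectural detail. As a heavily hedged illustration of the kind of spatiotemporal front-end such systems typically use (not the authors' model; the layer sizes, character-set size, and 25-frame clip length are all assumptions), a minimal PyTorch sketch:

```python
# Hypothetical sketch of a spatiotemporal lipreading front-end (NOT the
# architecture from the paper above): 3D convolutions over a lip-region
# video clip, a recurrent layer, and a per-frame character head.
import torch
import torch.nn as nn

class LipReaderSketch(nn.Module):
    def __init__(self, num_chars=28):               # characters + blank, an assumption
        super().__init__()
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),     # pool space, keep time
            nn.Conv3d(32, 64, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),      # collapse space to 4x4 per frame
        )
        self.rnn = nn.GRU(64 * 4 * 4, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(256, num_chars)        # per-frame logits

    def forward(self, clip):                         # clip: (B, 1, T, H, W)
        feats = self.frontend(clip)                  # (B, 64, T, 4, 4)
        b, c, t, h, w = feats.shape
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        out, _ = self.rnn(feats)                     # (B, T, 256)
        return self.head(out)                        # (B, T, num_chars)

logits = LipReaderSketch()(torch.randn(2, 1, 25, 64, 96))   # 25 grayscale lip frames
```

A real sentence-level system would train such a front-end with a sequence loss over transcripts; the forward pass here only shows the tensor shapes involved.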
https://read.qxmd.com/read/34999410/looking-for-low-vision-predicting-visual-prognosis-by-fusing-structured-and-free-text-data-from-electronic-health-records
#22
JOURNAL ARTICLE
Haiwen Gui, Benjamin Tseng, Wendeng Hu, Sophia Y Wang
INTRODUCTION: Low vision rehabilitation improves quality of life for visually impaired patients, but referral rates fall short of national guidelines. Automatically identifying, from electronic health records (EHR), patients with poor visual prognosis could allow targeted referrals to low vision services. The purpose of this study was to build and evaluate deep learning models that integrate structured and free-text EHR data to predict visual prognosis. METHODS: We identified 5547 patients with low vision (defined as best documented visual acuity (VA) less than 20/40) on ≥ 1 encounter in the EHR from 2009 to 2018, with ≥ 1 year of follow-up from the earliest date of low vision, who did not improve to better than 20/40 over 1 year...
December 30, 2021: International Journal of Medical Informatics
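The study above fuses structured and free-text EHR data to predict visual prognosis; the truncated abstract does not specify the model. A minimal, hypothetical sketch of that fusion idea using scikit-learn (not the authors' pipeline; the note texts, covariate choices, and label coding are invented for illustration):

```python
# Hypothetical fusion of structured and free-text EHR features (NOT the
# authors' model): TF-IDF over notes + numeric covariates -> one classifier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

notes = ["decreased central vision, macular atrophy noted",
         "stable exam, acuity improved after cataract surgery"]
structured = np.array([[0.1, 72], [0.8, 65]])    # e.g. baseline VA (logMAR), age
labels = np.array([1, 0])                        # 1 = poor visual prognosis (assumed coding)

text_feats = TfidfVectorizer().fit_transform(notes)      # sparse (n_patients, vocab)
fused = hstack([text_feats, csr_matrix(structured)])     # concatenate feature blocks
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.predict(fused))
```

The design point is simply that sparse text features and dense structured covariates can be concatenated into a single feature matrix before any classifier, whether linear or deep, is applied.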
https://read.qxmd.com/read/34933038/visual-event-related-potentials-reveal-the-early-lexical-processing-of-chinese-characters
#23
JOURNAL ARTICLE
Ruifeng Yu, Jingyu Chen, Yang Peng, Feng Gu
Logographic scripts such as Chinese differ markedly from alphabetic scripts. The time course of the lexical processing of alphabetic words has been widely studied by recording event-related potentials (ERPs), and the results indicate that alphabetic words are processed rapidly and automatically. This study investigated whether there is also rapid and automatic lexical processing of Chinese characters by recording ERPs. High-frequency (HF) characters and orthographically similar low-frequency (LF) characters were pseudo-randomly presented to proficient Chinese readers...
January 28, 2022: Neuropsychologia
https://read.qxmd.com/read/34770332/a-hybrid-speech-enhancement-algorithm-for-voice-assistance-application
#24
JOURNAL ARTICLE
Jenifa Gnanamanickam, Yuvaraj Natarajan, Sri Preethaa K R
In recent years, speech recognition technology has become increasingly common. Speech quality and intelligibility are critical for the convenience and accuracy of information transmission in speech recognition. The speech processing systems used to converse or store speech are usually designed for environments without background noise. However, in real-world conditions, background interference in the form of background noise and channel noise drastically reduces the performance of speech recognition systems, resulting in imprecise information transfer and exhausting the listener...
October 23, 2021: Sensors
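The paper proposes a hybrid speech-enhancement algorithm whose components are not visible in the truncated abstract. As a generic illustration of the noise-suppression step such front-ends build on, here is a minimal spectral-subtraction sketch (a textbook baseline, not the authors' method; the number of frames used for the noise estimate and the spectral floor are assumptions):

```python
# Minimal spectral-subtraction sketch (a classic baseline, NOT the hybrid
# algorithm from the paper): estimate the noise spectrum from the first few
# frames and subtract it from the magnitude spectrogram.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, fs, noise_frames=10):
    f, t, spec = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(spec), np.angle(spec)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)  # assumed noise-only lead-in
    clean_mag = np.maximum(mag - noise_mag, 0.05 * mag)            # keep a small spectral floor
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced

fs = 16000
noisy = np.random.randn(fs)              # stand-in for a 1 s noisy recording
enhanced = spectral_subtract(noisy, fs)
```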
https://read.qxmd.com/read/34744186/big-data-directed-acyclic-graph-model-for-real-time-covid-19-twitter-stream-detection
#25
JOURNAL ARTICLE
Bakhtiar Amen, Syahirul Faiz, Thanh-Toan Do
Every day, large-scale data are continuously generated on social media as streams, such as on Twitter, informing us about events around the world in real time. Notably, Twitter has been one of the most effective platforms for updating country leaders and scientists during the coronavirus (COVID-19) pandemic. People have also used the platform to post their concerns about the spread of the virus and the rapid global increase in deaths. The aim of this work is to detect anomalous events associated with COVID-19 from Twitter...
March 2022: Pattern Recognition
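The work above detects anomalous COVID-19 events in the Twitter stream with a big-data directed-acyclic-graph model; no implementation detail survives in the truncated abstract. As a hedged illustration of the underlying detection idea only (not the authors' DAG pipeline; the window length and threshold are arbitrary), a rolling z-score over per-window tweet counts flags sudden bursts:

```python
# Minimal streaming anomaly sketch (NOT the paper's DAG model): flag a time
# window whose tweet count deviates strongly from a rolling mean.
from collections import deque

def burst_detector(counts, window=24, threshold=3.0):
    history = deque(maxlen=window)
    flags = []
    for c in counts:
        if len(history) >= 5:                        # wait for some history first
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = var ** 0.5 or 1.0                  # avoid division by zero
            flags.append(abs(c - mean) / std > threshold)
        else:
            flags.append(False)
        history.append(c)
    return flags

hourly_tweet_counts = [120, 130, 125, 118, 122, 127, 950, 131]   # toy data
print(burst_detector(hourly_tweet_counts))           # the 950-count burst is flagged
```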
https://read.qxmd.com/read/34721182/erp-correlates-of-altered-orthographic-phonological-processing-in-dyslexia
#26
JOURNAL ARTICLE
Vera Varga, Dénes Tóth, Kathleen Kay Amora, Dávid Czikora, Valéria Csépe
Automatic visual word recognition requires not only well-established phonological and orthographic representations but also efficient audio-visual integration of these representations. One possibility is that in developmental dyslexia, inefficient orthographic processing might underlie poor reading. Alternatively, the reading deficit could be due to inefficient phonological processing or inefficient integration of orthographic and phonological information. In this event-related potential study, participants with dyslexia (N = 25) and control readers (N = 27) were presented with pairs of words and pseudowords in an implicit same-different task...
2021: Frontiers in Psychology
https://read.qxmd.com/read/34690855/computer-vision-system-for-expressing-texture-using-sound-symbolic-words
#27
JOURNAL ARTICLE
Koichi Yamagata, Jinhwan Kwon, Takuya Kawashima, Wataru Shimoda, Maki Sakamoto
The major goals of texture research in computer vision are to understand, model, and process texture and, ultimately, to simulate human visual information processing with computer technologies. The field has witnessed remarkable advances in material recognition using deep convolutional neural networks (DCNNs), which have enabled applications such as self-driving cars, facial and gesture recognition, and automatic number plate recognition. However, it remains difficult for computer vision systems to "express" texture the way humans do, because texture description has no correct or incorrect answer and is inherently ambiguous...
2021: Frontiers in Psychology
https://read.qxmd.com/read/34430795/how-task-set-and-task-switching-modulate-perceptual-processes-is-recognition-of-facial-emotion-an-exception
#28
JOURNAL ARTICLE
Heike Elchlepp, Stephen Monsell, Aureliu Lavric
In Part 1 we review task-switching and other studies showing that, even with time for preparation, participants' ability to shift attention to a relevant attribute or object before the stimulus onset is limited: there is a 'residual cost'. In particular, several brain potential markers of perceptual encoding are delayed on task-switch trials, compared to task-repeat trials that require attention to the same attribute as before. Such effects have been documented even for a process often considered 'automatic' - visual word recognition: ERP markers of word frequency and word/nonword status are (1) delayed when the word recognition task follows a judgement of a perceptual property compared to repeating the lexical task, and (2) strongly attenuated during the perceptual judgements...
2021: Journal of Cognition
https://read.qxmd.com/read/34330269/study-on-structured-method-of-chinese-mri-report-of-nasopharyngeal-carcinoma
#29
JOURNAL ARTICLE
Xin Huang, Hui Chen, Jing-Dong Yan
BACKGROUND: Image text is an important type of text data in the medical field, as it can assist clinicians in making a diagnosis. However, due to the diversity of language, most descriptions in image text are unstructured data. The same medical phenomenon may also be described in various ways, so it remains challenging to conduct text structure analysis. The aim of this research is to develop a feasible approach that can automatically convert nasopharyngeal cancer reports into structured text and build a knowledge network...
July 30, 2021: BMC Medical Informatics and Decision Making
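The study above converts free-text nasopharyngeal carcinoma MRI reports into structured text; its actual method targets Chinese reports and is not described in the truncated abstract. A deliberately simple, hypothetical rule-based sketch of the structuring idea (the report text, field names, and regular expressions are all invented for illustration):

```python
# Hypothetical rule-based structuring sketch (NOT the paper's method, which
# targets Chinese MRI reports): pull a few assumed fields out of free text
# with regular expressions and emit a structured record.
import re

report = ("Nasopharyngeal mass measuring 3.2 x 2.8 cm invading the left "
          "parapharyngeal space; no skull base erosion.")

structured = {
    "size_cm": re.search(r"(\d+(?:\.\d+)?)\s*x\s*(\d+(?:\.\d+)?)\s*cm", report).groups(),
    "parapharyngeal_invasion": bool(re.search(r"invading the \w+ parapharyngeal", report)),
    "skull_base_erosion": not re.search(r"no skull base erosion", report),
}
print(structured)   # {'size_cm': ('3.2', '2.8'), 'parapharyngeal_invasion': True, ...}
```

In practice such records would then be linked into the knowledge network the abstract mentions; machine-learned extraction would replace the hand-written rules.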
https://read.qxmd.com/read/33929963/speech-vision-an-end-to-end-deep-learning-based-dysarthric-automatic-speech-recognition-system
#30
JOURNAL ARTICLE
Seyed Reza Shahamiri
Dysarthria is a disorder that affects an individual's speech intelligibility due to the paralysis of muscles and organs involved in the articulation process. As the condition is often associated with physically debilitating disabilities, not only do such individuals face communication problems, but also interactions with digital devices can become a burden. For these individuals, automatic speech recognition (ASR) technologies can make a significant difference in their lives as computing and portable digital devices can become an interaction medium, enabling them to communicate with others and computers...
April 30, 2021: IEEE Transactions on Neural Systems and Rehabilitation Engineering
https://read.qxmd.com/read/33829839/as-time-goes-by-space-time-compatibility-effects-in-word-recognition
#31
JOURNAL ARTICLE
Camille L Grasso, Johannes C Ziegler, Jonathan Mirault, Jennifer T Coull, Marie Montant
The processing of time activates a spatial left-to-right mental timeline, where past events are "located" to the left and future events to the right. If past and future words activate this mental timeline, then the processing of such words should interfere with hand movements that go in the opposite direction. To test this hypothesis, we conducted 3 visual lexical decision tasks with conjugated (past/future) verbs and pseudoverbs. In Experiment 1, participants moved a pen to the right or left of a trackpad to indicate whether a visual stimulus was a real word or not...
April 8, 2021: Journal of Experimental Psychology. Learning, Memory, and Cognition
https://read.qxmd.com/read/33721262/can-rotated-words-be-processed-automatically-evidence-from-rotated-repetition-priming
#32
JOURNAL ARTICLE
András Benyhe, Péter Csibri
Visual word processing has its own dedicated neural system that, due to the novelty of this activity, is unlikely to have acquired its specialization through natural selection. Understanding the properties of this system could shed light on its recruitment and the background of its disorders. Although recognition of simple visual objects is orientation invariant, this is not necessarily the case for written words. We used a masked repetition priming paradigm to find out whether words retain their readability when viewed in atypical orientations...
March 15, 2021: Memory & Cognition
https://read.qxmd.com/read/33634233/morpheme-position-coding-in-reading-development-as-explored-with-a-letter-search-task
#33
JOURNAL ARTICLE
Jana Hasenäcker, Maria Ktori, Davide Crepaldi
Suffixes have been shown to be recognized as units of processing in visual word recognition and their identification has been argued to be position-specific in skilled adult readers: in lexical decision tasks suffixes are automatically identified at word endings, but not at word beginnings. The present study set out to investigate whether position-specific coding can be detected with a letter search task and whether children already code suffixes as position-specific units. A preregistered experiment was conducted in Italian in which 3rd-graders, 5th-graders, and adults had to detect a target letter that was either contained in the suffix of a pseudoword (e...
February 17, 2021: Journal of Cognition
https://read.qxmd.com/read/33347053/non-auditory-functions-in-low-performing-adult-cochlear-implant-users
#34
JOURNAL ARTICLE
Christiane Völter, Kirsten Oberländer, Rebecca Carroll, Stefan Dazert, Benjamin Lentz, Rainer Martin, Jan Peter Thomas
INTRODUCTION: Despite substantial benefits of cochlear implantation (CI) there is a high variability in speech recognition, the reasons for which are not fully understood. Especially the group of low-performing CI users is under-researched. Because of limited perceptual quality, top-down mechanisms play an important role in decoding the speech signal transmitted by the CI. Thereby, differences in cognitive functioning and linguistic skills may explain speech outcome in these CI subjects...
December 17, 2020: Otology & Neurotology
https://read.qxmd.com/read/32712818/activation-time-course-of-phonological-code-in-silent-word-recognition-in-adult-readers-with-and-without-dyslexia
#35
JOURNAL ARTICLE
Ambre Denis-Noël, Chotiga Pattamadilok, Éric Castet, Pascale Colé
In skilled adult readers, reading words is generally assumed to rapidly and automatically activate the phonological code. In adults with dyslexia, despite the main consensus on their phonological processing deficits, little is known about the activation time course of this code. The present study investigated this issue in both populations. Participants' accuracy and eye movements were recorded while they performed a visual lexical decision task in which phonological consistency of written words was manipulated...
July 25, 2020: Annals of Dyslexia
https://read.qxmd.com/read/32494908/orthographic-and-phonological-contributions-to-flanker-effects
#36
JOURNAL ARTICLE
Christophe Cauchi, Bernard Lété, Jonathan Grainger
Does phonology contribute to effects of orthographically related flankers in the flankers task? In order to answer this question, we implemented the flanker equivalent of a pseudohomophone priming manipulation that has been widely used to demonstrate automatic phonological processing during visual word recognition. In Experiment 1, central target words were flanked on each side by either a pseudohomophone of the target (e.g., roze rose roze), an orthographic control pseudoword (rone rose rone), or an unrelated pseudoword (mirt rose mirt)...
June 3, 2020: Attention, Perception & Psychophysics
https://read.qxmd.com/read/32283816/the-combination-of-adaptive-convolutional-neural-network-and-bag-of-visual-words-in-automatic-diagnosis-of-third-molar-complications-on-dental-x-ray-images
#37
JOURNAL ARTICLE
Vo Truong Nhu Ngoc, Agwu Chinedu Agwu, Le Hoang Son, Tran Manh Tuan, Cu Nguyen Giap, Mai Thi Giang Thanh, Hoang Bao Duy, Tran Thi Ngan
In dental diagnosis, recognizing tooth complications quickly from radiology (e.g., X-rays) requires highly experienced medical professionals. With object detection models and algorithms, this work becomes much easier and allows less experienced medical practitioners to resolve their doubts while diagnosing a case. In this paper, we propose a dental defect recognition model based on the integration of an Adaptive Convolutional Neural Network and a Bag of Visual Words (BoVW). In this model, the BoVW is used to save the features extracted from images...
April 9, 2020: Diagnostics
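The model above integrates an Adaptive Convolutional Neural Network with a Bag of Visual Words that stores image features. As a hedged sketch of the generic BoVW encoding step only (not the authors' adaptive CNN pipeline; the ORB descriptors, vocabulary size, and random stand-in images are assumptions):

```python
# Generic Bag-of-Visual-Words encoding sketch (NOT the paper's adaptive CNN
# pipeline): ORB keypoint descriptors -> k-means vocabulary -> per-image
# histogram of visual-word counts.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(gray_images, vocab_size=64):
    orb = cv2.ORB_create()
    per_image = [orb.detectAndCompute(img, None)[1] for img in gray_images]
    all_desc = np.vstack([d for d in per_image if d is not None]).astype(np.float32)
    vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(all_desc)
    hists = []
    for desc in per_image:
        if desc is None:                              # image with no detectable keypoints
            hists.append(np.zeros(vocab_size, dtype=int))
            continue
        words = vocab.predict(desc.astype(np.float32))
        hists.append(np.bincount(words, minlength=vocab_size))
    return np.array(hists)                            # one fixed-length vector per X-ray

# toy stand-ins for grayscale X-ray crops; real images would come from cv2.imread
images = [np.random.randint(0, 255, (256, 256), dtype=np.uint8) for _ in range(4)]
print(bovw_histograms(images).shape)                  # (4, 64)
```

The resulting fixed-length histograms are what a downstream classifier (or, in the paper, the combined CNN/BoVW model) would consume.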
https://read.qxmd.com/read/32234513/word-processing-deficits-in-children-with-isolated-and-combined-reading-and-spelling-deficits-an-erp-study
#38
JOURNAL ARTICLE
Heike Mehlhase, Sarolta Bakos, Jürgen Bartling, Gerd Schulte-Körne, Kristina Moll
Dissociations between reading and spelling deficits are likely to be associated with distinct deficits in orthographic word processing. To specify differences in automatic visual word recognition, the current ERP-study compared children with isolated reading fluency deficits (iRD), isolated spelling deficits (iSD), and combined reading fluency and spelling deficits (cRSD) as well as typically developing (TD) 10-year-olds while performing a variant of the Reicher-Wheeler paradigm: children had to indicate which of two letters occurred at a given position in a previously presented word, legal pseudoword, illegal pseudoword or nonword...
July 1, 2020: Brain Research
https://read.qxmd.com/read/32103890/children-with-dyslexia-have-altered-cross-modal-processing-linked-to-binocular-fusion-a-pilot-study
#39
JOURNAL ARTICLE
Patrick Quercia, Thierry Pozzo, Alfredo Marino, Anne Laure Guillemant, Céline Cappe, Nicolas Gueugneau
Introduction: The cause of dyslexia, a reading disability characterized by difficulties with accurate and/or fluent word recognition and by poor spelling and decoding abilities, is unknown. A considerable body of evidence shows that dyslexics have phonological disorders. Other studies support a theory of altered cross-modal processing, with the existence of a pan-sensory temporal processing deficit associated with dyslexia. Learning to read ultimately relies on the formation of automatic multisensory representations of sounds and their written representation while the eyes fixate on a word or move along a text...
2020: Clinical Ophthalmology
https://read.qxmd.com/read/31947458/creating-visual-vocabularies-for-the-retrieval-and-classification-of-histopathology-images
#40
JOURNAL ARTICLE
Athanasios Kallipolitis, Ilias Maglogiannis
State-of-the-art technologies in the fields of computer vision and machine learning have enabled the automatic recognition of malignant structures in histopathology images. More often than not, such structures are reported to be found in glands, where different morphological characteristics indicate the existence of a variety of adenocarcinomas, including prostate, breast, lung and colon cancer. Classification of images containing glandular representations in different cancer types can be performed on the whole image by utilizing a combination of local and global features...
July 2019: Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
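The entry above classifies glandular histopathology images using a combination of local and global features. A hedged, generic sketch of that feature-fusion idea (not the authors' visual-vocabulary method; the tile statistics, histogram bins, and random stand-in images are invented for illustration):

```python
# Hypothetical local + global feature fusion for image classification (NOT the
# authors' visual-vocabulary method): per-tile intensity means as "local"
# features, a whole-image histogram as the "global" feature, an SVM on top.
import numpy as np
from sklearn.svm import SVC

def describe(img, tiles=4, bins=16):
    h, w = img.shape
    local = [img[i * h // tiles:(i + 1) * h // tiles,
                 j * w // tiles:(j + 1) * w // tiles].mean()
             for i in range(tiles) for j in range(tiles)]
    global_hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return np.concatenate([local, global_hist])       # fused feature vector

# toy stand-ins for grayscale histology crops with assumed benign/malignant labels
images = [np.random.randint(0, 255, (128, 128)) for _ in range(20)]
labels = np.random.randint(0, 2, 20)
X = np.stack([describe(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))                           # training accuracy on toy data
```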