Read by QxMD

convolutional neural network

Yong Zhang, Meng Joo Er, Rui Zhao, Mahardhika Pratama
Multi-document summarization has gained popularity in many real-world applications because vital information can be extracted within a short time. Extractive summarization aims to generate a summary of a document or a set of documents by ranking sentences, and the ranking results rely heavily on the quality of sentence features. However, almost all previous algorithms require hand-crafted features for sentence representation. In this paper, we leverage word embeddings to represent sentences so as to avoid the intensive labor of feature engineering...
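As a minimal sketch of the embedding-based ranking idea (not the authors' exact model), sentences can be scored by cosine similarity of their averaged word vectors to the document centroid; the toy random embeddings below stand in for pretrained ones such as word2vec:

```python
import numpy as np

# Toy word embeddings (in practice these would be pretrained, e.g. word2vec).
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=8) for w in
         "deep learning extracts vital information from documents quickly".split()}

def sentence_vector(sentence):
    # Represent a sentence as the mean of its word embeddings.
    vecs = [vocab[w] for w in sentence.split() if w in vocab]
    return np.mean(vecs, axis=0)

def rank_sentences(sentences):
    # Score each sentence by cosine similarity to the document centroid.
    svecs = [sentence_vector(s) for s in sentences]
    centroid = np.mean(svecs, axis=0)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cos(v, centroid) for v in svecs]
    return sorted(zip(scores, sentences), reverse=True)

ranked = rank_sentences([
    "deep learning extracts vital information",
    "documents from deep learning",
    "vital information quickly",
])
```

The highest-ranked sentences would then form the extractive summary.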
November 28, 2016: IEEE Transactions on Cybernetics
Nian Liu, Junwei Han, Tianming Liu, Xuelong Li
Eye movements during free viewing of natural scenes are believed to be guided by local contrast, global contrast, and top-down visual factors. Although many previous works have explored these three saliency cues over several years, there is still much room for improvement in how to model and integrate them effectively. This paper proposes a novel computational model to predict eye fixations, which adopts a multiresolution convolutional neural network (Mr-CNN) to infer these three types of saliency cues from raw image data simultaneously...
November 29, 2016: IEEE Transactions on Neural Networks and Learning Systems
Zhenzhen Hu, Yonggang Wen, Jianfeng Wang, Meng Wang, Richang Hong, Shuicheng Yan
Age estimation based on the human face remains a significant problem in computer vision and pattern recognition. To estimate an accurate age or age group from a facial image, most existing algorithms require a huge face data set with age labels. This imposes a constraint on the use of the immense amount of unlabeled or weakly labeled training data, e.g., the huge number of human photos on social networks. These images may carry no age label, but it is easy to derive the age difference for an image pair of the same person...
December 1, 2016: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
Ravi K Samala, Heang-Ping Chan, Lubomir Hadjiiski, Mark A Helvie, Jun Wei, Kenny Cha
PURPOSE: To develop a computer-aided detection (CAD) system for masses in digital breast tomosynthesis (DBT) volumes using a deep convolutional neural network (DCNN) with transfer learning from mammograms. METHODS: A data set containing 2282 digitized film and digital mammograms and 324 DBT volumes was collected with IRB approval. The masses of interest on the images were marked by an experienced breast radiologist as the reference standard. The data set was partitioned into a training set (2282 mammograms with 2461 masses and 230 DBT views with 228 masses) and an independent test set (94 DBT views with 89 masses)...
December 2016: Medical Physics
Yousef Rezaei Tabar, Ugur Halici
OBJECTIVE: Signal classification is an important issue in brain-computer interface (BCI) systems. Deep learning approaches have been used successfully in many recent studies to learn features and classify different types of data. However, the number of studies that employ these approaches in BCI applications is very limited. In this study, we aim to use deep learning methods to improve the classification performance of EEG motor imagery signals. APPROACH: We investigate convolutional neural networks (CNN) and stacked autoencoders (SAE) to classify EEG motor imagery signals...
November 30, 2016: Journal of Neural Engineering
Varun Gulshan, Lily Peng, Marc Coram, Martin C Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, Kasumi Widner, Tom Madams, Jorge Cuadros, Ramasamy Kim, Rajiv Raman, Philip C Nelson, Jessica L Mega, Dale R Webster
Importance: Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. Objective: To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs...
November 29, 2016: JAMA: the Journal of the American Medical Association
Ke Sun, Zhengjie Wang, Kang Tu, Shaojin Wang, Leiqing Pan
To investigate the potential of conventional and deep learning techniques to recognize the species and distribution of mould in unhulled paddy, samples were inoculated and cultivated with five species of mould, and sample images were captured. The mould recognition methods were built using support vector machine (SVM), back-propagation neural network (BPNN), convolutional neural network (CNN), and deep belief network (DBN) models. An accuracy rate of 100% was achieved by using the DBN model to identify the mould species in the sample images based on selected colour-histogram parameters, followed by the SVM and BPNN models...
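The colour-histogram feature step can be illustrated with a toy nearest-centroid classifier (a simple stand-in for the SVM/BPNN/CNN/DBN models compared in the paper); all images, sizes, and "species" below are made up:

```python
import numpy as np

# Reduce each RGB image to a small per-channel histogram, then classify
# with a nearest-centroid rule on those histogram features.
def colour_histogram(img, bins=4):
    # img: H x W x 3 array of 0-255 values; returns a normalized feature vector.
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

rng = np.random.default_rng(2)
dark = rng.integers(0, 80, size=(16, 16, 3))      # toy "species A" sample
light = rng.integers(150, 256, size=(16, 16, 3))  # toy "species B" sample
centroids = {"A": colour_histogram(dark), "B": colour_histogram(light)}

def classify(img):
    # Assign the label of the nearest class centroid in histogram space.
    h = colour_histogram(img)
    return min(centroids, key=lambda k: np.linalg.norm(h - centroids[k]))

pred = classify(rng.integers(0, 80, size=(16, 16, 3)))
```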
November 29, 2016: Scientific Reports
Jack Lanchantin, Ritambhara Singh, Beilun Wang, Yanjun Qi
Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights into why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard), which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification...
2016: Pacific Symposium on Biocomputing
Phillip M Cheng, Harshawn S Malhi
The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set...
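The transfer-learning setup (a frozen pretrained CNN used as a feature extractor, with only a new classifier head trained on the target task) can be sketched with stand-in features; the 11 categories come from the abstract, while the feature dimension, sample counts, and random "features" below are purely synthetic:

```python
import numpy as np

# Transfer-learning sketch: treat the pretrained CNN as a frozen feature
# extractor and train only a new softmax head on the target task.
rng = np.random.default_rng(1)
n, d, k = 200, 64, 11                  # samples, feature dim, image categories
feats = rng.normal(size=(n, d))        # stand-in for frozen CNN features
labels = rng.integers(0, k, size=n)

W = np.zeros((d, k))
for _ in range(300):                   # plain gradient descent on cross-entropy
    logits = feats @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(n), labels] -= 1.0     # softmax cross-entropy gradient
    W -= 0.1 * feats.T @ p / n

train_acc = float(((feats @ W).argmax(1) == labels).mean())
```

In practice the frozen features would come from a network pretrained on a large natural-image corpus rather than random numbers.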
November 28, 2016: Journal of Digital Imaging: the Official Journal of the Society for Computer Applications in Radiology
Karim Lekadir, Alfiia Galimzianova, Angels Betriu, Maria Del Mar Vila, Laura Igual, Daniel Rubin, Elvira Fernandez, Petia Radeva, Sandy Napel
Characterization of carotid plaque composition, more specifically the amount of lipid core, fibrous tissue, and calcified tissue, is an important task for the identification of plaques that are prone to rupture, and thus for early risk estimation of cardiovascular and cerebrovascular events. Due to its low cost and wide availability, carotid ultrasound has the potential to become the modality of choice for plaque characterization in clinical practice. However, its significant image noise, coupled with the small size of the plaques and their complex appearance, makes it difficult for automated techniques to discriminate between the different plaque constituents...
November 22, 2016: IEEE Journal of Biomedical and Health Informatics
Yuma Miki, Chisako Muramatsu, Tatsuro Hayashi, Xiangrong Zhou, Takeshi Hara, Akitoshi Katsumata, Hiroshi Fujita
Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording dental charts for corpses, and it is a physically and mentally laborious task, especially in large-scale disasters. Our goal is to automate the dental filing process using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images...
November 12, 2016: Computers in Biology and Medicine
Xiaohong W Gao, Rui Hui, Zengmin Tian
While computerised tomography (CT) may have been the first imaging tool to study the human brain, it has not yet been incorporated into the clinical decision-making process for the diagnosis of Alzheimer's disease (AD). On the other hand, being prevalent, inexpensive, and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the application of burgeoning deep learning techniques, in particular the convolutional neural network (CNN), to the classification of CT brain images, aiming to provide supplementary information for the early diagnosis of Alzheimer's disease...
January 2017: Computer Methods and Programs in Biomedicine
Jinhee Park, Rios Jesus Javier, Taesup Moon, Youngwook Kim
Accurate classification of human aquatic activities using radar has a variety of potential applications, such as rescue operations and border patrols. Nevertheless, the classification of activities on water using radar has not been extensively studied, unlike the case on dry ground, due to its unique challenges: not only is the radar cross section of a human on water small, but the micro-Doppler signatures are also much noisier due to water drops and waves. In this paper, we first investigate whether discriminative signatures can be obtained for activities on water through a simulation study...
November 24, 2016: Sensors
Peijun Hu, Fa Wu, Jialin Peng, Yuanyuan Bao, Feng Chen, Dexing Kong
PURPOSE: Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. METHODS: The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method...
November 24, 2016: International Journal of Computer Assisted Radiology and Surgery
Peijun Hu, Fa Wu, Jialin Peng, Ping Liang, Dexing Kong
The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on 3D convolutional neural network (CNN) and globally optimized surface evolution...
December 21, 2016: Physics in Medicine and Biology
Jun Haeng Lee, Tobi Delbruck, Michael Pfeiffer
Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials...
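The core idea (a non-differentiable spike function paired with a smooth pseudo-derivative in the backward pass, so that membrane potentials can carry gradients) can be sketched as follows; the boxcar surrogate used here is a common choice in the surrogate-gradient literature, not necessarily the paper's exact formulation:

```python
import numpy as np

# Surrogate-gradient sketch: the forward pass emits non-differentiable
# Heaviside spikes, while the backward pass substitutes a smooth
# pseudo-derivative around the firing threshold.
threshold = 1.0

def spike_forward(v):
    # Emit a spike wherever the membrane potential reaches threshold.
    return (v >= threshold).astype(float)

def spike_surrogate_grad(v, width=0.5):
    # Boxcar pseudo-derivative: 1 near the threshold, 0 elsewhere.
    return (np.abs(v - threshold) < width).astype(float)

v = np.array([0.2, 0.9, 1.1, 2.0])   # toy membrane potentials
spikes = spike_forward(v)            # used in the forward pass
grads = spike_surrogate_grad(v)      # used in place of d(spike)/dv
```

During backpropagation, `grads` replaces the (zero almost everywhere) true derivative of the step function, letting error signals flow through spike times.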
2016: Frontiers in Neuroscience
Sheng Guo, Weilin Huang, Limin Wang, Yu Qiao
Convolutional neural networks (CNN) have recently achieved remarkable successes in various image classification and understanding tasks. The deep features obtained at the top fully-connected layer of the CNN (FC-features) exhibit rich global semantic information and are extremely effective in image classification. On the other hand, the convolutional features in the middle layers of the CNN also contain meaningful local information, but have not been fully explored for image representation. In this paper, we propose a novel Locally-Supervised Deep Hybrid Model (LS-DHM) that effectively enhances and explores the convolutional features for scene recognition...
November 16, 2016: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
Sihong Chen, Jing Qin, Xing Ji, Baiying Lei, Tianfu Wang, Dong Ni, Jie-Zhi Cheng
The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from the deep learning models of stacked denoising autoencoder (SDAE) and convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images...
November 16, 2016: IEEE Transactions on Medical Imaging
Hadi Rezaeilouyeh, Ali Mollahosseini, Mohammad H Mahoor
Cancer is the second leading cause of death in the US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians in efficiently diagnosing cancers at early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and, recently, histograms of shearlet coefficients for the classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability since every cancerous tissue and cell has a specific texture, structure, and shape...
October 2016: Journal of Medical Imaging
Satoru Hiwa, Kenya Hanawa, Ryota Tamura, Keisuke Hachisuka, Tomoyuki Hiroyasu
Functional near-infrared spectroscopy (fNIRS) is suitable for noninvasive mapping of relative changes in regional cortical activity but is limited for quantitative comparisons among cortical sites, subjects, and populations. We have developed a convolutional neural network (CNN) analysis method that learns feature vectors for accurate identification of group differences in fNIRS responses. In this study, subject gender was classified using CNN analysis of fNIRS data. fNIRS data were acquired from male and female subjects during a visual number memory task performed in a white noise environment because previous studies had revealed that the pattern of cortical blood flow during the task differed between males and females...
2016: Computational Intelligence and Neuroscience

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

Combine operators

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
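A tiny matcher sketches how the operators above might be evaluated against a document; QxMD's actual search engine is not public, so this is purely illustrative, and OR, parentheses, and quoted phrases are omitted for brevity:

```python
import re

def matches(doc, query):
    # Return True if the document satisfies every term of a simple
    # AND / -exclusion / stem* query (case-insensitive).
    words = set(re.findall(r"\w+", doc.lower()))
    for token in query.split():
        if token == "AND":
            continue                          # conjunction is the default
        if token.startswith("-"):
            if token[1:].lower() in words:
                return False                  # excluded word is present
        elif token.endswith("*"):
            stem = token[:-1].lower()
            if not any(w.startswith(stem) for w in words):
                return False                  # no word shares the stem
        elif token.lower() not in words:
            return False                      # required word is missing
    return True
```

For example, `matches("Virchow triad", "Virchow -triad")` is false because the excluded word appears, while `matches("Neurology today", "Neuro*")` is true via stem matching.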