Read by QxMD

convolutional neural network

Yueying Kao, Ran He, Kaiqi Huang
Human beings often assess the aesthetic quality of an image together with identifying its semantic content. This paper addresses the correlation between automatic aesthetic quality assessment and semantic recognition. We cast the assessment problem as the main task in a multi-task deep model, and argue that the semantic recognition task offers the key to addressing it. Based on convolutional neural networks, we employ a single, simple multi-task framework to efficiently exploit the supervision of aesthetic and semantic labels...
January 11, 2017: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
Lingqiao Liu, Peng Wang, Chunhua Shen, Lei Wang, Anton van den Hengel, Chao Wang, Heng Tao Shen
Deriving from the gradient vector of a generative model of local features, Fisher vector coding (FVC) has been identified as an effective coding method for image classification. Most, if not all, FVC implementations employ the Gaussian mixture model (GMM) as the generative model for local features. However, the representative power of a GMM can be limited because it essentially assumes that local features can be characterized by a fixed number of feature prototypes, and the number of prototypes is usually small in FVC...
January 10, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
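As a sketch of the mechanism this abstract describes, FVC with a GMM computes a soft posterior for each local feature under every mixture component and accumulates gradients of the log-likelihood. The minimal NumPy version below keeps only the first-order (mean-gradient) term of the Fisher vector; all function and argument names are ours for illustration, not the authors' code.

```python
import numpy as np

def fisher_vector(features, weights, means, sigmas):
    """First-order Fisher vector coding w.r.t. the means of a diagonal-
    covariance GMM (illustrative sketch, names are not the paper's API).

    features: (N, D) local descriptors; weights: (K,) mixture weights;
    means: (K, D) component means; sigmas: (K, D) per-dimension stds.
    """
    N, D = features.shape
    K = means.shape[0]
    # Log-density of every feature under each Gaussian component
    log_probs = np.empty((N, K))
    for k in range(K):
        z = (features - means[k]) / sigmas[k]
        log_probs[:, k] = (np.log(weights[k])
                           - 0.5 * np.sum(z ** 2, axis=1)
                           - np.sum(np.log(sigmas[k]))
                           - 0.5 * D * np.log(2 * np.pi))
    # Soft-assignment posteriors gamma_{nk} (numerically stable softmax)
    log_probs -= log_probs.max(axis=1, keepdims=True)
    gamma = np.exp(log_probs)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Gradient w.r.t. each component mean, stacked into one K*D vector
    fv = np.empty((K, D))
    for k in range(K):
        z = (features - means[k]) / sigmas[k]
        fv[k] = (gamma[:, [k]] * z).sum(axis=0) / (N * np.sqrt(weights[k]))
    return fv.ravel()
```

The fixed number K of prototypes is exactly the limitation the abstract points at: the coding dimensionality, and the expressiveness of the generative model, are tied to K.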
Dae Hoe Kim, Seong Tae Kim, Jung Min Chang, Yong Man Ro
Characterization of masses in computer-aided detection systems for digital breast tomosynthesis (DBT) is an important step to reduce false positive (FP) rates. To effectively differentiate masses from FPs in DBT, discriminative mass feature representation is required. In this paper, we propose a new latent feature representation boosted by depth directional long-term recurrent learning for characterizing malignant masses. The proposed network is designed to encode mass characteristics in two parts. First, 2D spatial image characteristics of DBT slices are encoded as a slice feature representation by convolutional neural network (CNN)...
February 7, 2017: Physics in Medicine and Biology
Rahul Paul, Samuel H Hawkins, Yoganand Balagurunathan, Matthew B Schabath, Robert J Gillies, Lawrence O Hall, Dmitry B Goldgof
Lung cancer is the most common cause of cancer-related deaths in the USA. It can be detected and diagnosed using computed tomography images. For an automated classifier, identifying predictive features from medical images is a key concern. Deep feature extraction using pretrained convolutional neural networks (CNNs) has recently been successfully applied in some image domains. Here, we applied a pretrained CNN to extract deep features from 40 computed tomography images, with contrast, of non-small cell adenocarcinoma lung cancer, and combined deep features with traditional image features and trained classifiers to predict short- and long-term survivors...
December 2016: Tomography: a Journal for Imaging Research
Panagiotis Korfiatis, Timothy L Kline, Bradley J Erickson
We present a deep convolutional neural network application based on autoencoders aimed at segmentation of increased signal regions in fluid-attenuated inversion recovery magnetic resonance imaging images. The convolutional autoencoders were trained on the publicly available Brain Tumor Image Segmentation Benchmark (BRATS) data set, and the accuracy was evaluated on a data set where 3 expert segmentations were available. The simultaneous truth and performance level estimation (STAPLE) algorithm was used to provide the ground truth for comparison, and Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction were calculated...
December 2016: Tomography: a Journal for Imaging Research
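The four overlap measures the abstract lists can be computed directly from binary masks. The helper below is a generic sketch of those formulas, not the authors' evaluation code.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice, Jaccard, true positive fraction, and false negative fraction
    for binary segmentation masks (illustrative helper)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()          # overlap
    fn = np.logical_and(~pred, truth).sum()         # missed ground truth
    dice = 2.0 * tp / (pred.sum() + truth.sum())
    jaccard = tp / float(np.logical_or(pred, truth).sum())
    tpf = tp / float(truth.sum())                   # sensitivity w.r.t. truth
    fnf = fn / float(truth.sum())                   # missed fraction of truth
    return dice, jaccard, tpf, fnf
```

For example, masks [1,1,0,0] vs. [1,0,1,0] share one voxel, giving Dice 0.5 and Jaccard 1/3.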
Vijay Badrinarayanan, Alex Kendall, Roberto Cipolla
We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification...
January 2, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
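SegNet's distinguishing step — the decoder upsampling feature maps using the max-pooling indices stored by the encoder — can be illustrated in NumPy. This is a single-channel, 2x2-window sketch of that mechanism only, not the actual network.

```python
import numpy as np

def max_pool_with_indices(x):
    """2x2 max pooling that records where each maximum came from,
    as SegNet's encoder does (single channel, stride 2, sketch only)."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    indices = np.zeros((h // 2, w // 2), dtype=int)  # flat index into x
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            win = x[i:i + 2, j:j + 2]
            r, c = np.unravel_index(np.argmax(win), win.shape)
            pooled[i // 2, j // 2] = win[r, c]
            indices[i // 2, j // 2] = (i + r) * w + (j + c)
    return pooled, indices

def max_unpool(pooled, indices, shape):
    """Decoder-side non-linear upsampling: place each pooled value back
    at its recorded position; the rest stays zero, ready for the
    decoder's convolutions to densify."""
    out = np.zeros(shape)
    out.flat[indices.ravel()] = pooled.ravel()
    return out
```

Reusing the indices avoids learning upsampling weights and preserves boundary locations, which is the design choice the abstract's encoder-decoder pairing refers to.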
G Gopakumar, K Hari Babu, Deepak Mishra, Sai Siva Gorthi, Gorthi R K Sai Subrahmanyam
Cytopathologic testing is one of the most critical steps in the diagnosis of diseases, including cancer. However, the task is laborious and demands skill. The associated high cost and low throughput have drawn considerable interest in automating the testing process. Several neural network architectures have been designed to bring human expertise to machines. In this paper, we explore the feasibility of using deep-learning networks for cytopathologic analysis by classifying three important unlabeled, unstained leukemia cell lines (K562, MOLT, and HL60)...
January 1, 2017: Journal of the Optical Society of America. A, Optics, Image Science, and Vision
Sheng Wang, Siqi Sun, Zhen Li, Renyu Zhang, Jinbo Xu
MOTIVATION: Protein contacts contain key information for the understanding of protein structure and function, and thus contact prediction from sequence is an important problem. Recently, exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. METHOD: This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks...
January 5, 2017: PLoS Computational Biology
Kele Xu, Li Zhu, Ruixing Wang, Chang Liu, Yi Zhao
PURPOSE: In this work, we explore the use of very deep convolutional neural networks (CNNs) for the automatic classification of diabetic retinopathy from color fundus images. METHODS: We apply translation, stretching, rotation and flipping to the labeled dataset. The original number of labeled frames is 3000; after augmentation, 6000 labeled frames are used for the CNN training task. Several different CNN architectures have been proposed and tested. Our network contains 18 layers with parameters, consisting of 12 convolutional layers, some of which are followed by max-pooling layers, and two fully connected layers...
June 2016: Medical Physics
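The augmentation step above — doubling 3000 labeled frames to 6000, with each new frame keeping its source's label — can be sketched with simple NumPy geometric transforms. The particular transform choices below (flips, a 90-degree rotation, a one-pixel shift) are illustrative; the paper's stretching transform is omitted here.

```python
import numpy as np

# Illustrative geometric transforms: flips, rotation, translation.
TRANSFORMS = [
    np.fliplr,                          # horizontal flip
    np.flipud,                          # vertical flip
    lambda f: np.rot90(f),              # 90-degree rotation
    lambda f: np.roll(f, 1, axis=1),    # one-pixel translation
]

def augment_dataset(frames, labels):
    """Append one transformed copy per frame, doubling the labeled set;
    each augmented frame keeps its source frame's label."""
    out_frames, out_labels = list(frames), list(labels)
    for i, (frame, label) in enumerate(zip(frames, labels)):
        out_frames.append(TRANSFORMS[i % len(TRANSFORMS)](frame))
        out_labels.append(label)
    return out_frames, out_labels
```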
N Antropova, B Huynh, M Giger
PURPOSE: We investigate deep learning for the task of distinguishing between malignant and benign breast lesions on dynamic contrast-enhanced MR images (DCE-MRIs), eliminating the need for lesion segmentation and extraction of tumor features. We evaluate a convolutional neural network (CNN) after transfer learning with ImageNet, a large database of non-medical images. METHODS: Under a HIPAA-compliant IRB protocol, a database of 551 (357 malignant and 194 benign) breast MRI cases was collected...
June 2016: Medical Physics
B Huynh, K Drukker, M Giger
PURPOSE: To assess the performance of using transferred features from pre-trained deep convolutional networks (CNNs) in the task of classifying cancer in breast ultrasound images, and to compare this method of transfer learning with previous methods involving human-designed features. METHODS: A breast ultrasound dataset consisting of 1125 cases and 2393 regions of interest (ROIs) was used. Each ROI was labeled as cystic, benign, or malignant. Features were extracted from each ROI using pre-trained CNNs and used to train support vector machine (SVM) classifiers in the tasks of distinguishing non-malignant (benign+cystic) vs malignant lesions and benign vs malignant lesions...
June 2016: Medical Physics
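The transfer-learning pipeline these two abstracts share — extract fixed feature vectors from a pre-trained CNN, then fit a conventional classifier — can be sketched end-to-end once the features exist. The abstracts use SVMs (e.g. `sklearn.svm.SVC` in practice); the dependency-free stand-in below uses a nearest-centroid rule purely to illustrate the second stage, and is our substitution, not the authors' method.

```python
import numpy as np

def fit_centroids(features, labels):
    """Per-class mean of pre-extracted CNN feature vectors. A stand-in
    for the SVM stage: in practice one would call
    sklearn.svm.SVC().fit(features, labels) on the same arrays."""
    return {y: features[labels == y].mean(axis=0) for y in np.unique(labels)}

def predict(centroids, x):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda y: np.linalg.norm(x - centroids[y]))
```

The point of the design is that only this small classifier is trained on medical data; the feature extractor's weights stay frozen from ImageNet.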
B Ibragimov, F Pernus, P Strojan, L Xing
PURPOSE: Accurate and efficient delineation of the tumor target and organs-at-risk is essential for the success of radiotherapy. In reality, despite decades of intense research effort, auto-segmentation has not yet become clinical practice. In this study, we present, for the first time, a deep learning-based classification algorithm for autonomous segmentation in head and neck (HaN) treatment planning. METHODS: Fifteen HaN datasets of CT, MR and PET images with manual annotations of organs-at-risk (OARs), including the spinal cord, brainstem, optic nerves, chiasm, eyes, mandible, tongue, and parotid glands, were collected and saved in a library of plans...
June 2016: Medical Physics
N Zhu, M Najafi, S Hancock, D Hristov
PURPOSE: Robust matching of ultrasound images is a challenging problem, as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus, our objective is to overcome this obstacle by designing and evaluating an image-block matching framework based on a two-channel deep convolutional neural network. METHODS: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]...
June 2016: Medical Physics
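For contrast with the learned two-channel similarity above, the classical block-matching score it replaces is normalized cross-correlation. The baseline below is ours, added only to make the comparison concrete; the paper instead feeds both blocks as two input channels of a CNN and learns the score.

```python
import numpy as np

def ncc(block_a, block_b):
    """Normalized cross-correlation between two image blocks: a classical
    hand-crafted matching score in [-1, 1] (illustrative baseline)."""
    a = block_a - block_a.mean()
    b = block_b - block_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

NCC is invariant to brightness and contrast shifts but not to the anatomy-level appearance changes the abstract mentions, which is the motivation for learning the score instead.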
S Suzuki, X Zhang, N Homma, K Ichiji, Y Kawasumi, T Ishibashi, M Yoshizawa
PURPOSE: To develop a deep convolutional neural network (DCNN)-based computer-aided diagnosis (CAD) system for detecting masses in digital mammographic images. METHODS: A DCNN architecture, which consists of 5 convolutional layers and 3 fully connected layers, is constructed in this study. The DCNN parameters are then trained by the following two-step procedure. We first train the DCNN using about 1.3 million natural images for the classification of 1,000 categories...
June 2016: Medical Physics
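A quick way to sanity-check an architecture like the 5-convolutional-layer DCNN above is to trace how the spatial size of the feature maps shrinks layer by layer. The utility below applies the standard convolution/pooling size formula; the example layer specs in the test are illustrative, not the paper's.

```python
def conv_output_size(size, kernel, stride=1, padding=0):
    """Spatial output side length of a convolution or pooling layer:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def trace_network(size, layers):
    """Feed an input side length through (kernel, stride, padding) layer
    specs and return the size after each layer."""
    sizes = [size]
    for kernel, stride, padding in layers:
        sizes.append(conv_output_size(sizes[-1], kernel, stride, padding))
    return sizes
```

The final spatial size times the channel count fixes the input width of the first fully connected layer, so this check catches shape mismatches before any training.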
Peng Tang, Xinggang Wang, Bin Feng, Wenyu Liu
Finding an effective and efficient representation is very important for image classification. The most common approach is to extract a set of local descriptors and then aggregate them into a high-dimensional, more semantic feature vector, as in unsupervised Bag-of-Features (BoF) and weakly supervised part-based models. The latter is usually more discriminative than the former due to its use of information from image labels. In this work, we propose a weakly supervised strategy that uses Multi-Instance Learning (MIL) to learn discriminative patterns for image representation...
December 21, 2016: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
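The core MIL assumption behind weak supervision of this kind is that an image (a "bag") is positive if at least one of its local patterns (its "instances") is positive, so the bag score is the maximum instance score. A minimal sketch of that aggregation, independent of any particular model:

```python
def bag_score(instance_scores):
    """Standard MIL assumption: a bag is as positive as its best
    instance, i.e. the bag score is the max over instance scores."""
    return max(instance_scores)

def classify_bags(bags, threshold=0.5):
    """Label each bag (list of instance scores) by thresholding its
    max-pooled score (illustrative)."""
    return [1 if bag_score(b) >= threshold else 0 for b in bags]
```

Only bag-level (image-level) labels are needed to train under this rule, which is what makes the supervision "weak".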
Lequan Yu, Hao Chen, Qi Dou, Jing Qin, Pheng Ann Heng
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition...
December 21, 2016: IEEE Transactions on Medical Imaging
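The ">50 layers" above is made trainable by residual connections. A one-block NumPy sketch of the identity-shortcut idea (weights and the plain matrix-multiply "layers" here are illustrative stand-ins for convolutions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block, y = relu(x + W2 @ relu(W1 @ x)).
    The skip connection lets gradients bypass the weighted path, which is
    what makes very deep stacks of such blocks trainable (sketch only)."""
    return relu(x + w2 @ relu(w1 @ x))
```

With the residual branch at zero the block reduces to the identity (after the ReLU), so adding blocks cannot make the representation worse at initialization.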
Janaina Cruz Pereira, Ernesto Raúl Caffarena, Cicero Nogueira Dos Santos
In this work, we propose a deep learning approach to improve docking-based virtual screening. The deep neural network that is introduced, DeepVS, uses the output of a docking program and learns how to extract relevant features from basic data such as atom and residues types obtained from protein-ligand complexes. Our approach introduces the use of atom and amino acid embeddings and implements an effective way of creating distributed vector representations of protein-ligand complexes by modeling the compound as a set of atom contexts that is further processed by a convolutional layer...
December 27, 2016: Journal of Chemical Information and Modeling
Muhammad Jamal Afridi, Arun Ross, Xiaoming Liu, Margaret F Bennewitz, Dorela D Shuboni, Erik M Shapiro
PURPOSE: Magnetic resonance imaging (MRI)-based cell tracking has emerged as a useful tool for identifying the location of transplanted cells, and even their migration. Magnetically labeled cells appear as dark contrast in T2*-weighted MRI, with sensitivity down to individual cells. One key hurdle to the widespread use of MRI-based cell tracking is the inability to determine the number of transplanted cells based on this contrast feature. In the case of single-cell detection, manual enumeration of spots in three-dimensional (3D) MRI is possible in principle; however, it is a tedious and time-consuming task that is prone to subjectivity and inaccuracy on a large scale...
December 26, 2016: Magnetic Resonance in Medicine: Official Journal of the Society of Magnetic Resonance in Medicine
Husan Vokhidov, Hyung Gil Hong, Jin Kyu Kang, Toan Minh Hoang, Kang Ryoung Park
Automobile driver information, as displayed on marked road signs, indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of drivers and pedestrians. They are also important input to the advanced driver assistance systems (ADAS) installed in many automobiles. Over time, arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to identify them correctly. Failure to properly identify an arrow-road marking creates a dangerous situation that may result in traffic accidents or pedestrian injury...
December 16, 2016: Sensors
Hanli Wang, Peiqiu Chen, Sam Kwong
In this paper, a new optimization approach is designed for convolutional neural networks (CNNs) that introduces explicit logical relations between filters in the convolutional layer. In a conventional CNN, the filters' weights in convolutional layers are trained separately by their own residual errors, and the relations between these filters are not exploited for learning. Departing from this traditional learning mechanism, the proposed correlative filters (CFs) are initialized and trained jointly in accordance with predefined correlations, which allows them to work cooperatively and ultimately yields a more generalized system...
December 13, 2016: IEEE Transactions on Cybernetics