Read by QxMD: search results for the keyword "convolutional neural network"

https://www.readbyqxmd.com/read/28092553/deep-aesthetic-quality-assessment-with-semantic-information
#1
Yueying Kao, Ran He, Kaiqi Huang
Human beings often assess the aesthetic quality of an image coupled with the identification of the image's semantic content. This paper addresses the correlation between automatic aesthetic quality assessment and semantic recognition. We cast the assessment problem as the main task within a multi-task deep model, and argue that the semantic recognition task offers the key to addressing it. Based on convolutional neural networks, we employ a single, simple multi-task framework to efficiently utilize the supervision of aesthetic and semantic labels...
January 11, 2017: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
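The multi-task design in entry #1, with aesthetic quality as the main task and semantic recognition as an auxiliary supervisory signal, can be sketched as a shared convolutional trunk with two heads and a weighted joint loss. The layer sizes, label counts, and the 0.5 loss weight below are illustrative assumptions rather than the authors' configuration; a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class MultiTaskAestheticNet(nn.Module):
    """Shared CNN trunk with an aesthetic head (main task) and a semantic head (auxiliary task)."""
    def __init__(self, num_semantic_labels=29):  # label count is an assumption
        super().__init__()
        self.trunk = nn.Sequential(                                  # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.aesthetic_head = nn.Linear(64, 2)                       # high vs. low aesthetic quality
        self.semantic_head = nn.Linear(64, num_semantic_labels)      # multi-label semantic tags

    def forward(self, x):
        feats = self.trunk(x)
        return self.aesthetic_head(feats), self.semantic_head(feats)

model = MultiTaskAestheticNet()
images = torch.randn(4, 3, 224, 224)                                 # dummy batch
aesthetic_labels = torch.randint(0, 2, (4,))
semantic_labels = torch.randint(0, 2, (4, 29)).float()

aes_logits, sem_logits = model(images)
loss = nn.CrossEntropyLoss()(aes_logits, aesthetic_labels) \
     + 0.5 * nn.BCEWithLogitsLoss()(sem_logits, semantic_labels)     # 0.5 weight is illustrative
loss.backward()
```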
https://www.readbyqxmd.com/read/28092518/compositional-model-based-fisher-vector-coding-for-image-classi%C3%AF-cation
#2
Lingqiao Liu, Peng Wang, Chunhua Shen, Lei Wang, Anton van den Hengel, Chao Wang, Heng Tao Shen
Deriving from the gradient vector of a generative model of local features, Fisher vector coding (FVC) has been identified as an effective coding method for image classification. Most, if not all, FVC implementations employ the Gaussian mixture model (GMM) as the generative model for local features. However, the representative power of a GMM can be limited because it essentially assumes that local features can be characterized by a fixed number of feature prototypes, and the number of prototypes is usually small in FVC...
January 10, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
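For reference, the GMM-based baseline that the compositional model above builds on is the standard Fisher vector: gradients of the log-likelihood with respect to the Gaussian means (the most common variant). The descriptor dimensionality and component count below are placeholders; a rough sketch of that baseline, not the authors' compositional coding:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_mu(descriptors, gmm):
    """Fisher vector w.r.t. the GMM means only (diagonal covariances assumed)."""
    T = descriptors.shape[0]
    gamma = gmm.predict_proba(descriptors)             # (T, K) soft assignments to the K prototypes
    mu, sigma, w = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
    fv = []
    for k in range(gmm.n_components):
        diff = (descriptors - mu[k]) / sigma[k]        # whitened residuals, (T, D)
        fv.append((gamma[:, k, None] * diff).sum(axis=0) / (T * np.sqrt(w[k])))
    fv = np.concatenate(fv)                            # length K * D
    fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)           # L2 normalization

# Toy local descriptors standing in for e.g. dense SIFT or CNN activations.
rng = np.random.default_rng(0)
local_descriptors = rng.normal(size=(500, 64))
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(local_descriptors)
print(fisher_vector_mu(local_descriptors, gmm).shape)  # (8 * 64,)
```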
https://www.readbyqxmd.com/read/28081006/latent-feature-representation-with-depth-directional-long-term-recurrent-learning-for-breast-masses-in-digital-breast-tomosynthesis
#3
Dae Hoe Kim, Seong Tae Kim, Jung Min Chang, Yong Man Ro
Characterization of masses in computer-aided detection systems for digital breast tomosynthesis (DBT) is an important step to reduce false positive (FP) rates. To effectively differentiate masses from FPs in DBT, discriminative mass feature representation is required. In this paper, we propose a new latent feature representation boosted by depth directional long-term recurrent learning for characterizing malignant masses. The proposed network is designed to encode mass characteristics in two parts. First, 2D spatial image characteristics of DBT slices are encoded as a slice feature representation by convolutional neural network (CNN)...
February 7, 2017: Physics in Medicine and Biology
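The two-part design in entry #3, a 2D CNN encoding each DBT slice followed by recurrent learning along the depth direction, can be sketched as a per-slice CNN feeding an LSTM over the slice axis. Feature sizes, depth, and layer widths are placeholder assumptions:

```python
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    """Encodes a single 2D slice into a fixed-length feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class DepthRecurrentClassifier(nn.Module):
    """Applies the slice CNN to every slice, then runs an LSTM along the depth direction."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.slice_cnn = SliceCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, 2)          # mass vs. false positive

    def forward(self, volume):                          # volume: (batch, depth, 1, H, W)
        b, d = volume.shape[:2]
        feats = self.slice_cnn(volume.flatten(0, 1)).view(b, d, -1)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])              # score from the last depth step

logits = DepthRecurrentClassifier()(torch.randn(2, 10, 1, 64, 64))
print(logits.shape)                                     # torch.Size([2, 2])
```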
https://www.readbyqxmd.com/read/28066809/deep-feature-transfer-learning-in-combination-with-traditional-features-predicts-survival-among-patients-with-lung-adenocarcinoma
#4
Rahul Paul, Samuel H Hawkins, Yoganand Balagurunathan, Matthew B Schabath, Robert J Gillies, Lawrence O Hall, Dmitry B Goldgof
Lung cancer is the most common cause of cancer-related deaths in the USA. It can be detected and diagnosed using computed tomography images. For an automated classifier, identifying predictive features from medical images is a key concern. Deep feature extraction using pretrained convolutional neural networks (CNNs) has recently been successfully applied in some image domains. Here, we applied a pretrained CNN to extract deep features from 40 computed tomography images, with contrast, of non-small cell adenocarcinoma lung cancer, and combined deep features with traditional image features and trained classifiers to predict short- and long-term survivors...
December 2016: Tomography: a Journal for Imaging Research
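The fusion step in entry #4, concatenating CNN-derived deep features with traditional image features before training a classifier, amounts to a simple feature concatenation. The synthetic arrays and the random-forest choice below are illustrative stand-ins, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 40                                            # entry #4 reports 40 CT images
deep_features = rng.normal(size=(n_patients, 4096))        # e.g. activations from a pretrained CNN layer
traditional_features = rng.normal(size=(n_patients, 20))   # e.g. size, shape, texture descriptors
survival_label = rng.integers(0, 2, size=n_patients)       # short- vs. long-term survivor

# Fuse the two feature families by concatenation, then train and cross-validate a classifier.
X = np.hstack([deep_features, traditional_features])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, survival_label, cv=5)
print("cross-validated accuracy:", scores.mean())
```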
https://www.readbyqxmd.com/read/28066806/automated-segmentation-of-hyperintense-regions-in-flair-mri-using-deep-learning
#5
Panagiotis Korfiatis, Timothy L Kline, Bradley J Erickson
We present a deep convolutional neural network application based on autoencoders aimed at segmentation of increased signal regions in fluid-attenuated inversion recovery magnetic resonance imaging images. The convolutional autoencoders were trained on the publicly available Brain Tumor Image Segmentation Benchmark (BRATS) data set, and the accuracy was evaluated on a data set where 3 expert segmentations were available. The simultaneous truth and performance level estimation (STAPLE) algorithm was used to provide the ground truth for comparison, and Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction were calculated...
December 2016: Tomography: a Journal for Imaging Research
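The evaluation metrics named in entry #5 (Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction) are straightforward to compute from binary masks; a minimal numpy sketch on toy masks:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Overlap metrics between two binary masks, as used to score hyperintensity segmentations."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {
        "dice": 2 * tp / (pred.sum() + truth.sum()),
        "jaccard": tp / np.logical_or(pred, truth).sum(),
        "true_positive_fraction": tp / truth.sum(),
        "false_negative_fraction": fn / truth.sum(),
    }

pred = np.zeros((64, 64), dtype=int); pred[10:40, 10:40] = 1    # toy predicted mask
truth = np.zeros((64, 64), dtype=int); truth[15:45, 15:45] = 1  # toy reference (e.g. STAPLE) mask
print(segmentation_metrics(pred, truth))
```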
https://www.readbyqxmd.com/read/28060704/segnet-a-deep-convolutional-encoder-decoder-architecture-for-scene-segmentation
#6
Vijay Badrinarayanan, Alex Kendall, Roberto Cipolla
We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full input-resolution feature maps for pixel-wise classification...
January 2, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
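A defining detail of SegNet from the full paper, not spelled out in the excerpt above, is that the decoder upsamples using the max-pooling indices saved by the corresponding encoder stage. A toy single-stage PyTorch sketch of that mechanism, with made-up channel counts:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """One encoder/decoder stage illustrating SegNet-style unpooling with stored max-pool indices."""
    def __init__(self, num_classes=12):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # remember where the maxima were
        self.unpool = nn.MaxUnpool2d(2, stride=2)                    # place values back at those locations
        self.dec = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU())
        self.classifier = nn.Conv2d(64, num_classes, 1)              # pixel-wise class scores

    def forward(self, x):
        feats = self.enc(x)
        pooled, indices = self.pool(feats)
        upsampled = self.unpool(pooled, indices, output_size=feats.size())
        return self.classifier(self.dec(upsampled))

out = TinySegNet()(torch.randn(1, 3, 128, 128))
print(out.shape)                                                     # (1, 12, 128, 128)
```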
https://www.readbyqxmd.com/read/28059233/cytopathological-image-analysis-using-deep-learning-networks-in-microfluidic-microscopy
#7
G Gopakumar, K Hari Babu, Deepak Mishra, Sai Siva Gorthi, Gorthi R K Sai Subrahmanyam
Cytopathologic testing is one of the most critical steps in the diagnosis of diseases, including cancer. However, the task is laborious and demands skill. The associated high cost and low throughput have drawn considerable interest in automating the testing process. Several neural network architectures have been designed to bring human expertise to machines. In this paper, we explore the feasibility of using deep-learning networks for cytopathologic analysis by performing the classification of three important unlabeled, unstained leukemia cell lines (K562, MOLT, and HL60)...
January 1, 2017: Journal of the Optical Society of America. A, Optics, Image Science, and Vision
https://www.readbyqxmd.com/read/28056090/accurate-de-novo-prediction-of-protein-contact-map-by-ultra-deep-learning-model
#8
Sheng Wang, Siqi Sun, Zhen Li, Renyu Zhang, Jinbo Xu
MOTIVATION: Protein contacts contain key information for the understanding of protein structure and function, and thus contact prediction from sequence is an important problem. Recently, exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. METHOD: This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks...
January 5, 2017: PLoS Computational Biology
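The ultra-deep model in entry #8 is built from stacked residual blocks. Below is a generic 2D residual block of the kind such networks are composed of; channel counts are placeholders, and the real model operates on pairwise sequence-derived features rather than images:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = ReLU(x + F(x)), with an identity shortcut."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))            # shortcut keeps gradients flowing through depth

# Stacking many such blocks yields a very deep network; here a contact-map-style 2D input.
net = nn.Sequential(*[ResidualBlock(32) for _ in range(10)])
pairwise_features = torch.randn(1, 32, 100, 100)       # (batch, channels, L, L) for a length-100 protein
print(net(pairwise_features).shape)
```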
https://www.readbyqxmd.com/read/28048716/su-f-j-04-automated-detection-of-diabetic-retinopathy-using-deep-convolutional-neural-networks
#9
Kele Xu, Li Zhu, Ruixing Wang, Chang Liu, Yi Zhao
PURPOSE: In this work, we explore the use of a very deep convolutional neural network (CNN) for the automatic classification of diabetic retinopathy from color fundus images. METHODS: We apply translation, stretching, rotation and flipping to the labeled dataset. The original number of labeled frames is 3000; after augmentation, 6000 labeled frames are used for the CNN training task. Several different CNN architectures have been proposed and tested. The architecture of our network contains 18 layers with parameters, consisting of 12 convolutional layers, some of which are followed by max-pooling layers, and two fully connected layers...
June 2016: Medical Physics
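The augmentations listed in entry #9 (translation, stretching, rotation, and flipping) map directly onto standard torchvision transforms. The numeric ranges below are illustrative, not the authors' settings:

```python
from PIL import Image
import numpy as np
from torchvision import transforms

# Translation, stretching (scale), rotation, and horizontal flipping; ranges are placeholders.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

# Stand-in for a color fundus image.
fundus = Image.fromarray(np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8))
augmented = augment(fundus)
print(augmented.shape)                                  # torch.Size([3, 256, 256])
```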
https://www.readbyqxmd.com/read/28048384/su-d-207b-06-predicting-breast-cancer-malignancy-on-dce-mri-data-using-pre-trained-convolutional-neural-networks
#10
N Antropova, B Huynh, M Giger
PURPOSE: We investigate deep learning in the task of distinguishing between malignant and benign breast lesions on dynamic contrast-enhanced MR images (DCE-MRIs), eliminating the need for lesion segmentation and extraction of tumor features. We evaluate a convolutional neural network (CNN) after transfer learning with ImageNet, a database of thousands of non-medical images. METHODS: Under a HIPAA-compliant IRB protocol, a database of 551 (357 malignant and 194 benign) breast MRI cases was collected...
June 2016: Medical Physics
https://www.readbyqxmd.com/read/28048166/mo-de-207b-06-computer-aided-diagnosis-of-breast-ultrasound-images-using-transfer-learning-from-deep-convolutional-neural-networks
#11
B Huynh, K Drukker, M Giger
PURPOSE: To assess the performance of using transferred features from pre-trained deep convolutional networks (CNNs) in the task of classifying cancer in breast ultrasound images, and to compare this method of transfer learning with previous methods involving human-designed features. METHODS: A breast ultrasound dataset consisting of 1125 cases and 2393 regions of interest (ROIs) was used. Each ROI was labeled as cystic, benign, or malignant. Features were extracted from each ROI using pre-trained CNNs and used to train support vector machine (SVM) classifiers in the tasks of distinguishing non-malignant (benign+cystic) vs malignant lesions and benign vs malignant lesions...
June 2016: Medical Physics
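Entries #10 and #11 share the same transfer-learning recipe: take activations from an ImageNet-pretrained CNN as off-the-shelf features and train an SVM on them. A minimal sketch of that recipe with a ResNet-18 backbone and dummy data; the original studies used their own backbones and real ROIs:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# ImageNet-pretrained backbone with its classification layer removed -> 512-d feature extractor.
# ("weights=..." assumes torchvision >= 0.13; older versions use pretrained=True.)
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()
backbone.eval()

rois = torch.randn(20, 3, 224, 224)                     # stand-in regions of interest
labels = torch.randint(0, 2, (20,)).numpy()             # benign (0) vs. malignant (1)

with torch.no_grad():
    features = backbone(rois).numpy()                   # (20, 512) transferred deep features

svm = SVC(kernel="rbf").fit(features, labels)           # SVM trained on the transferred features
print(svm.score(features, labels))
```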
https://www.readbyqxmd.com/read/28047089/th-cd-206-05-machine-learning-based-segmentation-of-organs-at-risks-for-head-and-neck-radiotherapy-planning
#12
B Ibragimov, F Pernus, P Strojan, L Xing
PURPOSE: Accurate and efficient delineation of the tumor target and organs-at-risk is essential for the success of radiotherapy. In reality, despite decades of intense research effort, auto-segmentation has not yet become clinical practice. In this study, we present, for the first time, a deep learning-based classification algorithm for autonomous segmentation in head and neck (HaN) treatment planning. METHODS: Fifteen HaN datasets of CT, MR and PET images with manual annotation of organs-at-risk (OARs), including the spinal cord, brainstem, optic nerves, chiasm, eyes, mandible, tongue, and parotid glands, were collected and saved in a library of plans...
June 2016: Medical Physics
https://www.readbyqxmd.com/read/28046982/su-c-207b-07-deep-convolutional-neural-network-image-matching-for-ultrasound-guidance-in-radiotherapy
#13
N Zhu, M Najafi, S Hancock, D Hristov
PURPOSE: Robust matching of ultrasound images is a challenging problem, as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Our objective is thus to overcome this obstacle by designing and evaluating an image-block matching framework based on a two-channel deep convolutional neural network. METHODS: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]...
June 2016: Medical Physics
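The matching network in entry #13 stacks the two image blocks being compared as channels of a single input (the 2D idea of [1], extended to 3D) and outputs a similarity score. A minimal PyTorch sketch with arbitrary block size and layer widths:

```python
import torch
import torch.nn as nn

class TwoChannelMatchNet3D(nn.Module):
    """Two 3D ultrasound blocks stacked as channels; the output is a single match score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        self.score = nn.Linear(32, 1)                   # higher = more likely the same anatomy

    def forward(self, block_a, block_b):
        x = torch.cat([block_a, block_b], dim=1)        # (batch, 2, D, H, W)
        return self.score(self.features(x))

a = torch.randn(4, 1, 32, 32, 32)
b = torch.randn(4, 1, 32, 32, 32)
print(TwoChannelMatchNet3D()(a, b).shape)               # torch.Size([4, 1])
```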
https://www.readbyqxmd.com/read/28046321/we-de-207b-02-detection-of-masses-on-mammograms-using-deep-convolutional-neural-network-a-feasibility-study
#14
S Suzuki, X Zhang, N Homma, K Ichiji, Y Kawasumi, T Ishibashi, M Yoshizawa
PURPOSE: To develop a deep convolutional neural network (DCNN)-based computer-aided diagnosis (CAD) system for detecting the masses in digital mammographic images. METHODS: A DCNN architecture, which consists of 5 convolutional layers and 3 fully connected layers, is constructed in this study. The DCNN parameters are then trained by the following two procedures. We first train the DCNN using about 1.3 million natural images for classification of 1,000 categories...
June 2016: Medical Physics
https://www.readbyqxmd.com/read/28026762/learning-multi-instance-deep-discriminative-patterns-for-image-classification
#15
Peng Tang, Xinggang Wang, Bin Feng, Wenyu Liu
Finding an effective and efficient representation is very important for image classification. The most common approach is to extract a set of local descriptors and then aggregate them into a high-dimensional, more semantic feature vector, as in unsupervised Bag-of-Features (BoF) and weakly supervised part-based models. The latter are usually more discriminative than the former due to the use of information from image labels. In this work, we propose a weakly supervised strategy that uses Multi-Instance Learning (MIL) to learn discriminative patterns for image representation...
December 21, 2016: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
https://www.readbyqxmd.com/read/28026754/automated-melanoma-recognition-in-dermoscopy-images-via-very-deep-residual-networks
#16
Lequan Yu, Hao Chen, Qi Dou, Jing Qin, Pheng Ann Heng
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition...
December 21, 2016: IEEE Transactions on Medical Imaging
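Networks of the depth described in entry #16 (more than 50 layers) are typically residual networks, and a common way to apply one to a two-class dermoscopy problem is to replace the final layer of a pretrained ResNet-50 and fine-tune. The sketch below shows that generic recipe, not the authors' exact architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 backbone (50+ layers) with a 2-class head: melanoma vs. non-melanoma.
# ("weights=..." assumes torchvision >= 0.13; older versions use pretrained=True.)
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

dermoscopy_batch = torch.randn(2, 3, 224, 224)          # stand-in dermoscopy images
targets = torch.tensor([0, 1])                          # 0 = non-melanoma, 1 = melanoma

logits = model(dermoscopy_batch)                        # one fine-tuning step on the toy batch
loss = criterion(logits, targets)
loss.backward()
optimizer.step()
```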
https://www.readbyqxmd.com/read/28024405/boosting-docking-based-virtual-screening-with-deep-learning
#17
Janaina Cruz Pereira, Ernesto Raúl Caffarena, Cicero Nogueira Dos Santos
In this work, we propose a deep learning approach to improve docking-based virtual screening. The deep neural network that is introduced, DeepVS, uses the output of a docking program and learns how to extract relevant features from basic data such as atom and residue types obtained from protein-ligand complexes. Our approach introduces the use of atom and amino acid embeddings and implements an effective way of creating distributed vector representations of protein-ligand complexes by modeling the compound as a set of atom contexts that is further processed by a convolutional layer...
December 27, 2016: Journal of Chemical Information and Modeling
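The representation in entry #17, embeddings of atom and amino-acid types assembled into atom contexts, processed by a convolutional layer and pooled into a fixed-length descriptor of the complex, can be roughly sketched as follows. Vocabulary size, context length, and dimensions are placeholder assumptions, not DeepVS's actual hyperparameters:

```python
import torch
import torch.nn as nn

class AtomContextScorer(nn.Module):
    """Embeds atom-type tokens per context, convolves over contexts, max-pools, and scores the complex."""
    def __init__(self, vocab_size=50, embed_dim=16, conv_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)            # learned atom-type embeddings
        self.conv = nn.Conv1d(embed_dim, conv_dim, kernel_size=1)   # per-context transformation
        self.score = nn.Linear(conv_dim, 1)                         # active vs. decoy score

    def forward(self, atom_context_tokens):           # (batch, n_contexts, context_len) integer tokens
        emb = self.embed(atom_context_tokens).mean(dim=2)           # average tokens within each context
        h = torch.relu(self.conv(emb.transpose(1, 2)))              # (batch, conv_dim, n_contexts)
        pooled = h.max(dim=2).values                                # max over contexts -> fixed-size vector
        return self.score(pooled)

tokens = torch.randint(0, 50, (8, 100, 6))             # 8 complexes, 100 atom contexts of 6 tokens each
print(AtomContextScorer()(tokens).shape)                # torch.Size([8, 1])
```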
https://www.readbyqxmd.com/read/28019017/intelligent-and-automatic-in-vivo-detection-and-quantification-of-transplanted-cells-in-mri
#18
Muhammad Jamal Afridi, Arun Ross, Xiaoming Liu, Margaret F Bennewitz, Dorela D Shuboni, Erik M Shapiro
PURPOSE: Magnetic resonance imaging (MRI)-based cell tracking has emerged as a useful tool for identifying the location of transplanted cells, and even their migration. Magnetically labeled cells appear as dark contrast in T2*-weighted MRI, with sensitivity down to individual cells. One key hurdle to the widespread use of MRI-based cell tracking is the inability to determine the number of transplanted cells based on this contrast feature. In the case of single-cell detection, manual enumeration of spots in three-dimensional (3D) MRI is in principle possible; however, it is a tedious and time-consuming task that is prone to subjectivity and inaccuracy on a large scale...
December 26, 2016: Magnetic Resonance in Medicine: Official Journal of the Society of Magnetic Resonance in Medicine
https://www.readbyqxmd.com/read/27999301/recognition-of-damaged-arrow-road-markings-by-visible-light-camera-sensor-based-on-convolutional-neural-network
#19
Husan Vokhidov, Hyung Gil Hong, Jin Kyu Kang, Toan Minh Hoang, Kang Ryoung Park
Automobile driver information, as displayed on marked road signs, indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of the driver and pedestrians. They are also an important input to the automated advanced driver assistance system (ADAS) installed in many automobiles. Over time, arrow road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow road marking creates a dangerous situation that may result in traffic accidents or pedestrian injury...
December 16, 2016: Sensors
https://www.readbyqxmd.com/read/27992359/building-correlations-between-filters-in-convolutional-neural-networks
#20
Hanli Wang, Peiqiu Chen, Sam Kwong
In this paper, a new optimization approach is designed for convolutional neural networks (CNNs) which introduces explicit logical relations between filters in the convolutional layer. In a conventional CNN, the filters' weights in convolutional layers are trained separately by their own residual errors, and the relations between these filters are not explored for learning. Departing from this traditional learning mechanism, the proposed correlative filters (CFs) are initialized and trained jointly in accordance with predefined correlations, which allows them to work cooperatively and efficiently and ultimately yields a more generalized system...
December 13, 2016: IEEE Transactions on Cybernetics