convolutional neural network

https://www.readbyqxmd.com/read/28339486/a-top-down-manner-based-dcnn-architecture-for-semantic-image-segmentation
#1
Kai Qiao, Jian Chen, Linyuan Wang, Lei Zeng, Bin Yan
Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not sufficient, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods...
2017: PloS One
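The entry above describes injecting superpixel-level attention information into a DCNN segmentation pipeline. As a hedged illustration (not the authors' architecture), the sketch below averages per-pixel DCNN class scores within SLIC superpixels, one simple way to let superpixel structure refine a bottom-up segmentation; the function name and parameters are assumptions.

```python
# Hypothetical sketch: refine per-pixel DCNN class scores with SLIC superpixels
# by averaging the scores inside each superpixel (not the authors' exact method).
import numpy as np
from skimage.segmentation import slic

def superpixel_refine(image, class_scores, n_segments=200):
    """image: (H, W, 3) float in [0, 1]; class_scores: (H, W, C) DCNN softmax output."""
    segments = slic(image, n_segments=n_segments)
    refined = np.empty_like(class_scores)
    for s in np.unique(segments):
        mask = segments == s
        refined[mask] = class_scores[mask].mean(axis=0)  # one score vector per superpixel
    return refined.argmax(axis=-1)                        # per-pixel labels after refinement
```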
https://www.readbyqxmd.com/read/28335510/gender-recognition-from-human-body-images-using-visible-light-and-thermal-camera-videos-based-on-a-convolutional-neural-network-for-image-feature-extraction
#2
Dat Tien Nguyen, Ki Wan Kim, Hyung Gil Hong, Ja Hyung Koo, Min Cheol Kim, Kang Ryoung Park
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), the histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification has been used in a wide range of computer vision applications...
March 20, 2017: Sensors
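As a point of comparison with the CNN features discussed above, the following sketch computes one of the handcrafted descriptors the abstract lists (HOG) for a visible-light and a thermal body image and concatenates them; it is an illustrative baseline, not the authors' CNN-based pipeline.

```python
# Minimal sketch of a classical baseline named in the abstract (HOG), applied to a
# visible-light and a thermal body image and concatenated into one descriptor.
# This is an assumption for illustration, not the authors' CNN-based method.
import numpy as np
from skimage.feature import hog

def body_descriptor(visible_gray, thermal_gray):
    """Both inputs: 2-D grayscale arrays of the same person, already cropped/resized."""
    f_vis = hog(visible_gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    f_thr = hog(thermal_gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([f_vis, f_thr])  # simple feature-level fusion of the two modalities
```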
https://www.readbyqxmd.com/read/28334830/predicting-the-impact-of-non-coding-variants-on-dna-methylation
#3
Haoyang Zeng, David K Gifford
DNA methylation plays a crucial role in the establishment of tissue-specific gene expression and the regulation of key biological processes. However, our present inability to predict the effect of genome sequence variation on DNA methylation precludes a comprehensive assessment of the consequences of non-coding variation. We introduce CpGenie, a sequence-based framework that learns a regulatory code of DNA methylation using a deep convolutional neural network and uses this network to predict the impact of sequence variation on proximal CpG site DNA methylation...
March 16, 2017: Nucleic Acids Research
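A hedged sketch of the kind of sequence-based CNN the entry describes: a one-hot encoded DNA window around a CpG site passes through 1-D convolutions to predict a methylation level, and a variant's impact is scored as the difference between predictions for the reference and alternate sequences. Layer sizes and the window length are assumptions, not the published CpGenie architecture.

```python
# Hypothetical sequence-based CNN in the spirit of the entry above.
import torch
import torch.nn as nn

class MethylationCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 64, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=8), nn.ReLU(), nn.AdaptiveMaxPool1d(1),
            nn.Flatten(), nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, x):          # x: (batch, 4, seq_len) one-hot A/C/G/T
        return self.net(x)         # (batch, 1) predicted methylation level

# Scoring a variant: compare predictions for the reference vs. the alternate sequence.
model = MethylationCNN()
ref = torch.zeros(1, 4, 1001); alt = ref.clone()   # placeholders for one-hot sequences
delta = model(alt) - model(ref)                    # predicted impact of the variant
```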
https://www.readbyqxmd.com/read/28333649/multi-scale-rotation-invariant-convolutional-neural-networks-for-lung-texture-classification
#4
Qiangchang Wang, Yuanjie Zheng, Gongping Yang, Weidong Jin, Xinjian Chen, Yilong Yin
We propose a new Multi-scale Rotation-invariant Convolutional Neural Network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography (HRCT). MRCNN employs the Gabor local binary pattern (Gabor-LBP), which introduces a desirable property for image analysis: invariance to image scale and rotation. In addition, we offer an approach to deal with the class imbalance that affects most existing work, accomplished by changing the amount of overlap between adjacent patches...
March 21, 2017: IEEE Journal of Biomedical and Health Informatics
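The class-balancing idea mentioned above (changing patch overlap per class) can be illustrated with a short sketch: patches are extracted with a smaller stride for under-represented tissue classes so that class sizes become comparable. The stride values are assumptions.

```python
# Sketch of the class-balancing idea: more overlap (smaller stride) for rare classes.
import numpy as np

def extract_patches(slice_2d, patch=32, stride=16):
    """Return all patch x patch windows taken every `stride` pixels."""
    h, w = slice_2d.shape
    return [slice_2d[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]

# Smaller stride (more overlap) for a rare tissue class, larger for a common one,
# so the resulting training sets end up with comparable numbers of patches.
rare_patches   = extract_patches(np.zeros((512, 512)), stride=8)
common_patches = extract_patches(np.zeros((512, 512)), stride=24)
```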
https://www.readbyqxmd.com/read/28333648/mobile-stride-length-estimation-with-deep-convolutional-neural-networks
#5
Julius Hannink, Thomas Kautz, Cristian Pasluosta, Jens Barth, Samuel Schulein, Karl-Gunter Gassmann, Jochen Klucken, Bjoern Eskofier
OBJECTIVE: Accurate estimation of spatial gait characteristics is critical to assess motor impairments resulting from neurological or musculoskeletal disease. Currently, however, methodological constraints limit the clinical applicability of state-of-the-art double-integration approaches to gait patterns with a clear zero-velocity phase. METHODS: We describe a novel approach to stride length estimation that uses deep convolutional neural networks to map stride-specific inertial sensor data to the resulting stride length...
March 9, 2017: IEEE Journal of Biomedical and Health Informatics
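A minimal sketch of the mapping described above: a 1-D CNN that regresses stride length from a fixed-length window of inertial sensor channels. The channel count, window length, and layer sizes are illustrative assumptions, not the authors' published network.

```python
# Hedged sketch: 1-D CNN regression from stride-specific inertial data to stride length.
import torch
import torch.nn as nn

class StrideLengthCNN(nn.Module):
    def __init__(self, channels=6, window=256):   # e.g. 3-axis accel + 3-axis gyro (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)               # stride length in metres

    def forward(self, x):                          # x: (batch, channels, window)
        return self.head(self.features(x).squeeze(-1))

model = StrideLengthCNN()
pred = model(torch.randn(8, 6, 256))               # 8 strides -> 8 length estimates
```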
https://www.readbyqxmd.com/read/28333637/multi-scale-multi-feature-context-modeling-for-scene-recognition-in-the-semantic-manifold
#6
Xinhang Song, Shuqiang Jiang, Luis Herranz
Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, etc.). One such approach is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned with weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category...
March 22, 2017: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
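To make the semantic-manifold representation concrete, the sketch below treats each patch as a probability distribution over scene categories and summarizes an image as the mean of its patch distributions, i.e. another point on the same simplex; this illustrates only the representation, not the context models proposed in the paper.

```python
# Illustrative only: patches and images as points on a semantic probability simplex.
import numpy as np

def image_point_on_simplex(patch_posteriors):
    """patch_posteriors: (n_patches, n_categories), rows are patch-level softmax outputs."""
    point = patch_posteriors.mean(axis=0)
    return point / point.sum()     # remains a valid probability distribution

patches = np.random.dirichlet(np.ones(10), size=50)   # 50 patches, 10 scene categories (placeholder)
print(image_point_on_simplex(patches))
```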
https://www.readbyqxmd.com/read/28328517/convolution-in-convolution-for-network-in-network
#7
Yanwei Pang, Manli Sun, Xiaoheng Jiang, Xuelong Li
Network in Network (NiN) is an effective instance of, and an important extension to, the deep convolutional neural network, which consists of alternating convolutional and pooling layers. Instead of using a linear filter for convolution, NiN utilizes a shallow multilayer perceptron (MLP), a nonlinear function, in its place. Because of the power of the MLP and of 1 x 1 convolutions in the spatial domain, NiN has a stronger feature representation ability and hence achieves better recognition performance...
March 16, 2017: IEEE Transactions on Neural Networks and Learning Systems
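The mlpconv idea is easy to show in code: a spatial convolution followed by 1 x 1 convolutions, which act as a small MLP applied at every spatial position. The filter counts below are illustrative assumptions.

```python
# Sketch of an NiN-style "mlpconv" block built from a spatial conv plus 1 x 1 convs.
import torch.nn as nn

def mlpconv(in_ch, out_ch, kernel_size, stride=1, padding=0):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=1), nn.ReLU(),   # 1 x 1 conv = per-position MLP layer
        nn.Conv2d(out_ch, out_ch, kernel_size=1), nn.ReLU(),
    )

block = mlpconv(3, 96, kernel_size=5, padding=2)   # filter counts are illustrative
```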
https://www.readbyqxmd.com/read/28326656/skin-cancer-diagnosed-by-using-artificial-intelligence-on-clinical-images
#8
Isaäc van der Waal
In a recent Research Letter in Nature, an automated classification of a few selected skin lesions was published, using a deep convolutional neural network (CNN) (Esteva et al, 2017). The convolutional neural network is an important innovation in the field of computer vision. A popular use is image processing, e.g. as applied in face recognition. In the reported study, the CNN was applied to a dataset of almost 130,000 clinical images, including some 3,000 dermoscopic images. This article is protected by copyright...
March 22, 2017: Oral Diseases
https://www.readbyqxmd.com/read/28323113/longitudinal-analysis-of-discussion-topics-in-an-online-breast-cancer-community-using-convolutional-neural-networks
#9
Shaodian Zhang, Edouard Grave, Elizabeth Sklar, Noémie Elhadad
Identifying topics of discussions in online health communities (OHC) is critical to various information extraction applications, but can be difficult because topics of OHC content are usually heterogeneous and domain-dependent. In this paper, we provide a multi-class schema, an annotated dataset, and supervised classifiers based on convolutional neural network (CNN) and other models for the task of classifying discussion topics. We apply the CNN classifier to the most popular breast cancer online community, and carry out cross-sectional and longitudinal analyses to show topic distributions and topic dynamics throughout members' participation...
March 18, 2017: Journal of Biomedical Informatics
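A hedged sketch of a CNN text classifier of the kind described above: word embeddings, convolutions over word windows of several widths, max-pooling, and a linear layer producing topic logits. Vocabulary size, embedding size, and filter widths are assumptions, not the authors' exact model.

```python
# Illustrative CNN classifier for discussion-post topics (not the authors' exact model).
import torch
import torch.nn as nn

class TopicCNN(nn.Module):
    def __init__(self, vocab=20000, emb=100, n_classes=10, widths=(3, 4, 5), n_filters=100):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList([nn.Conv1d(emb, n_filters, w) for w in widths])
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len) word ids
        x = self.emb(tokens).transpose(1, 2)         # (batch, emb, seq_len)
        pooled = [c(x).relu().max(dim=2).values for c in self.convs]   # max over word positions
        return self.fc(torch.cat(pooled, dim=1))     # topic logits per post

logits = TopicCNN()(torch.randint(0, 20000, (4, 60)))   # 4 posts, 60 tokens each
```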
https://www.readbyqxmd.com/read/28320666/rgbd-salient-object-detection-via-deep-fusion
#10
Liangqiong Qu, Shengfeng He, Jiawei Zhang, Jiandong Tian, Yandong Tang, Qingxiong Yang
Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection...
March 15, 2017: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
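One simple way to picture the cue-fusion problem described above: stack several low-level saliency maps (e.g. color contrast, depth contrast, compactness priors) as input channels to a small CNN that outputs a fused saliency map. The sketch below is an assumption for illustration, not the authors' published architecture.

```python
# Illustrative cue-fusion CNN: stacked low-level saliency maps in, one fused map out.
import torch
import torch.nn as nn

fusion_net = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),   # 4 input cue maps (assumption)
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=1), nn.Sigmoid(),            # fused saliency in [0, 1]
)

cues = torch.rand(1, 4, 224, 224)       # batch of stacked per-pixel cue maps (placeholder)
saliency = fusion_net(cues)             # (1, 1, 224, 224) master saliency map
```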
https://www.readbyqxmd.com/read/28316614/image-classification-using-biomimetic-pattern-recognition-with-convolutional-neural-networks-features
#11
Liangji Zhou, Qingwu Li, Guanying Huo, Yan Zhou
As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using a hierarchical structure inspired by the mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, traditional CNN models have some shortcomings in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification...
2017: Computational Intelligence and Neuroscience
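The general recipe above, replacing the softmax head of a trained CNN with another classifier applied to its features, can be sketched as follows; Biomimetic Pattern Recognition itself is not reproduced here, so a nearest-centroid rule stands in as a placeholder, and the backbone choice is an assumption.

```python
# Sketch: CNN features with the softmax head removed, fed to a different classifier.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()                      # drop the softmax head, keep 512-d features
backbone.eval()

def nearest_centroid(feat, centroids):           # centroids: (n_classes, 512); placeholder for BPR
    return torch.cdist(feat, centroids).argmin(dim=1)

with torch.no_grad():
    feats = backbone(torch.randn(4, 3, 224, 224))         # (4, 512) image features
    preds = nearest_centroid(feats, torch.randn(5, 512))  # predicted class index per image
```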
https://www.readbyqxmd.com/read/28315069/toolkits-and-libraries-for-deep-learning
#12
REVIEW
Bradley J Erickson, Panagiotis Korfiatis, Zeynettin Akkus, Timothy Kline, Kenneth Philbrick
Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data...
March 17, 2017: Journal of Digital Imaging: the Official Journal of the Society for Computer Applications in Radiology
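To make the "learned features" point concrete, here is a small image CNN defined in Keras; whether this particular toolkit is among those reviewed is not stated in the snippet, so treat the choice of library, input size, and class count as assumptions.

```python
# Small image CNN in Keras: the features are learned from data, not hand-engineered.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),   # e.g. 3 output classes (illustrative)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```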
https://www.readbyqxmd.com/read/28306716/localization-and-diagnosis-framework-for-pediatric-cataracts-based-on-slit-lamp-images-using-deep-features-of-a-convolutional-neural-network
#13
Xiyang Liu, Jiewei Jiang, Kai Zhang, Erping Long, Jiangtao Cui, Mingmin Zhu, Yingying An, Jia Zhang, Zhenzhen Liu, Zhuoling Lin, Xiaoyan Li, Jingjing Chen, Qianzhong Cao, Jing Li, Xiaohang Wu, Dongni Wang, Haotian Lin
Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of slit-lamp images by identifying the lens region of interest (ROI) and employing a deep learning convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is located automatically in the original image using two successive applications of Canny edge detection and the Hough transform; the located ROIs are then cropped, resized to a fixed size, and used to form the pediatric cataract datasets...
2017: PloS One
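A hedged sketch of the localization step described above, using OpenCV's circular Hough transform (which runs Canny edge detection internally; param1 is the upper Canny threshold) to find the lens, then cropping and resizing the ROI to a fixed size. All parameter values are assumptions, not the authors' settings.

```python
# Illustrative lens-ROI localization: Canny-based circular Hough transform, crop, resize.
import cv2
import numpy as np

def locate_lens_roi(slit_lamp_bgr, out_size=(224, 224)):
    gray = cv2.medianBlur(cv2.cvtColor(slit_lamp_bgr, cv2.COLOR_BGR2GRAY), 5)
    # HOUGH_GRADIENT applies Canny edge detection internally (param1 = upper Canny threshold).
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                               param1=150, param2=40, minRadius=50, maxRadius=300)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)           # strongest circle taken as the lens
    roi = slit_lamp_bgr[max(y - r, 0):y + r, max(x - r, 0):x + r]
    return cv2.resize(roi, out_size)                         # fixed-size input for the CNN
```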
https://www.readbyqxmd.com/read/28300783/person-recognition-system-based-on-a-combination-of-body-images-from-visible-light-and-thermal-cameras
#14
Dat Tien Nguyen, Hyung Gil Hong, Ki Wan Kim, Kang Ryoung Park
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body to recognize individuals. To overcome the limitation of previous studies on body-based person recognition, which use only visible light images, we use human body images captured by two different kinds of cameras: a visible light camera and a thermal camera...
March 16, 2017: Sensors
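As a rough illustration of the two-camera idea above, the sketch below extracts CNN features from the visible-light and thermal body images separately, concatenates them, and verifies a person by the distance between descriptors; the backbone, the fusion by concatenation, and the threshold are assumptions.

```python
# Illustrative feature-level fusion of visible-light and thermal body images.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()                               # 512-d feature per image
backbone.eval()

def person_descriptor(visible, thermal):                  # each: (1, 3, 224, 224) tensor
    with torch.no_grad():
        return torch.cat([backbone(visible), backbone(thermal)], dim=1)

d1 = person_descriptor(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
d2 = person_descriptor(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
same_person = torch.dist(d1, d2) < 10.0                   # verification threshold is illustrative
```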
https://www.readbyqxmd.com/read/28298702/fixed-versus-mixed-rsa-%C3%A2-explaining-visual-representations-by-fixed-and-mixed-feature-sets-from-shallow-and-deep-computational-models
#15
Seyed-Mahdi Khaligh-Razavi, Linda Henriksson, Kendrick Kay, Nikolaus Kriegeskorte
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set...
February 2017: Journal of Mathematical Psychology
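The model-brain comparison described above can be sketched with representational similarity analysis in a few lines: build representational dissimilarity matrices (RDMs) from model features and from brain responses for the same image set, then correlate their upper triangles. This is the generic RSA comparison, not the authors' full fixed/mixed-RSA procedure, and the array shapes are placeholders.

```python
# Generic RSA comparison of model and brain representations via RDM correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):                      # responses: (n_images, n_units or n_voxels)
    return pdist(responses, metric="correlation")   # condensed upper triangle of 1 - r

model_feats = np.random.randn(96, 4096)  # 96 images x model features (placeholder)
brain_resps = np.random.randn(96, 500)   # 96 images x voxel responses (placeholder)
rho, _ = spearmanr(rdm(model_feats), rdm(brain_resps))
print(f"model-brain RDM correlation: {rho:.3f}")
```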
https://www.readbyqxmd.com/read/28289601/3d-scattering-transforms-for-disease-classification-in-neuroimaging
#16
Tameem Adel, Taco Cohen, Matthan Caan, Max Welling
Classifying neurodegenerative brain diseases in MRI aims at correctly assigning discrete labels to MRI scans. Such labels usually refer to a diagnostic decision a learner infers based on what it has learned from a training sample of MRI scans. Individual MRI voxels considered separately typically do not provide independent evidence towards or against a class; the information relevant for classification is only present in the form of complicated multivariate patterns (or "features"). Deep learning solves this problem by learning a sequence of non-linear transformations that result in feature representations that are better suited to classification...
2017: NeuroImage: Clinical
https://www.readbyqxmd.com/read/28287968/style-transfer-via-texture-synthesis
#17
Michael Elad, Peyman Milanfar
Style transfer is the process of migrating a style from a given image to the content of another, synthesizing a new image that is an artistic mixture of the two. Recent work on this problem using convolutional neural networks (CNNs) has ignited renewed interest in the field, due to the very impressive results obtained. There exists an alternative path to handling the style transfer task, via a generalization of texture synthesis algorithms. This approach has been proposed over the years, but its results are typically less impressive than the CNN ones...
March 8, 2017: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
https://www.readbyqxmd.com/read/28287966/deep-learning-segmentation-of-optical-microscopy-images-improves-3d-neuron-reconstruction
#18
Rongjian Li, Tao Zeng, Hanchuan Peng, Shuiwang Ji
Digital reconstruction, or tracing, of 3-dimensional (3D) neuron structure from microscopy images is a critical step toward reverse engineering the wiring and anatomy of a brain. Despite a number of prior attempts, this task remains very challenging, especially when images are contaminated by noise or contain discontinuous segments of neurite patterns. An approach for addressing such problems is to identify the locations of neuronal voxels using image segmentation methods prior to applying tracing or reconstruction techniques...
March 8, 2017: IEEE Transactions on Medical Imaging
https://www.readbyqxmd.com/read/28278461/automatic-quantification-of-tumour-hypoxia-from-multi-modal-microscopy-images-using-weakly-supervised-learning-methods
#19
Gustavo Carneiro, Tingying Peng, Christine Bayer, Nassir Navab
In recently published clinical trial results, hypoxia-modified therapies have been shown to provide better outcomes for cancer patients than standard cancer treatments. The development and validation of these hypoxia-modified therapies depend on an effective way of measuring tumour hypoxia, but a standardised measurement is currently unavailable in clinical practice. Different types of manual measurements have been proposed in clinical research, but in this paper we focus on a recently published approach that quantifies the number and proportion of hypoxic regions using high-resolution (immuno)fluorescence (IF) and hematoxylin and eosin (HE) stained images of a histological specimen of a tumour...
March 2, 2017: IEEE Transactions on Medical Imaging
https://www.readbyqxmd.com/read/28278457/collaborative-index-embedding-for-image-retrieval
#20
Wengang Zhou, Houqiang Li, Jian Sun, Qi Tian
In content-based image retrieval, the SIFT feature and features from deep convolutional neural networks (CNNs) have demonstrated promising performance. To fully exploit both visual features in a unified framework for effective and efficient retrieval, we propose a collaborative index embedding method that implicitly integrates their index matrices. We formulate the index embedding as an optimization problem from the perspective of neighborhood sharing and solve it with an alternating index update scheme. After the iterative embedding, only the embedded CNN index is kept for on-line querying, which yields a significant gain in retrieval accuracy at a very economical memory cost...
March 1, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence