Read by QxMD

convolutional neural network

Raghav Mehta, Aabhas Majumdar, Jayanthi Sivaswamy
Automated segmentation of cortical and noncortical human brain structures has been hitherto approached using nonrigid registration followed by label fusion. We propose an alternative approach for this using a convolutional neural network (CNN) which classifies a voxel into one of many structures. Four different kinds of two-dimensional and three-dimensional intensity patches are extracted for each voxel, providing local and global (context) information to the CNN. The proposed approach is evaluated on five different publicly available datasets which differ in the number of labels per volume...
April 2017: Journal of Medical Imaging
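The multi-scale patch input described in the abstract above can be illustrated with a minimal sketch. The patch sizes, the axial-slice choice, and the function name are assumptions for illustration only, not details taken from the paper:

```python
import numpy as np

def extract_patches(volume, x, y, z, small=11, large=25):
    """Extract 2D axial patches of two sizes centered on voxel (x, y, z).

    The small patch carries local detail and the large patch carries
    context, loosely following the multi-patch idea in the abstract;
    the paper's actual patch types and sizes are not reproduced here.
    """
    patches = []
    for size in (small, large):
        h = size // 2
        # pad so voxels near the border still yield full-size patches
        padded = np.pad(volume, h, mode="constant")
        # the voxel's center in padded coordinates is (x + h, y + h, z + h)
        axial = padded[x:x + size, y:y + size, z + h]
        patches.append(axial)
    return patches

vol = np.random.rand(40, 40, 40)
p_small, p_large = extract_patches(vol, 20, 20, 20)
```

In a full pipeline, each patch set would be fed to a separate CNN input branch and the branches merged before the per-voxel label softmax.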
Li Kuo Tan, Yih Miin Liew, Einly Lim, Robert A McLaughlin
Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints...
April 12, 2017: Medical Image Analysis
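The polar parameterization in the abstract above — describing a contour by radial distances from the LV centerpoint — can be sketched as follows. The angle count, sampling scheme, and function name are assumptions for illustration; the paper's exact parameterization is not reproduced here:

```python
import numpy as np

def contour_to_radii(contour, center, n_angles=36):
    """Convert a closed 2D contour to radial distances from a centerpoint,
    sampled at n_angles evenly spaced polar angles.

    A toy version of the parameterization described in the abstract: for
    each target angle, we take the radius of the contour point whose
    angle is nearest (angular wrap-around is ignored in this sketch).
    """
    d = contour - center
    angles = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    radii = np.hypot(d[:, 0], d[:, 1])
    bins = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    idx = np.argmin(np.abs(angles[None, :] - bins[:, None]), axis=1)
    return radii[idx]

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[10 * np.cos(theta), 10 * np.sin(theta)]  # radius-10 circle
r = contour_to_radii(circle, center=np.array([0.0, 0.0]))
```

A regression network would then predict this fixed-length radius vector directly, rather than classifying pixels, which is what lets domain constraints (e.g., contour smoothness and closedness) be expressed in the output parameterization itself.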
Jen-Tzung Chien, Yi-Ting Bao
The growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NN) crucial topics. Conventionally, an NN model is estimated from a set of one-way observations. Such a vectorized NN does not generalize to learning representations from multiway observations. The classification performance of a vectorized NN is constrained, because the temporal or spatial information in neighboring ways is disregarded. More parameters are required to learn the complicated data structure...
April 17, 2017: IEEE Transactions on Neural Networks and Learning Systems
Sebastien C Wong, Victor Stamatescu, Adam Gatt, David Kearney, Ivan Lee, Mark D McDonnell
This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm...
April 24, 2017: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
Yading Yuan, Ming Chao, Yeh-Chi Lo
Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between the lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this article, we present a fully automatic method for skin lesion segmentation by leveraging a 19-layer deep convolutional neural network (CNN) that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data...
April 18, 2017: IEEE Transactions on Medical Imaging
Xu-Yao Zhang, Fei Yin, Yan-Ming Zhang, Cheng-Lin Liu, Yoshua Bengio
Recent deep learning based approaches have achieved great success in handwriting recognition. Chinese is among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect of understanding a language; another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework that uses the recurrent neural network (RNN) as both a discriminative model for recognizing Chinese characters and a generative model for drawing (generating) Chinese characters...
April 18, 2017: IEEE Transactions on Pattern Analysis and Machine Intelligence
Paras Lakhani, Baskaran Sundaram
Purpose To evaluate the efficacy of deep convolutional neural networks (DCNNs) for detecting tuberculosis (TB) on chest radiographs. Materials and Methods Four deidentified, HIPAA-compliant datasets, exempted from institutional review board review and consisting of 1007 posteroanterior chest radiographs, were used in this study. The datasets were split into training (68.0%), validation (17.1%), and test (14.9%) sets. Two different DCNNs, AlexNet and GoogLeNet, were used to classify the images as having manifestations of pulmonary TB or as healthy...
April 24, 2017: Radiology
Sergi Valverde, Mariano Cabezas, Eloy Roura, Sandra González-Villà, Deborah Pareto, Joan C Vilanova, Lluís Ramió-Torrentà, Àlex Rovira, Arnau Oliver, Xavier Lladó
In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which is very appealing in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data...
April 18, 2017: NeuroImage
Peerajak Witoonchart, Prabhas Chongstitvatana
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer...
February 16, 2017: Neural Networks: the Official Journal of the International Neural Network Society
Timothy Doster, Abbie T Watnik
Orbital angular momentum (OAM) beams allow for increased channel capacity in free-space optical communication. Conventionally, these OAM beams are multiplexed together at a transmitter and then propagated through the atmosphere to a receiver where, due to their orthogonality properties, they are demultiplexed. We propose a technique to demultiplex these OAM-carrying beams by capturing an image of the unique multiplexing intensity pattern and training a convolutional neural network (CNN) as a classifier. This CNN-based demultiplexing method allows for simplicity of operation as alignment is unnecessary, orthogonality constraints are loosened, and costly optical hardware is not required...
April 20, 2017: Applied Optics
Maryam Rahnemoonfar, Clay Sheppard
Recent years have witnessed significant advancement in computer vision research based on deep learning. Success of these tasks largely depends on the availability of a large amount of training samples, and labeling the training samples is an expensive process. In this paper, we present a deep convolutional neural network trained on simulated data for yield estimation. Knowing the exact number of fruits, flowers, and trees helps farmers to make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force...
April 20, 2017: Sensors
Mark Hughes, Irene Li, Spyros Kotoulas, Toyotaro Suzumura
We present an approach to automatically classifying clinical text at the sentence level, using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.
2017: Studies in Health Technology and Informatics
Oren Z Kraus, Ben T Grys, Jimmy Ba, Yolanda Chong, Brendan J Frey, Charles Boone, Brenda J Andrews
Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, requiring extensive analysis. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization...
April 18, 2017: Molecular Systems Biology
Sang-Il Oh, Hang-Bong Kang
Multiple-object tracking is affected by various sources of distortion, such as occlusion, illumination variations, and motion changes. Overcoming these distortions by tracking on RGB frames alone has limitations because of the information lost in RGB data. To overcome these distortions, we propose a multiple-object fusion tracker (MOFT), which uses a combination of 3D point clouds and corresponding RGB frames. The MOFT uses a matching function initialized on large-scale external sequences to determine which candidates in the current frame match with the target object in the previous frame...
April 18, 2017: Sensors
Angel Cruz-Roa, Hannah Gilmore, Ajay Basavanhally, Michael Feldman, Shridar Ganesan, Natalie N C Shih, John Tomaszewski, Fabio A González, Anant Madabhushi
With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. The manual identification of presence and extent of breast cancer by a pathologist is critical for patient management for tumor staging and assessing treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, different staining and cutting protocols and different scanners...
April 18, 2017: Scientific Reports
Jinghang Gu, Fuqing Sun, Longhua Qian, Guodong Zhou
This article describes our work on the BioCreative-V chemical-disease relation (CDR) extraction task, which employed a maximum entropy (ME) model and a convolutional neural network model for relation extraction at inter- and intra-sentence level, respectively. In our work, relation extraction between entity concepts in documents was simplified to relation extraction between entity mentions. We first constructed pairs of chemical and disease mentions as relation instances for training and testing stages, then we trained and applied the ME model and the convolutional neural network model for inter- and intra-sentence level, respectively...
January 1, 2017: Database: the Journal of Biological Databases and Curation
Chenglin Liu, Peng Cui, Tao Huang
BACKGROUND: Cell cycle-regulated genes are expressed periodically with the cell cycle stages, and the identification and study of these genes can provide a deep understanding of the cell cycle process. High false-positive rates and low overlap between methods are major problems in cell cycle-regulated gene detection. METHODS: Here, a computational framework called DLGene was proposed for cell cycle-regulated gene detection. It is based on the convolutional neural network, a deep learning algorithm that represents raw data patterns without assumptions about their distribution...
April 17, 2017: Combinatorial Chemistry & High Throughput Screening
Teng Zhou, Guoqiang Han, Bing Nan Li, Zhizhe Lin, Edward J Ciaccio, Peter H Green, Jing Qin
BACKGROUND: Celiac disease is one of the most common diseases in the world. Capsule endoscopy is an alternative way to visualize the entire small intestine without invasiveness to the patient. It is useful to characterize celiac disease, but hours are needed to manually analyze the retrospective data of a single patient. Computer-aided quantitative analysis by a deep learning method helps in alleviating the workload during analysis of the retrospective videos. METHOD: Capsule endoscopy clips from 6 celiac disease patients and 5 controls were preprocessed for training...
April 8, 2017: Computers in Biology and Medicine
Yue Huang, Han Zheng, Chi Liu, Xinghao Ding, Gustavo Rohde
Epithelium-stroma classification is a necessary preprocessing step in histopathological image analysis. Current deep learning based recognition methods for histology data require collection of large volumes of labeled data in order to train a new neural network when there are changes to the image acquisition procedure. However, it is extremely expensive for pathologists to manually label sufficient volumes of data for each pathology study in a professional manner, which results in limitations in real-world applications...
April 6, 2017: IEEE Journal of Biomedical and Health Informatics
Xueyang Fu, Jiabin Huang, Xinghao Ding, Yinghao Liao, John Paisley
We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly-sized CNN...
April 6, 2017: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
(heart or cardiac or cardio*) AND arrest -"American Heart Association"
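The semantics of the operators above can be sketched with a toy matcher. This is only an illustration of how AND, OR, the minus sign, and stem wildcards combine; the function name and keyword arguments are invented for the example, and this is not QxMD's actual search engine:

```python
def matches(text, required=(), any_of=(), excluded=(), stems=()):
    """Toy boolean search: AND terms must all appear, OR terms need at
    least one match, '-' terms must be absent, and '*' stems match any
    word starting with the stem."""
    words = text.lower().split()
    if any(term not in words for term in required):
        return False
    if any_of and not any(term in words for term in any_of):
        return False
    if any(term in words for term in excluded):
        return False
    if stems and not any(w.startswith(s) for s in stems for w in words):
        return False
    return True

# water AND (cup OR glass)
hit = matches("a glass of water", required=["water"], any_of=["cup", "glass"])
```

A real engine would additionally tokenize punctuation, treat quoted phrases as single units, and rank results, but the combination logic is the same.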