Read by QxMD

Deep convolutional neural network

Hao Yang, Junran Zhang, Qihong Liu, Yi Wang
BACKGROUND: Recently, deep learning technologies have rapidly expanded into medical image analysis, including both disease detection and classification. Migraine is a common, disabling neurological disorder, typically characterized by unilateral, throbbing and pulsating headaches. Unfortunately, a large number of migraineurs do not receive an accurate diagnosis when traditional diagnostic criteria based on the guidelines of the International Headache Society are used...
October 11, 2018: Biomedical Engineering Online
Sarah S Aboutalib, Aly A Mohamed, Wendie A Berg, Margarita L Zuley, Jules H Sumkin, Shandong Wu
Purpose: False positives in digital mammography screening lead to high recall rates, resulting in unnecessary medical procedures for patients and added health care costs. This study aimed to investigate deep learning methods for distinguishing recalled-but-benign mammography images from negative exams and those with malignancy. Experimental Design: Deep learning convolutional neural network (CNN) models were constructed to classify mammography images into malignant (breast cancer), negative (breast cancer free), and recalled-benign categories...
October 11, 2018: Clinical Cancer Research: An Official Journal of the American Association for Cancer Research
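The three-category decision in the study above can be sketched as a final softmax step over per-class scores: the network emits one logit per category and the image is assigned to the highest-probability class. This is a minimal, hypothetical sketch; the class names follow the abstract, and the logit values are illustrative only, not the paper's model.

```python
import math

# Categories from the abstract: malignant, negative, recalled-benign.
CLASSES = ["malignant", "negative", "recalled-benign"]

def softmax(logits):
    """Convert raw per-class logits to a probability distribution."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return (class name, probability) for the top-scoring category."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]
```

For example, `classify([0.2, 2.1, -0.5])` picks "negative", since its logit dominates after the softmax.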
Wei Guo, Ran Wu, Yanhua Chen, Xinyan Zhu
With the rapid development of indoor localization in recent years, signals of opportunity have become a reliable and convenient source for indoor localization. A mobile device can not only capture images of the indoor environment in real time, but can also obtain one or more different types of signals of opportunity. Based on this, we design a convolutional neural network (CNN) model that concatenates features of image data and signals of opportunity for localization, using indoor scene datasets and simulating the indoor location probability...
October 10, 2018: Sensors
Zohaib Iqbal, Da Luo, Peter Henry, Samaneh Kazemifar, Timothy Rozario, Yulong Yan, Kenneth Westover, Weiguo Lu, Dan Nguyen, Troy Long, Jing Wang, Hak Choy, Steve Jiang
Deep learning has started to revolutionize several different industries, and the applications of these methods in medicine are now becoming more commonplace. This study focuses on investigating the feasibility of tracking patients and clinical staff wearing Bluetooth Low Energy (BLE) tags in a radiation oncology clinic using artificial neural networks (ANNs) and convolutional neural networks (CNNs). The performance of these networks was compared to received signal strength indicator (RSSI) thresholding and triangulation...
2018: PloS One
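The RSSI-thresholding baseline that the tracking study compares against can be sketched with the standard log-distance path-loss model, which converts received signal strength to an approximate distance from a BLE beacon. The `tx_power` (RSSI at 1 m) and path-loss exponent `n` are assumed, environment-dependent values, not figures from the paper.

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Estimate distance (metres) from an RSSI reading in dBm
    using the log-distance path-loss model: d = 10^((P_tx - RSSI) / (10 n))."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def in_zone(rssi, threshold_m=3.0, tx_power=-59.0, n=2.0):
    """Simple thresholding: is the tag within threshold_m of the beacon?"""
    return rssi_to_distance(rssi, tx_power, n) <= threshold_m
```

With these defaults, a reading of -59 dBm maps to roughly 1 m and -79 dBm to roughly 10 m; triangulation would combine such distance estimates from several beacons.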
Ghanahshyam B Kshirsagar, Narendra D Londhe
The performance of an existing Devanagari Script (DS) input-based P300 speller using conventional machine learning techniques suffers from a low information transfer rate (ITR). This is due to its large display, an 8 x 8 row-column (RC) paradigm, which introduces issues such as the crowding effect, adjacency, fatigue, and task difficulty, and requires a large number of trials for character recognition. For P300 detection, deep learning algorithms have recently shown state-of-the-art performance compared with conventional machine learning algorithms...
October 9, 2018: IEEE Transactions on Bio-medical Engineering
Emily E Cust, Alice J Sweeting, Kevin Ball, Sam Robertson
Objective assessment of an athlete's performance is of importance in elite sports to facilitate detailed analysis. The implementation of automated detection and recognition of sport-specific movements overcomes the limitations associated with manual performance analysis methods. The objective of this study was to systematically review the literature on machine and deep learning for sport-specific movement recognition using inertial measurement unit (IMU) and/or computer vision data inputs. A search of multiple databases was undertaken...
October 11, 2018: Journal of Sports Sciences
Valentina Pedoia, Berk Norman, Sarah N Mehany, Matthew D Bucknor, Thomas M Link, Sharmila Majumdar
BACKGROUND: Semiquantitative assessment of MRI plays a central role in musculoskeletal research; however, in the clinical setting MRI reports often tend to be subjective and qualitative. Grading schemes utilized in research are not used clinically because they are extraordinarily time-consuming and infeasible in routine practice. PURPOSE: To evaluate the ability of deep-learning models to detect and stage the severity of meniscus and patellofemoral cartilage lesions in osteoarthritis and anterior cruciate ligament (ACL) subjects...
October 10, 2018: Journal of Magnetic Resonance Imaging: JMRI
Shujun Liang, Fan Tang, Xia Huang, Kaifan Yang, Tao Zhong, Runyue Hu, Shangqing Liu, Xinrui Yuan, Yu Zhang
OBJECTIVE: Accurate detection and segmentation of organs at risk (OARs) in CT images is the key step for efficient planning of radiation therapy for nasopharyngeal carcinoma (NPC) treatment. We develop a fully automated deep-learning-based method (termed the organs-at-risk detection and segmentation network (ODS net)) for CT images and investigate the ODS net's performance in automated detection and segmentation of OARs. METHODS: The ODS net consists of two convolutional neural networks (CNNs)...
October 9, 2018: European Radiology
Babak Rahmani, Damien Loterie, Georgia Konstantinou, Demetri Psaltis, Christophe Moser
Multimode fibers (MMFs) are an example of a highly scattering medium, which scramble the coherent light propagating within them to produce seemingly random patterns. Thus, for applications such as imaging and image projection through an MMF, careful measurements of the relationship between the inputs and outputs of the fiber are required. We show, as a proof of concept, that a deep neural network can learn the input-output relationship in a 0.75 m long MMF. Specifically, we demonstrate that a deep convolutional neural network (CNN) can learn the nonlinear relationships between the amplitude of the speckle pattern (phase information lost) obtained at the output of the fiber and the phase or the amplitude at the input of the fiber...
2018: Light, Science & Applications
Joseph R England, Jordan S Gross, Eric A White, Dakshesh B Patel, Jasmin T England, Phillip M Cheng
OBJECTIVE: The purpose of this study is to determine whether a deep convolutional neural network (DCNN) trained on a dataset of limited size can accurately diagnose traumatic pediatric elbow effusion on lateral radiographs. MATERIALS AND METHODS: A total of 901 lateral elbow radiographs from 882 pediatric patients who presented to the emergency department with upper extremity trauma were divided into a training set (657 images), a validation set (115 images), and an independent test set (129 images)...
October 9, 2018: AJR. American Journal of Roentgenology
Qian Tao, Wenjun Yan, Yuanyuan Wang, Elisabeth H M Paiman, Denis P Shamonin, Pankaj Garg, Sven Plein, Lu Huang, Liming Xia, Marek Sramko, Jarsolav Tintera, Albert de Roos, Hildo J Lamb, Rob J van der Geest
Purpose To develop a deep learning-based method for fully automated quantification of left ventricular (LV) function from short-axis cine MR images and to evaluate its performance in a multivendor and multicenter setting. Materials and Methods This retrospective study included cine MRI data sets obtained from three major MRI vendors in four medical centers from 2008 to 2016. Three convolutional neural networks (CNNs) with the U-NET architecture were trained on data sets of increasing variability: (a) a single-vendor, single-center, homogeneous cohort of 100 patients (CNN1); (b) a single-vendor, multicenter, heterogeneous cohort of 200 patients (CNN2); and (c) a multivendor, multicenter, heterogeneous cohort of 400 patients (CNN3)...
October 9, 2018: Radiology
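Segmentation quality in studies like the left-ventricle work above is conventionally reported with the Dice similarity coefficient between the predicted and reference masks. A minimal sketch over binary masks given as flat 0/1 lists (the actual study operates on 2-D short-axis MR masks; this generic metric is not specific to its U-NET models):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 1.0 if size == 0 else 2.0 * inter / size
```

A perfect overlap scores 1.0; `dice([1, 1, 0, 0], [1, 0, 0, 0])` scores 2/3, since one overlapping voxel is shared across three foreground voxels in total.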
Syed Muhammad Anwar, Muhammad Majid, Adnan Qayyum, Muhammad Awais, Majdi Alnowami, Muhammad Khurram Khan
The science of solving clinical problems by analyzing images generated in clinical practice is known as medical image analysis. The aim is to extract information in an effective and efficient manner for improved clinical diagnosis. Recent advances in the field of biomedical engineering have made medical image analysis one of the top research and development areas. One of the reasons for this advancement is the application of machine learning techniques to the analysis of medical images. Deep learning is successfully used as a tool for machine learning, where a neural network is capable of automatically learning features...
October 8, 2018: Journal of Medical Systems
ZhiFei Lai, HuiFang Deng
Medical image classification is a key technique of Computer-Aided Diagnosis (CAD) systems. Traditional methods rely mainly on shape, color, and/or texture features and their combinations, most of which are problem-specific and have been shown to be complementary for medical images; this leads to systems that lack the ability to represent high-level problem-domain concepts and that generalize poorly. Recent deep learning methods provide an effective way to construct an end-to-end model that can compute final classification labels from the raw pixels of medical images...
2018: Computational Intelligence and Neuroscience
Isadora Cardoso, Eliana Almeida, Hector Allende-Cid, Alejandro C Frery, Rangaraj M Rangayyan, Paulo M Azevedo-Marques, Heitor S Ramos
BACKGROUND: Diffuse lung diseases (DLDs) are a diverse group of pulmonary disorders, characterized by inflammation of lung tissue, which may lead to permanent loss of the ability to breathe and to death. Distinguishing among these diseases is challenging to physicians due to their wide variety and unknown causes. Computer-aided diagnosis (CAD) is a useful approach to improve diagnostic accuracy by combining information provided by experts with Machine Learning (ML) methods. OBJECTIVES: To explore the potential of dimensionality reduction combined with ML methods for the diagnosis of DLDs, and to improve classification accuracy over state-of-the-art methods...
October 8, 2018: Methods of Information in Medicine
Lei Wang, Zhu-Hong You, De-Shuang Huang, Fengfeng Zhou
Emerging evidence has shown that RNAs play a crucial role in many cellular processes, and their biological functions are primarily achieved by binding to a variety of proteins. High-throughput biological experiments provide valuable information for the initial identification of RNA-protein interactions (RPIs), but as RPI networks grow more complex, this approach becomes expensive and time-consuming. Therefore, there is an urgent need for fast and reliable methods to predict RNA-protein interactions...
October 5, 2018: IEEE/ACM Transactions on Computational Biology and Bioinformatics
Yan Huang, Jingsong Xu, Qiang Wu, Zhedong Zheng, Zhaoxiang Zhang, Jian Zhang
Sufficient training data is normally required to train deep learning models. However, owing to the expensive manual process of labelling large numbers of images (i.e., annotation), the amount of available training data (i.e., real data) is always limited. To produce more data for training a deep network, a Generative Adversarial Network (GAN) can be used to generate artificial sample data (i.e., generated data). However, the generated data usually lack annotation labels. To solve this problem, in this paper we propose a virtual label, called the Multi-pseudo Regularized Label (MpRL), and assign it to the generated data...
October 8, 2018: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
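The pseudo-labelling idea above can be sketched as follows: real images keep their one-hot labels, while GAN-generated images, which have no ground truth, receive a virtual soft label spread over the K real classes. A uniform distribution is the simplest such label and is only an illustrative stand-in here; the paper's MpRL assigns the per-class weights in a more refined, regularized way.

```python
def one_hot(k, num_classes):
    """Standard one-hot label for a real, annotated image of class k."""
    return [1.0 if i == k else 0.0 for i in range(num_classes)]

def pseudo_label(num_classes):
    """Uniform virtual label for an unlabelled GAN-generated image:
    equal probability mass on every real class."""
    return [1.0 / num_classes] * num_classes
```

Training then mixes real samples carrying `one_hot` targets with generated samples carrying `pseudo_label` targets, so the generated data regularize the network without asserting a single (wrong) class.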
Yang Wen, Bin Sheng, Ping Li, Weiyao Lin, David Dagan Feng
Depth image super-resolution is a significant yet challenging task. In this paper, we introduce a novel deep color-guided coarse-to-fine convolutional neural network (CNN) framework to address this problem. First, we present a data-driven filter method to approximate the ideal filter for depth image super-resolution instead of hand-designed filters. Based on large data samples, the learned filter is more accurate and stable for upsampling depth images. Second, we introduce a coarse-to-fine CNN to learn different sizes of filter kernels...
October 8, 2018: IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society
Shi Chen, Qi Zhao
The recent surge of Convolutional Neural Networks (CNNs) has brought successes across various applications. However, these successes are accompanied by a significant increase in computational cost and demand for computational resources, which critically hampers the use of complex CNNs on devices with limited computational power. In this work, we propose a feature-representation-based layer-wise pruning method that aims at reducing complex CNNs to more compact ones with equivalent performance. Unlike previous parameter-pruning methods that prune connection-wise or filter-wise based on weight information, our method identifies redundant parameters by investigating the features learned in the convolutional layers, and the pruning process operates at the layer level...
October 8, 2018: IEEE Transactions on Pattern Analysis and Machine Intelligence
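The contrast drawn above — pruning from learned features rather than raw weights — can be illustrated with a toy filter-ranking step: score each filter by the mean absolute response of the feature map it produces, then keep only the top-scoring filters. This is a generic feature-based ranking sketch, not the paper's layer-level method; `keep_ratio` and the scoring rule are assumptions.

```python
def filter_scores(activations):
    """Mean absolute feature-map response per filter.
    `activations` is a list of per-filter response lists."""
    return [sum(abs(a) for a in acts) / len(acts) for acts in activations]

def filters_to_keep(activations, keep_ratio=0.5):
    """Indices of the highest-scoring filters, in their original order."""
    scores = filter_scores(activations)
    k = max(1, int(len(scores) * keep_ratio))
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return sorted(top)
```

For four filters with responses `[[0.1, 0.1], [1.0, 1.0], [0.5, 0.5], [0.0, 0.0]]` and a 50% keep ratio, filters 1 and 2 survive; filters with near-zero responses contribute little to downstream features and are the natural pruning candidates.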
Sijia Liu, Feichen Shen, Ravikumar Komandur Elayavilli, Yanshan Wang, Majid Rastegar-Mojarad, Vipin Chaudhary, Hongfang Liu
Relation extraction is an important task in the field of natural language processing. In this paper, we describe our approach for the BioCreative VI Task 5: text mining chemical-protein interactions. We investigate multiple deep neural network (DNN) models, including convolutional neural networks, recurrent neural networks (RNNs), and attention-based RNNs (ATT-RNNs), to extract chemical-protein relations. Our experimental results indicate that ATT-RNN models outperform the same models without attention, and the ATT-gated recurrent unit (ATT-GRU) achieves the best micro average F1 score of 0...
January 1, 2018: Database: the Journal of Biological Databases and Curation
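The micro-averaged F1 metric used to score systems like the one above pools true positives, false positives, and false negatives over all relation classes before computing precision and recall. A minimal sketch over paired (gold, predicted) labels, with a `negative` label standing for "no relation" (the label names here are illustrative, not BioCreative's):

```python
def micro_f1(gold, pred, negative="none"):
    """Micro-averaged F1 over relation labels, pooling counts across classes."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        if p != negative and p == g:
            tp += 1                       # correct relation extracted
        elif p != negative and p != g:
            fp += 1                       # spurious or wrong-class relation
            if g != negative:
                fn += 1                   # the gold relation was also missed
        elif p == negative and g != negative:
            fn += 1                       # gold relation missed entirely
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

For gold labels `["a", "b", "none", "a"]` against predictions `["a", "none", "none", "b"]`, precision is 1/2 and recall 1/3, giving F1 = 0.4.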
Imran Razzak, Muhammad Imran, Guandong Xu
Manual segmentation of brain tumors for cancer diagnosis from MRI images is a difficult, tedious, and time-consuming task. The accuracy and robustness of brain tumor segmentation are therefore crucial for diagnosis, treatment planning, and treatment outcome evaluation. Most automatic brain tumor segmentation methods use hand-designed features. Similarly, traditional deep learning methods such as Convolutional Neural Networks require a large amount of annotated data to learn from, which is often difficult to obtain in the medical domain...
October 4, 2018: IEEE Journal of Biomedical and Health Informatics