Read by QxMD

convolutional neural network

Eisuke Ito, Takaaki Sato, Daisuke Sano, Etsuko Utagawa, Tsuyoshi Kato
A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach is to use a convolutional neural network that transforms a TEM image into a probabilistic map indicating where virus particles exist in the image. Our proposed approach automatically and simultaneously learns both the discriminative features and the classifier for virus particle detection by machine learning, in contrast to existing methods based on handcrafted features, which yield many false positives and require several postprocessing steps...
January 19, 2018: Food and Environmental Virology
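As a rough illustration of the probabilistic-map idea in the Ito et al. abstract above, the sketch below applies a single hand-written convolution filter followed by a sigmoid to turn an image into a per-pixel probability map. The paper's network learns many such filters end to end; the function names here (`conv2d_valid`, `probability_map`) and the single-filter setup are our own simplification, not the authors' code.

```python
import math

def conv2d_valid(img, kernel):
    """2-D 'valid' cross-correlation of a nested-list grayscale image."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += img[i + di][j + dj] * kernel[di][dj]
            out[i][j] = s
    return out

def probability_map(img, kernel, bias=0.0):
    """Squash the filter response through a sigmoid, giving a per-pixel
    probability that a particle is centred at that location."""
    resp = conv2d_valid(img, kernel)
    return [[1.0 / (1.0 + math.exp(-(v + bias))) for v in row]
            for row in resp]
```

A blob-averaging kernel run over an image with one bright spot yields a map whose maximum sits at the spot, which is the behaviour the learned network generalizes across many filters.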
Seung Seog Han, Gyeong Hun Park, Woohyung Lim, Myoung Shin Kim, Jung Im Na, Ilwoo Park, Sung Eun Chang
Although there have been reports of the successful diagnosis of skin disorders using deep learning, unrealistically large clinical image datasets are required for artificial intelligence (AI) training. We created datasets of standardized nail images using a region-based convolutional neural network (R-CNN) trained to distinguish the nail from the background. We used R-CNN to generate training datasets of 49,567 images, which we then used to fine-tune the ResNet-152 and VGG-19 models. The validation datasets comprised 100 and 194 images from Inje University (B1 and B2 datasets, respectively), 125 images from Hallym University (C dataset), and 939 images from Seoul National University (D dataset)...
2018: PloS One
Fahime Sheikhzadeh, Rabab K Ward, Dirk van Niekerk, Martial Guillaud
This paper addresses the problem of quantifying biomarkers in multi-stained tissues based on the color and spatial information of microscopy images of the tissue. A deep learning-based method that can automatically localize and quantify the regions expressing biomarker(s) in any selected area on a whole slide image is proposed. The deep learning network, which we refer to as Whole Image (WI)-Net, is a fully convolutional network whose input is the RGB color image of a tissue and whose output is a map showing the locations of each biomarker...
2018: PloS One
Tobias Zimmermann, Bertram Taetz, Gabriele Bleser
Human body motion analysis based on wearable inertial measurement units (IMUs) receives a lot of attention from both the research and industrial communities. This is due to its significant role in, for instance, mobile health systems, sports and human-computer interaction. In sensor-based activity recognition, one of the major issues for obtaining reliable results is the sensor placement/assignment on the body. For inertial motion capture (joint kinematics estimation) and analysis, the IMU-to-segment (I2S) assignment and alignment are central issues for obtaining biomechanical joint angles...
January 19, 2018: Sensors
Grzegorz Psuj
Nowadays, there is a strong demand for inspection systems that integrate both high sensitivity under various testing conditions and advanced processing allowing automatic identification of the examined object's state and detection of threats. This paper presents the possibility of utilizing a magnetic multi-sensor matrix transducer for characterization of defective areas in steel elements, and a deep-learning-based algorithm for integrating the data and finally identifying the object's state. The transducer allows sensing of the magnetic vector at a single location in different directions...
January 19, 2018: Sensors
Shancheng Fang, Hongtao Xie, Zhineng Chen, Yizhi Liu, Yan Li
How to read Uyghur text from biomedical graphic images is a challenging problem due to the complex layout and cursive writing of Uyghur. In this paper, we propose a system that extracts text from Uyghur biomedical images and matches the text against a specific lexicon for semantic analysis. The proposed system has the following distinctive properties: first, it is an integrated system that detects and crops the Uyghur text lines using a single fully convolutional neural network, and then matches keywords in the lexicon with a well-designed matching network...
January 19, 2018: Neuroinformatics
Nicholas Lubbers, Turab Lookman, Kipton Barros
We apply recent advances in machine learning and computer vision to a central problem in materials informatics: the statistical representation of microstructural images. We use activations in a pretrained convolutional neural network to provide a high-dimensional characterization of a set of synthetic microstructural images. Next, we use manifold learning to obtain a low-dimensional embedding of this statistical characterization. We show that the low-dimensional embedding extracts the parameters used to generate the images...
November 2017: Physical Review. E
Lukas Mosser, Olivier Dubrule, Martin J Blunt
To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets...
October 2017: Physical Review. E
Ester Bonmati, Yipeng Hu, Nikhil Sindhwani, Hans Peter Dietz, Jan D'hooge, Dean Barratt, Jan Deprest, Tom Vercauteren
Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which is applied here for the first time in medical imaging with CNNs...
April 2018: Journal of Medical Imaging
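The SELU activation mentioned in the Bonmati et al. abstract above has a simple closed form; a minimal sketch, using the standard constants from the self-normalizing-networks literature (the segmentation network itself is not reproduced here):

```python
import math

# Standard SELU constants from the self-normalizing networks literature.
SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    """Scaled exponential linear unit: lambda * x for positive inputs,
    lambda * alpha * (e^x - 1) for negative ones. The negative branch
    saturates at -lambda * alpha, which drives self-normalization."""
    if x > 0:
        return SELU_LAMBDA * x
    return SELU_LAMBDA * SELU_ALPHA * (math.exp(x) - 1.0)
```

Unlike ReLU, SELU's output has a fixed point with zero mean and unit variance under the right weight initialization, which removes the need for explicit batch normalization.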
Mehmet Ufuk Dalmış, Suzan Vreemann, Thijs Kooi, Ritse M Mann, Nico Karssemeijer, Albert Gubern-Mérida
Current computer-aided detection (CADe) systems for contrast-enhanced breast MRI rely on both spatial information obtained from the early-phase and temporal information obtained from the late-phase of the contrast enhancement. However, late-phase information might not be available in a screening setting, such as in abbreviated MRI protocols, where acquisition is limited to early-phase scans. We used deep learning to develop a CADe system that exploits the spatial information obtained from the early-phase scans...
January 2018: Journal of Medical Imaging
Toshiaki Hirasawa, Kazuharu Aoyama, Tetsuya Tanimoto, Soichiro Ishihara, Satoki Shichijo, Tsuyoshi Ozawa, Tatsuya Ohnishi, Mitsuhiro Fujishiro, Keigo Matsuo, Junko Fujisaki, Tomohiro Tada
BACKGROUND: Image recognition using artificial intelligence with deep learning through convolutional neural networks (CNNs) has dramatically improved and been increasingly applied to medical fields for diagnostic imaging. We developed a CNN that can automatically detect gastric cancer in endoscopic images. METHODS: A CNN-based diagnostic system was constructed based on Single Shot MultiBox Detector architecture and trained using 13,584 endoscopic images of gastric cancer...
January 15, 2018: Gastric Cancer
Long-Gang Pang, Kai Zhou, Nan Su, Hannah Petersen, Horst Stöcker, Xin-Nian Wang
A primordial state of matter consisting of free quarks and gluons that existed in the early universe a few microseconds after the Big Bang is also expected to form in high-energy heavy-ion collisions. Determining the equation of state (EoS) of such a primordial matter is the ultimate goal of high-energy heavy-ion experiments. Here we use supervised learning with a deep convolutional neural network to identify the EoS employed in the relativistic hydrodynamic simulations of heavy ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in quantum chromodynamics...
January 15, 2018: Nature Communications
Quynh C Nguyen, Mehdi Sajjadi, Matt McCullough, Minh Pham, Thu T Nguyen, Weijun Yu, Hsien-Wen Meng, Ming Wen, Feifei Li, Ken R Smith, Kim Brunisholz, Tolga Tasdizen
BACKGROUND: Neighbourhood quality has been connected with an array of health issues, but neighbourhood research has been limited by the lack of methods to characterise large geographical areas. This study uses innovative computer vision methods and a new big data source of street view images to automatically characterise neighbourhood built environments. METHODS: A total of 430 000 images were obtained using Google's Street View Image API for Salt Lake City, Chicago and Charleston...
January 15, 2018: Journal of Epidemiology and Community Health
Juhua Zhang, Wenbo Peng, Lei Wang
Motivation: Nucleosome positioning plays significant roles in proper genome packing and its accessibility for transcription regulation. Despite a multitude of nucleosome positioning resources available online, including experimental datasets of genome-wide nucleosome occupancy profiles and computational tools for analysing these data, the complex language of eukaryotic nucleosome positioning remains incompletely understood. Results: Here, we address this challenge using an approach based on a state-of-the-art machine learning method...
January 10, 2018: Bioinformatics
Paolo Napoletano, Flavio Piccoli, Raimondo Schettini
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set...
January 12, 2018: Sensors
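The self-similarity idea in the Napoletano et al. abstract above can be sketched very compactly: score each sub-region by its distance to the most similar anomaly-free patch in a dictionary. The paper compares CNN feature vectors; the sketch below works on any feature vectors, and the names (`anomaly_score`, `anomaly_map`) and the plain L2 distance are our simplifications, not the authors' pipeline.

```python
import math

def l2(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def anomaly_score(patch_feature, dictionary):
    """Degree of abnormality = distance to the nearest anomaly-free
    patch in the dictionary (CNN features in the paper)."""
    return min(l2(patch_feature, d) for d in dictionary)

def anomaly_map(patch_features, dictionary, threshold):
    """Flag each sub-region whose nearest-neighbour distance exceeds
    the threshold as anomalous."""
    return [anomaly_score(f, dictionary) > threshold
            for f in patch_features]
```

Because normal texture is self-similar, defect-free patches sit close to the dictionary and anomalies stand out as large nearest-neighbour distances.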
Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Simon J Thorpe, Timothée Masquelier
Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers...
December 23, 2017: Neural Networks: the Official Journal of the International Neural Network Society
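For readers unfamiliar with STDP as used in the Kheradpisheh et al. abstract above, the classic pair-based rule is easy to state: a presynaptic spike shortly before a postsynaptic one strengthens the synapse, the reverse timing weakens it. The exponential form and constants below are the textbook version for illustration; the paper itself uses a simplified STDP variant, and these parameter values are not taken from it.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.

    dt = t_post - t_pre (ms). Positive dt (pre fires before post)
    potentiates; negative dt depresses. Magnitudes decay
    exponentially with the timing gap."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

Accumulating these updates over many spike pairs is what lets each convolutional layer in such a network discover visual features without any labels.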
Shyam Prasad Adhikari, Changju Yang, Krzysztof Slot, Hyongsuk Kim
This paper presents a vision sensor-based solution to the challenging problem of detecting and following trails in highly unstructured natural environments like forests, rural areas and mountains, using a combination of a deep neural network and dynamic programming. The deep neural network (DNN) concept has recently emerged as a very effective tool for processing vision sensor signals. A patch-based DNN is trained with supervised data to classify fixed-size image patches into "trail" and "non-trail" categories, and reshaped into a fully convolutional architecture to produce a trail segmentation map for arbitrary-sized input images...
January 10, 2018: Sensors
Seigo Ito, Shigeyoshi Hiratsuka, Mitsuhiko Ohta, Hiroyuki Matsubara, Masaru Ogawa
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light...
January 10, 2018: Sensors
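The time-of-flight principle behind the SPAD LIDAR in the Ito et al. abstract above reduces to one line of arithmetic: the laser pulse travels to the target and back, so range is half the round-trip distance at the speed of light. A minimal sketch (our own helper name, not the sensor's API):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_seconds):
    """Time-of-flight ranging: convert the measured round-trip time of
    a laser pulse into the one-way distance to the target."""
    return C * round_trip_seconds / 2.0
```

At these speeds timing resolution is everything: a 100 ns round trip already corresponds to roughly 15 m, which is why single-photon detectors with picosecond-scale timing are attractive for compact LIDAR.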
Pai Peng, Xiaojin Zhao, Xiaofang Pan, Wenbin Ye
In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data...
January 8, 2018: Sensors
Iain Marshall, Anna Noel Storr, Joël Kuiper, James Thomas, Byron C Wallace
Machine learning (ML) algorithms have proven highly accurate for identifying Randomized Controlled Trials (RCTs), but are not used much in practice, in part because the best way to make use of the technology in a typical workflow is unclear. In this work we evaluate ML models for RCT classification (Support Vector Machines [SVMs], Convolutional Neural Networks [CNNs], and ensemble approaches). We trained and optimised SVM and CNN models on the titles and abstracts of the Cochrane Crowd RCT set. We evaluated the models on an external dataset (Clinical Hedges), allowing direct comparison with traditional database search filters...
January 4, 2018: Research Synthesis Methods
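At prediction time, the linear SVM text classifiers evaluated in the Marshall et al. abstract above reduce to a bag-of-words dot product plus a threshold. The sketch below shows that form; the tokenizer, function names, and toy weights are ours for illustration, since real weights come from training on a labelled corpus such as Cochrane Crowd.

```python
def tokenize(text):
    """Crude whitespace tokenizer keeping alphabetic tokens only."""
    return [t for t in text.lower().split() if t.isalpha()]

def score(text, weights, bias=0.0):
    """Linear text classifier: the bag-of-words dot product a trained
    linear SVM reduces to at prediction time."""
    return bias + sum(weights.get(tok, 0.0) for tok in tokenize(text))

def is_rct(text, weights, threshold=0.0):
    """Classify a title/abstract as a randomized controlled trial."""
    return score(text, weights) > threshold

# Toy weights for illustration only -- not learned from any corpus.
toy_weights = {"randomized": 2.0, "randomised": 2.0, "trial": 1.5,
               "placebo": 1.0, "review": -1.5, "cohort": -1.0}
```

In a screening workflow the continuous `score` matters as much as the binary decision: ranking abstracts by score lets reviewers trade recall against workload, which is the calibration question the paper studies.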