Read by QxMD


Kevin L Keys, Gary K Chen, Kenneth Lange
A genome-wide association study (GWAS) correlates marker and trait variation in a study sample. Each subject is genotyped at a multitude of SNPs (single nucleotide polymorphisms) spanning the genome. Here, we assume that subjects are randomly collected unrelateds and that trait values are normally distributed or can be transformed to normality. Over the past decade, geneticists have been remarkably successful in applying GWAS analysis to hundreds of traits. The massive amount of data produced in these studies presents unique computational challenges...
September 6, 2017: Genetic Epidemiology
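The baseline computation a GWAS implies is a per-SNP association test between genotype dosage and the trait. A minimal sketch of such a single-SNP scan is below; the function name and the toy data are illustrative only and are not the authors' algorithm, which the truncated abstract does not specify.

```python
# Minimal single-SNP association scan: for each SNP, regress the trait on
# genotype dosage (0/1/2) by least squares and report the slope and the
# squared correlation. Illustrative sketch, not the paper's method.
def snp_scan(genotypes, trait):
    """genotypes: list of per-SNP dosage lists; trait: list of floats."""
    n = len(trait)
    ty = sum(trait) / n
    results = []
    for g in genotypes:
        gm = sum(g) / n
        sxy = sum((gi - gm) * (yi - ty) for gi, yi in zip(g, trait))
        sxx = sum((gi - gm) ** 2 for gi in g)
        syy = sum((yi - ty) ** 2 for yi in trait)
        beta = sxy / sxx if sxx else 0.0
        r2 = (sxy * sxy) / (sxx * syy) if sxx and syy else 0.0
        results.append((beta, r2))
    return results
```

Because every SNP is tested independently, this loop is trivially parallel across markers, which is exactly the structure GPU implementations exploit.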
Muaaz Gul Awan, Fahad Saeed
Modern high-resolution mass spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to the deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most of the sequential noise-reducing algorithms are impractical to use as a pre-processing step due to high time-complexity. In this paper, we present a GPU based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra...
August 2017: ACM-BCB: ACM Conference on Bioinformatics, Computational Biology and Biomedicine
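The abstract does not describe G-MSR's actual reduction scheme, but the simplest form of the idea, keeping only the most intense peaks of each spectrum, can be sketched as follows (function name and data are illustrative assumptions):

```python
def reduce_spectrum(peaks, k):
    """Keep the k most intense peaks of a spectrum, restoring m/z order.
    peaks: list of (mz, intensity) pairs. A toy stand-in for spectral
    dimensionality reduction; G-MSR's real quantization/selection scheme
    is not given in the abstract."""
    top = sorted(peaks, key=lambda p: p[1], reverse=True)[:k]
    return sorted(top)  # back to ascending m/z
```

Applied per spectrum, this filter is independent across the millions of spectra in an experiment, which is what makes a GPU batch implementation attractive.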
Markus Hadwiger, Ali K Al-Awami, Johanna Beyer, Marco Agus, Hanspeter Pfister
Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently...
August 29, 2017: IEEE Transactions on Visualization and Computer Graphics
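Empty-space skipping usually starts from a coarse per-block summary of the volume: a renderer consults the summary and skips ray segments through blocks that contain nothing above the transfer-function threshold. A one-dimensional stdlib sketch of building such an occupancy summary (names and parameters are illustrative, not the paper's data structure):

```python
def occupied_blocks(volume, block, threshold):
    """Coarse occupancy summary for empty-space skipping: for each
    fixed-size block of voxels, record whether any voxel exceeds the
    threshold. A renderer would skip ray segments through blocks flagged
    False. 1-D toy; real volumes use 3-D bricks and min/max trees."""
    return [max(volume[i:i + block]) > threshold
            for i in range(0, len(volume), block)]
```

The fragmentation problem the paper targets appears when fine structures scatter occupied voxels across many blocks, so few blocks end up flagged False.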
Siavash Malektaji, Ivan T Lima, Mauricio R Escobar I, Sherif S Sherif
BACKGROUND AND OBJECTIVE: An accurate and practical simulator for Optical Coherence Tomography (OCT) could be an important tool to study the underlying physical phenomena in OCT such as multiple light scattering. Recently, many researchers have investigated simulation of OCT of turbid media, e.g., tissue, using Monte Carlo methods. The main drawback of these earlier simulators is the long computational time required to produce accurate results. We developed a massively parallel simulator of OCT of inhomogeneous turbid media that obtains both Class I diffusive reflectivity, due to ballistic and quasi-ballistic scattered photons, and Class II diffusive reflectivity due to multiply scattered photons...
October 2017: Computer Methods and Programs in Biomedicine
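The Class I / Class II distinction is between photons that return to the detector after few (ballistic or quasi-ballistic) versus many scattering events. A deliberately crude 1-D Monte Carlo illustrating that bookkeeping (the cutoff of two scattering events and all parameters are illustrative assumptions, not the authors' simulator):

```python
import random

def classify_photons(n_photons, mean_free_path, max_depth, seed=0):
    """Toy 1-D Monte Carlo: photons take exponentially distributed steps,
    scatter isotropically (direction +/-1), and those returning to the
    surface are tallied as Class I (few scattering events) or Class II
    (multiply scattered). Purely illustrative of the two reflectivity
    classes from the abstract."""
    rng = random.Random(seed)
    class1 = class2 = 0
    for _ in range(n_photons):
        z, direction, scatters = 0.0, 1, 0
        for _ in range(100):                 # cap the walk length
            z += direction * rng.expovariate(1.0 / mean_free_path)
            if z <= 0.0:                     # back at the surface
                if scatters <= 2:
                    class1 += 1
                else:
                    class2 += 1
                break
            if z > max_depth:                # left the medium
                break
            direction = rng.choice((1, -1))  # isotropic 1-D scatter
            scatters += 1
    return class1, class2
```

Since every photon walk is independent, this is exactly the embarrassingly parallel workload a massively parallel OCT simulator maps onto GPU threads.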
Jyh-Da Wei, Hui-Jun Cheng, Chun-Yuan Lin, Jin Ye, Kuan-Yu Yeh
High-end graphics processing units (GPUs), such as NVIDIA Tesla/Fermi/Kepler series cards with thousands of cores per chip, have been widely applied to high-performance computing over the past decade. These desktop GPU cards must be installed in personal computers or servers with desktop CPUs, and the cost and power consumption of constructing a GPU cluster platform are very high. In recent years, NVIDIA released an embedded board, called Jetson Tegra K1 (TK1), which contains 4 ARM Cortex-A15 CPUs and 192 Compute Unified Device Architecture cores (belonging to the Kepler GPU family)...
2017: Evolutionary Bioinformatics Online
Sascha Brück, Mauro Calderara, Mohammad Hossein Bani-Hashemian, Joost VandeVondele, Mathieu Luisier
Massively parallel algorithms are presented in this paper to reduce the computational burden associated with quantum transport simulations from first-principles. The power of modern hybrid computer architectures is harvested in order to determine the open boundary conditions that connect the simulation domain with its environment and to solve the resulting Schrödinger equation. While the former operation takes the form of an eigenvalue problem that is solved by a contour integration technique on the available central processing units (CPUs), the latter can be cast into a linear system of equations that is simultaneously processed by SplitSolve, a two-step algorithm, on general-purpose graphics processing units (GPUs)...
August 21, 2017: Journal of Chemical Physics
Mona Riemenschneider, Alexander Herbst, Ari Rasch, Sergei Gorlatch, Dominik Heider
BACKGROUND: Multi-label classification has recently gained great attention in diverse fields of research, e.g., in biomedical applications such as protein function prediction or drug resistance testing in HIV. In this context, the concept of Classifier Chains has been shown to improve prediction accuracy, especially when applied as Ensemble Classifier Chains. However, these techniques lack computational efficiency when applied on large amounts of data, e.g., derived from next-generation sequencing experiments...
August 17, 2017: BMC Bioinformatics
David S Lawrie
Forward Wright-Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processing Unit (CPU), thus limiting their usefulness. However, the single-locus Wright-Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called "embarrassingly parallel," consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency...
September 7, 2017: G3: Genes—Genomes—Genetics
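The core of the single-locus forward algorithm is a per-generation binomial resampling of the allele frequency. A minimal haploid neutral sketch (stdlib only; the binomial draw is done by summing Bernoulli trials, and the function is a toy, not the paper's simulator):

```python
import random

def wright_fisher(n_individuals, p0, generations, seed=42):
    """Single-locus neutral Wright-Fisher: each generation, the new
    allele count is a binomial draw at the current frequency. Haploid
    toy model; selection, mutation, and demography are omitted."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        count = sum(rng.random() < p for _ in range(n_individuals))
        p = count / n_individuals
        if p in (0.0, 1.0):      # fixation or loss: absorbing states
            break
    return p
```

Each locus's trajectory is independent of every other locus, which is what makes the forward algorithm embarrassingly parallel across the many loci a GPU simulates at once.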
Kui Wu, Cem Yuksel
Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only incurs a high memory cost but also a high computation cost, even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure...
July 26, 2017: IEEE Transactions on Visualization and Computer Graphics
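The idea of generating fiber geometry on the fly from compact yarn-level parameters, rather than storing every fiber vertex, can be illustrated with the simplest possible case: one fiber as a helix around a straight yarn centerline. All names and parameters below are illustrative assumptions; the paper's procedural model also handles plies, fiber migration, and hairiness.

```python
import math

def fiber_points(ply_radius, fiber_phase, twist, n_samples):
    """Place one fiber as a helix around a straight yarn centerline
    along z: at parameter t the fiber sits at angle twist*t + fiber_phase
    on a circle of ply_radius. Toy sketch of procedural fiber-level
    detail generated from compact yarn-level parameters."""
    pts = []
    for i in range(n_samples):
        t = i / (n_samples - 1)
        a = twist * t + fiber_phase
        pts.append((ply_radius * math.cos(a), ply_radius * math.sin(a), t))
    return pts
```

Only `ply_radius`, `fiber_phase`, and `twist` need to reach the GPU; the vertices themselves are regenerated per frame, which is the data-transfer saving the abstract describes.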
Peter Eastman, Jason Swails, John D Chodera, Robert T McGibbon, Yutong Zhao, Kyle A Beauchamp, Lee-Ping Wang, Andrew C Simmonett, Matthew P Harrigan, Chaya D Stern, Rafal P Wiewiora, Bernard R Brooks, Vijay S Pande
OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM...
July 2017: PLoS Computational Biology
Monica Abella, Estefania Serrano, Javier Garcia-Blas, Ines García, Claudia de Molina, Jesus Carretero, Manuel Desco
The availability of digital X-ray detectors, together with advances in reconstruction algorithms, creates an opportunity for bringing 3D capabilities to conventional radiology systems. The downside is that reconstruction algorithms for non-standard acquisition protocols are generally based on iterative approaches that involve a high computational burden. The development of new flexible X-ray systems could benefit from computer simulations, which may enable performance to be checked before expensive real systems are implemented...
2017: PloS One
Jingjing Jiang, Linda Ahnen, Alexander Kalyanov, Scott Lindner, Martin Wolf, Salvador Sanchez Majos
The accuracy of images obtained by Diffuse Optical Tomography (DOT) could be substantially increased by the newly developed time resolved (TR) cameras. These devices result in unprecedented data volumes, which present a challenge to conventional image reconstruction techniques. In addition, many clinical applications require taking photons in air regions like the trachea into account, where the diffusion model fails. Image reconstruction techniques based on photon tracking are mandatory in those cases but have not been implemented so far due to computing demands...
2017: Advances in Experimental Medicine and Biology
Emre O Neftci, Charles Augustine, Somnath Paul, Georgios Detorakis
An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware...
2017: Frontiers in Neuroscience
Leonard A Harris, Marco S Nobile, James C Pino, Alexander L R Lubbock, Daniela Besozzi, Giancarlo Mauri, Paolo Cazzaniga, Carlos F Lopez
Summary: A major barrier to the practical utilization of large, complex models of biochemical systems is the lack of open-source computational tools to evaluate model behaviors over high-dimensional parameter spaces. This is due to the high computational expense of performing thousands to millions of model simulations required for statistical analysis. To address this need, we have implemented a user-friendly interface between cupSODA, a GPU-powered kinetic simulator, and PySB, a Python-based modeling and simulation framework...
June 28, 2017: Bioinformatics
J M Castillo-Secilla, M Saval-Calvo, L Medina-Valdès, S Cuenca-Asensi, A Martínez-Álvarez, C Sánchez, G Cristóbal
In this paper we present a method for autofocusing images of sputum smears taken from a microscope, which combines finding the optimal focus distance with an algorithm for extending the depth of field (EDoF). Our multifocus fusion method produces a unique image in which all the relevant objects of the analyzed scene are well focused, independently of their distance to the sensor. This process is computationally expensive, which makes its automation infeasible using traditional embedded processors. For this purpose, a low-cost optimized implementation is proposed using a limited-resource embedded GPU integrated in a cutting-edge NVIDIA system-on-chip...
March 1, 2017: Biomedical Optics Express
R Rana, S V Setlur Nagesh, D R Bednarek, S Rudin
Scatter is one of the most important factors affecting image quality in radiography. One of the best scatter reduction methods in dynamic imaging is an anti-scatter grid. However, when used with high resolution imaging detectors these grids may leave grid-line artifacts with increasing severity as detector resolution improves. The presence of such artifacts can mask important details in the image and degrade image quality. We have previously demonstrated that, in order to remove these artifacts, one must first subtract the residual scatter that penetrates through the grid and then divide out a reference grid image; however, this correction must be done fast so that corrected images can be provided in real-time to clinicians...
February 11, 2017: Proceedings of SPIE
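The two-step correction the abstract describes, subtract residual scatter, then divide out a reference grid image, is a pure per-pixel operation. A stdlib sketch on flat lists (the function name and `eps` guard are illustrative; a real-time version would run this elementwise on the GPU):

```python
def correct_grid_lines(raw, scatter, reference, eps=1e-9):
    """Grid-line artifact correction per the abstract: subtract the
    residual scatter that penetrates the grid, then divide out a
    reference (flat-field) image of the grid. Images are flat lists of
    floats; eps guards against division by zero in dead pixels."""
    return [(r - s) / max(g, eps)
            for r, s, g in zip(raw, scatter, reference)]
```

Because each output pixel depends only on the same pixel in the three inputs, the operation maps directly onto one GPU thread per pixel, which is what makes real-time correction feasible.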
Rok Češnovar, Erik Štrumbelj
We describe an efficient Bayesian parallel GPU implementation of two classic statistical models: the Lasso and multinomial logistic regression. We focus on parallelizing the key components: matrix multiplication, matrix inversion, and sampling from the full conditionals. Our GPU implementations of Bayesian Lasso and multinomial logistic regression achieve 100-fold speedups on mid-level and high-end GPUs. Substantial speedups of 25-fold can also be achieved on older and lower-end GPUs. Samplers are implemented in OpenCL and can be used on any type of GPU and other types of computational units, thereby being convenient and advantageous in practice compared to related work...
2017: PloS One
Wensen Feng, Peng Qiao, Yunjin Chen
The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing the recent state of the art...
June 20, 2017: IEEE Transactions on Cybernetics
Rafael S Oliveira, Bernardo M Rocha, Denise Burgarelli, Wagner Meira, Christakis Constantinides, Rodrigo Weber Dos Santos
The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. In order to speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs), and a sophisticated numerical method based on a new space-time adaptive algorithm...
June 21, 2017: International Journal for Numerical Methods in Biomedical Engineering
Jörg Kussmann, Christian Ochsenfeld
We present a parallel integral algorithm for two-electron contributions occurring in Hartree-Fock and hybrid density functional theory that allows for a strong-scaling parallelization on inhomogeneous compute clusters. With a particular focus on graphics processing units (GPUs), we show that our approach allows an efficient simultaneous use of CPUs and GPUs, although the different architectures demand conflicting strategies in order to ensure efficient program execution. Furthermore, we present a general strategy to use large basis sets like quadruple-ζ split valence on GPUs and investigate the balance between CPUs and GPUs depending on the l-quantum numbers of the corresponding basis functions...
June 21, 2017: Journal of Chemical Theory and Computation