Read by QxMD


Shuo Chen, Yishi Xing, Jian Kang, Peter Kochunov, L Elliot Hong
Brain connectivity studies often refer to brain areas as graph nodes and connections between nodes as edges, and aim to identify neuropsychiatric phenotype-related connectivity patterns. When performing group-level brain connectivity alteration analyses, it is critical to model the dependence structure between multivariate connectivity edges to achieve accurate and efficient estimates of model parameters. However, specifying and estimating dependencies between connectivity edges presents formidable challenges because (i) the dimensionality of parameters in the covariance matrix is high (of the order of the fourth power of the number of nodes); (ii) the covariance between a pair of edges involves four nodes with spatial location information; and (iii) the dependence structure between edges can be related to unknown network topological structures...
September 10, 2018: Biostatistics
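The fourth-power growth mentioned in point (i) is easy to make concrete. A minimal arithmetic sketch (toy illustration, not from the paper; the node counts below are assumptions):

```python
def edge_count(nodes):
    # Number of undirected connectivity edges between brain-area nodes.
    return nodes * (nodes - 1) // 2

def covariance_params(nodes):
    # Free parameters in the symmetric edge-by-edge covariance matrix,
    # which grows on the order of nodes**4.
    e = edge_count(nodes)
    return e * (e + 1) // 2

for v in (10, 50, 90):  # 90 nodes is a typical atlas size (assumption)
    print(v, edge_count(v), covariance_params(v))
```

Even a modest 90-node parcellation implies roughly 8 million free covariance parameters, which is why structured dependence modeling matters here.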
Shirin Golchi, Kristian Thorlund
Response adaptive randomized clinical trials have gained popularity due to their flexibility for adjusting design components, including arm allocation probabilities, at any point in the trial according to the intermediate results. In the Bayesian framework, allocation probabilities to different treatment arms are commonly defined as functionals of the posterior distributions of parameters of the outcome distribution for each treatment. In a non-conjugate model, however, repeated updates of the posterior distribution can be computationally intensive...
September 10, 2018: Biostatistics
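In the conjugate binary-outcome case, a posterior-based allocation probability can be estimated cheaply by Monte Carlo. A hedged sketch (the Beta-Binomial model and this particular allocation rule are illustrative assumptions, not the authors' design):

```python
import random

def allocation_prob(succ_a, n_a, succ_b, n_b, draws=5000, seed=0):
    """Estimate P(arm A's response rate exceeds arm B's) under
    independent Beta(1, 1) priors, via posterior draws."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_a = rng.betavariate(1 + succ_a, 1 + n_a - succ_a)
        p_b = rng.betavariate(1 + succ_b, 1 + n_b - succ_b)
        wins += p_a > p_b
    return wins / draws

# Interim data: 8/10 responses on A, 2/10 on B -> allocation favours A.
print(allocation_prob(8, 10, 2, 10))
```

In a non-conjugate model each such interim update would require a fresh posterior sampling run, which is the computational burden the abstract points to.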
Sarah Fletcher Mercaldo, Jeffrey D Blume
Missing data are a common problem for both the construction and implementation of a prediction algorithm. Pattern submodels (PS), a set of submodels for every missing data pattern that are fit using only data from that pattern, are a computationally efficient remedy for handling missing data at both stages. Here, we show that PS (i) retain their predictive accuracy even when the missing data mechanism is not missing at random (MAR) and (ii) yield an algorithm that is the most predictive among all standard missing data strategies...
September 6, 2018: Biostatistics
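The pattern-submodel idea can be shown with a toy one-predictor version (an illustrative reduction, not the authors' implementation): fit a separate model per missingness pattern and route each new observation to the fit for its own pattern.

```python
from statistics import mean

def fit_pattern_submodels(rows):
    """rows: list of (x_or_None, y). One submodel per missingness pattern."""
    with_x = [(x, y) for x, y in rows if x is not None]
    no_x = [y for x, y in rows if x is None]
    # Pattern 1: x observed -> simple least-squares line on those rows only.
    mx = mean(x for x, _ in with_x)
    my = mean(y for _, y in with_x)
    slope = sum((x - mx) * (y - my) for x, y in with_x) / sum(
        (x - mx) ** 2 for x, _ in with_x)
    # Pattern 2: x missing -> intercept-only model on that pattern's rows.
    fallback = mean(no_x) if no_x else my
    return {"complete": (my - slope * mx, slope), "missing_x": fallback}

def predict(models, x):
    if x is None:
        return models["missing_x"]
    intercept, slope = models["complete"]
    return intercept + slope * x
```

Note that neither fitting nor prediction requires imputing the missing predictor, which is the computational appeal at both stages.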
Luigi Augugliaro, Antonino Abbruzzo, Veronica Vinciotti
Graphical lasso is one of the most used estimators for inferring genetic networks. Despite its widespread use, there are several fields in applied research where the limits of detection of modern measurement technologies make the use of this estimator theoretically unfounded, even when the assumption of a multivariate Gaussian distribution is satisfied. Typical examples are data generated by polymerase chain reactions and flow cytometry. The combination of censoring and high dimensionality makes inference of the underlying genetic networks from these data very challenging...
September 6, 2018: Biostatistics
Yuqing Zhang, Christoph Bernau, Giovanni Parmigiani, Levi Waldron
Cross-study validation (CSV) of prediction models is an alternative to traditional cross-validation (CV) in domains where multiple comparable datasets are available. Although many studies have noted potential sources of heterogeneity in genomic studies, to our knowledge none have systematically investigated their intertwined impacts on prediction accuracy across studies. We employ a hybrid parametric/non-parametric bootstrap method to realistically simulate publicly available compendia of microarray, RNA-seq, and whole metagenome shotgun microbiome studies of health outcomes...
September 6, 2018: Biostatistics
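The CSV protocol itself is simple to state: train on each study, evaluate on every other study, and inspect the resulting matrix of accuracies. A toy sketch (the midpoint-threshold classifier is a placeholder assumption, not one of the paper's models):

```python
from itertools import permutations
from statistics import mean

def train(study):
    # study: list of (feature, label); learn a midpoint threshold.
    pos = mean(x for x, y in study if y == 1)
    neg = mean(x for x, y in study if y == 0)
    return (pos + neg) / 2

def accuracy(threshold, study):
    correct = sum((x > threshold) == (y == 1) for x, y in study)
    return correct / len(study)

def cross_study_validate(studies):
    # Off-diagonal cells of the cross-study accuracy matrix.
    return {(i, j): accuracy(train(studies[i]), studies[j])
            for i, j in permutations(range(len(studies)), 2)}
```

Within-study CV would only populate the diagonal; the off-diagonal cells are where between-study heterogeneity shows up.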
Katie Wilson, Jon Wakefield
The analysis of area-level aggregated summary data is common in many disciplines including epidemiology and the social sciences. Typically, Markov random field spatial models have been employed to acknowledge spatial dependence and allow data-driven smoothing. In the context of an irregular set of areas, these models always have an ad hoc element with respect to the definition of a neighborhood scheme. In this article, we exploit recent theoretical and computational advances to carry out modeling at the continuous spatial level, which induces a spatial model for the discrete areas...
September 6, 2018: Biostatistics
Frank Dondelinger, Sach Mukherjee
We consider high-dimensional regression over subgroups of observations. Our work is motivated by biomedical problems, where subsets of samples, representing for example disease subtypes, may differ with respect to underlying regression models. In the high-dimensional setting, estimating a different model for each subgroup is challenging due to limited sample sizes. Focusing on the case in which subgroup-specific models may be expected to be similar but not necessarily identical, we treat subgroups as related problem instances and jointly estimate subgroup-specific regression coefficients...
September 5, 2018: Biostatistics
Meng Xia, Susan Murray
No abstract text is available yet for this article.
September 4, 2018: Biostatistics
Pavel Mozgunov, Thomas Jaki, Xavier Paoletti
An important tool to evaluate the performance of any design is an optimal benchmark proposed by O'Quigley and others (2002. Non-parametric optimal design in dose finding studies. Biostatistics 3, 51-56) that provides an upper bound on the performance of a design under a given scenario. The original benchmark can only be applied to dose finding studies with a binary endpoint. However, there is a growing interest in dose finding studies involving continuous outcomes, but no benchmark for such studies has been developed...
August 24, 2018: Biostatistics
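For the original binary-endpoint benchmark, the construction can be sketched as follows (the scenario probabilities and target below are made-up values): each simulated patient draws a single latent tolerance that fixes their toxicity outcome at every dose, and the benchmark selects the dose whose complete-information toxicity rate is closest to the target.

```python
import random

def benchmark_selection(true_tox, target, n_patients, seed=0):
    """true_tox: monotone toxicity probabilities per dose level."""
    rng = random.Random(seed)
    tolerances = [rng.random() for _ in range(n_patients)]
    # With complete information, a patient with tolerance u is
    # toxic at dose d if and only if u <= true_tox[d].
    rates = [sum(u <= p for u in tolerances) / n_patients for p in true_tox]
    return min(range(len(true_tox)), key=lambda d: abs(rates[d] - target))

# Made-up scenario: four doses, 30% target toxicity rate.
print(benchmark_selection([0.05, 0.15, 0.30, 0.55], 0.30, 2000))
```

No real design observes each patient at every dose, which is why this oracle serves as an upper bound rather than an attainable procedure.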
Prithish Banerjee, Samiran Ghosh
Two-phase sampling design is a common practice in many medical studies. Generally, the first-phase classification is fallible but relatively cheap, while the accurate, state-of-the-art second-phase medical diagnosis is complex and rather expensive to perform. When constructed efficiently, it offers great potential for higher true case detection as well as higher precision at a limited cost. In this article, we consider epidemiological studies with a two-phase sampling design. However, instead of a single two-phase study, we consider a scenario where a series of two-phase studies is done in a longitudinal fashion on a cohort of interest...
August 24, 2018: Biostatistics
Xu Guo, Yujing Gao, Cuizhen Niu, Shumei Zhang
No abstract text is available yet for this article.
August 24, 2018: Biostatistics
Pål Christie Ryalen, Mats Julius Stensrud, Sophie Fosså, Kjetil Røysland
In marginal structural models (MSMs), time is traditionally treated as a discrete parameter. In survival analysis, on the other hand, we study processes that develop in continuous time. Therefore, Røysland (2011. A martingale approach to continuous-time marginal structural models. Bernoulli 17, 895-915) developed continuous-time MSMs, along with continuous-time weights. The continuous-time weights are conceptually similar to the inverse probability weights that are used in discrete-time MSMs. Here, we demonstrate that continuous-time MSMs may be used in practice...
August 16, 2018: Biostatistics
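The discrete-time inverse probability weights that the continuous-time weights generalize are straightforward to compute. A minimal sketch (variable names are illustrative):

```python
def ip_weight(treatments, probs):
    """Unstabilized inverse probability of treatment weight.

    treatments: observed 0/1 treatment per period.
    probs: the treatment model's P(A_t = 1 | history) per period.
    The weight is the product over time of 1 / P(observed treatment).
    """
    weight = 1.0
    for a, p in zip(treatments, probs):
        weight /= p if a == 1 else (1 - p)
    return weight
```

Stabilized versions replace the numerator 1 with the marginal probability of the observed treatment path, which typically reduces the variance of the weights.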
Torben Martinussen, Stijn Vansteelandt
Time-to-event analyses are often plagued by both (possibly unmeasured) confounding and competing risks. To deal with the former, the use of instrumental variables (IVs) for effect estimation is rapidly gaining ground. We show how to make use of such variables in competing risk analyses. In particular, we show how to infer the effect of an arbitrary exposure on cause-specific hazard functions under a semi-parametric model that imposes relatively weak restrictions on the observed data distribution. The proposed approach is flexible, accommodating exposures and IVs of arbitrary type and enabling covariate adjustment...
August 14, 2018: Biostatistics
Aaron Scheffler, Donatello Telesca, Qian Li, Catherine A Sugar, Charlotte Distefano, Shafali Jeste, Damla Sentürk
Electroencephalography (EEG) data possess a complex structure that includes regional, functional, and longitudinal dimensions. Our motivating example is a word segmentation paradigm in which typically developing (TD) children, and children with autism spectrum disorder (ASD) were exposed to a continuous speech stream. For each subject, continuous EEG signals recorded at each electrode were divided into one-second segments and projected into the frequency domain via fast Fourier transform. Following a spectral principal components analysis, the resulting data consist of region-referenced principal power indexed regionally by scalp location, functionally across frequencies, and longitudinally by one-second segments...
August 3, 2018: Biostatistics
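The segmentation-and-spectrum preprocessing step described above can be mimicked with a naive O(n²) discrete Fourier transform (a stdlib-only sketch of the generic step; a real pipeline would use an FFT library and the study's own segment handling):

```python
import cmath
import math

def power_spectrum(segment):
    # Naive DFT power at each non-negative frequency bin of a real signal.
    n = len(segment)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(segment))) ** 2 / n
            for k in range(n // 2 + 1)]

def segment_powers(signal, fs):
    """Split `signal` (sampled at fs Hz) into one-second segments
    and return each segment's power spectrum."""
    return [power_spectrum(signal[i:i + fs])
            for i in range(0, len(signal) - fs + 1, fs)]
```

With one spectrum per electrode per second, the data take exactly the region-by-frequency-by-segment shape the abstract describes.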
Xin Zhou, Xiaomei Liao, Lauren M Kunz, Sharon-Lise T Normand, Molin Wang, Donna Spiegelman
In stepped wedge designs (SWD), clusters are randomized to the time period during which new patients will receive the intervention under study in a sequential rollout over time. By the study's end, patients at all clusters receive the intervention, eliminating ethical concerns related to withholding potentially efficacious treatments. This is a practical option in many large-scale public health implementation settings. Little statistical theory for these designs exists for binary outcomes. To address this, we utilized a maximum likelihood approach and developed numerical methods to determine the asymptotic power of the SWD for binary outcomes...
August 1, 2018: Biostatistics
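The rollout itself is easy to picture as a design matrix (a standard SWD layout, not taken from the paper): with C cluster sequences and C+1 periods, sequence i crosses from control to intervention after period i, so every cluster starts under control and ends under intervention.

```python
def stepped_wedge(clusters):
    """0/1 treatment indicators: one row per cluster sequence,
    one column per period; sequence i is treated in periods t > i."""
    periods = clusters + 1
    return [[1 if t > i else 0 for t in range(periods)]
            for i in range(clusters)]

for row in stepped_wedge(3):
    print(row)
```

The all-ones final column is the design's ethical selling point noted above: by the study's end no cluster is left without the intervention.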
Xiang Li, Donglin Zeng, Karen Marder, Yuanjia Wang
Potential disease-modifying therapies for neurodegenerative disorders need to be introduced prior to the symptomatic stage in order to be effective. However, current diagnosis of neurological disorders mostly relies on measurements of clinical symptoms and thus only identifies symptomatic subjects late in their disease course. It is therefore of interest to select and integrate biomarkers that may reflect early disease-related pathological changes for earlier diagnosis and for recruiting pre-symptomatic subjects into a prevention clinical trial...
August 1, 2018: Biostatistics
Carlo Berzuini, Hui Guo, Stephen Burgess, Luisa Bernardinelli
We propose a Bayesian approach to Mendelian randomization (MR), where instruments are allowed to exert pleiotropic (i.e. not mediated by the exposure) effects on the outcome. By having these effects represented in the model by unknown parameters, and by imposing a shrinkage prior distribution that assumes an unspecified subset of the effects to be zero, we obtain a proper posterior distribution for the causal effect of interest. This posterior can be sampled via Markov chain Monte Carlo methods of inference to obtain point and interval estimates...
August 1, 2018: Biostatistics
Rodney A Sparapani, Lisa E Rein, Sergey S Tarima, Tourette A Jackson, John R Meurer
Much of survival analysis is concerned with absorbing events, i.e., subjects can only experience a single event such as mortality. This article is focused on non-absorbing or recurrent events, i.e., subjects are capable of experiencing multiple events. Recurrent events have been studied by many; however, most approaches rely on the restrictive assumptions of linearity and proportionality. We propose a new method for analyzing recurrent events with Bayesian Additive Regression Trees (BART), avoiding such restrictive assumptions...
July 28, 2018: Biostatistics
Nicholas C Henderson, Thomas A Louis, Gary L Rosner, Ravi Varadhan
Individuals often respond differently to identical treatments, and characterizing such variability in treatment response is an important aim in the practice of personalized medicine. In this article, we describe a nonparametric accelerated failure time model that can be used to analyze heterogeneous treatment effects (HTE) when patient outcomes are time-to-event. By utilizing Bayesian additive regression trees and a mean-constrained Dirichlet process mixture model, our approach offers a flexible model for the regression function while placing few restrictions on the baseline hazard...
July 19, 2018: Biostatistics
Wenjian Bi, Yun Li, Matthew P Smeltzer, Guimin Gao, Shengli Zhao, Guolian Kang
It has been well acknowledged that methods for secondary trait (ST) association analyses under a case-control design (ST-CC) should carefully consider the sampling process to avoid biased risk estimates. A similar situation also exists in extreme phenotype sequencing (EPS) designs, which select subjects with extreme values of a continuous primary phenotype for sequencing. EPS designs are commonly used in modern epidemiological and clinical studies such as the well-known National Heart, Lung, and Blood Institute Exome Sequencing Project...
July 11, 2018: Biostatistics
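An EPS sampling rule can be sketched in a few lines (the tail fraction below is an arbitrary illustration, not a value from the paper): rank subjects by the continuous primary phenotype and keep only the lower and upper tails for sequencing.

```python
def eps_select(phenotypes, tail_fraction=0.1):
    """Return indices of subjects in the lower and upper tails
    of the continuous primary phenotype."""
    order = sorted(range(len(phenotypes)), key=phenotypes.__getitem__)
    k = max(1, int(len(phenotypes) * tail_fraction))
    return sorted(order[:k] + order[-k:])
```

Because only extreme subjects enter the sequenced sample, naive ST association analyses on that sample inherit the selection bias the abstract warns about.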