Read by QxMD

Brandon Koch, David M Vock, Julian Wolfson
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models...
June 21, 2017: Biometrics
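The joint selection behind GLiDeR rests on a group-lasso penalty: each covariate's coefficients across the treatment and outcome models form one group, so a covariate is kept in, or dropped from, both models together. A minimal sketch of the groupwise soft-threshold, the penalty's proximal operator (illustrative only; GLiDeR's adaptive weights and joint model fitting are not shown):

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the group-lasso penalty: shrink each group's
    coefficient vector toward zero, dropping the whole group when its
    joint norm falls below lam."""
    out = np.empty_like(beta)
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        norm = np.linalg.norm(beta[idx])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out[idx] = scale * beta[idx]
    return out

# One covariate's treatment- and outcome-model coefficients form a group;
# a weak group (joint norm below lam) is removed from both models at once.
beta = np.array([0.3, 0.4, 0.05, 0.05])
print(group_soft_threshold(beta, [0, 0, 1, 1], lam=0.1))
```

Group 0 survives with its norm shrunk toward zero; group 1, whose joint norm falls below the threshold, is zeroed out entirely, which is how a single penalty can remove a covariate from both models simultaneously.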
Richard Huggins, Jakub Stoklosa, Cameron Roach, Paul Yip
Sparse capture-recapture data from open populations are difficult to analyze using currently available frequentist statistical methods. However, in closed capture-recapture experiments, the Chao sparse estimator (Chao, 1989, Biometrics 45, 427-438) may be used to estimate population sizes when there are few recaptures. Here, we extend the Chao (1989) closed population size estimator to the open population setting by using linear regression and extrapolation techniques. We conduct a small simulation study and apply the models to several sparse capture-recapture data sets...
June 20, 2017: Biometrics
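For reference, the closed-population estimator being extended has a simple closed form in the singleton and doubleton counts. A minimal sketch under the usual notation (f1 animals captured exactly once, f2 exactly twice); this is the classical Chao-type lower bound, not the authors' open-population extension:

```python
def chao_lower_bound(f1, f2, s_obs):
    """Chao (1989)-style lower bound on population size from capture
    frequency counts: f1 animals seen exactly once, f2 seen exactly
    twice, s_obs distinct animals seen at least once."""
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    # bias-corrected form, commonly used when there are no doubletons
    return s_obs + f1 * (f1 - 1) / 2.0

# Example: 50 animals seen once, 10 seen twice, 75 distinct animals observed
print(chao_lower_bound(50, 10, 75))  # 200.0
```

The estimator relies only on the rarest capture frequencies, which is why it remains usable when recaptures are sparse.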
Ajit C Tamhane, Jiangtao Gou, Christopher Jennison, Cyrus R Mehta, Teresa Curto
Glimm et al. (2010) and Tamhane et al. (2010) studied the problem of testing a primary and a secondary endpoint, subject to a gatekeeping constraint, using a group sequential design (GSD) with K=2 looks. In this article, we greatly extend the previous results to multiple (K>2) looks. If the familywise error rate (FWER) is to be controlled at a preassigned α level then it is clear that the primary boundary must be of level α. We show under what conditions one α-level primary boundary is uniformly more powerful than another...
June 6, 2017: Biometrics
John P Buonaccorsi, Giovanni Romeo, Magne Thoresen
When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified...
May 30, 2017: Biometrics
Daniel Scharfstein, Aidan McDermott, Iván Díaz, Marco Carone, Nicola Lunardon, Ibrahim Turkoz
In practice, both testable and untestable assumptions are generally required to draw inference about the mean outcome measured at the final scheduled visit in a repeated measures study with drop-out. Scharfstein et al. (2014) proposed a sensitivity analysis methodology to determine the robustness of conclusions within a class of untestable assumptions. In their approach, the untestable and testable assumptions were guaranteed to be compatible; their testable assumptions were based on a fully parametric model for the distribution of the observable data...
May 23, 2017: Biometrics
Yifei Sun, Kwun Chuen Gary Chan, Jing Qin
Length-biased survival data subject to right-censoring are often collected from a prevalent cohort. However, informative right censoring induced by the sampling design creates challenges in methodological development. While certain conditioning arguments could circumvent the problem of informative censoring, related rank estimation methods are typically inefficient because the marginal likelihood of the backward recurrence time is not ancillary. Under a semiparametric accelerated failure time model, an overidentified set of log-rank estimating equations is constructed based on the left-truncated right-censored data and backward recurrence time...
May 15, 2017: Biometrics
David I Warton
While data transformation is a common strategy to satisfy linear modeling assumptions, a theoretical result is used to show that transformation cannot reasonably be expected to stabilize variances for small counts. Under broad assumptions, as counts get smaller, it is shown that the variance becomes proportional to the mean under monotonic transformations g(·) that satisfy g(0)=0, excepting a few pathological cases. A suggested rule-of-thumb is that if many predicted counts are less than one then data transformation cannot reasonably be expected to stabilize variances, even for a well-chosen transformation...
May 15, 2017: Biometrics
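The small-count behavior is easy to see by simulation. The sketch below (illustrative, not from the article) applies the square-root transform, the textbook variance stabilizer for Poisson counts, across a range of means; the variance should settle near 0.25 if stabilization worked:

```python
import numpy as np

# Simulate Poisson counts at several means and check whether the usual
# square-root transform stabilizes their variance.
rng = np.random.default_rng(0)
n = 200_000
var_by_mean = {}
for mu in [0.05, 0.5, 5.0, 50.0]:
    y = rng.poisson(mu, size=n)
    var_by_mean[mu] = np.sqrt(y).var()
    print(f"mean={mu:>5}: var(sqrt(y)) = {var_by_mean[mu]:.3f}")
```

For means well below one, the variance of the transformed counts is roughly proportional to the mean itself rather than constant, matching the theoretical result: when most predicted counts are below one, no monotone transformation with g(0)=0 can stabilize the variance.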
Silvia Montagna, Tor Wager, Lisa Feldman Barrett, Timothy D Johnson, Thomas E Nichols
Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the article are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to (i) identify areas of consistent activation; and (ii) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA...
May 12, 2017: Biometrics
Shigeyuki Matsui, Hisashi Noma, Pingping Qu, Yoshio Sakai, Kota Matsui, Christoph Heuck, John Crowley
This article proposes an efficient approach to screening genes associated with a phenotypic variable of interest in genomic studies with subgroups. In order to capture and detect various association profiles across subgroups, we flexibly estimate the underlying effect size distribution across subgroups using a semi-parametric hierarchical mixture model for subgroup-specific summary statistics from independent subgroups. We then perform gene ranking and selection using an optimal discovery procedure based on the fitted model with control of false discovery rate...
May 12, 2017: Biometrics
Hung Hung, Zhi-Yu Jou, Su-Yun Huang
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses, and fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which explicitly accounts for the possibility of mislabeled responses. Another common method is to adopt robust M-estimation, down-weighting suspect instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence...
May 10, 2017: Biometrics
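A simplified version of the mislabel model is easy to fit directly. The sketch below is illustrative only: it assumes a known symmetric flip probability and uses plain gradient descent, not the γ-divergence estimator of the article. It models the observed label probability as a mixture of the clean logistic probability and its flip:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_mislabel_logistic(X, y, lam, lr=0.5, iters=2000):
    """Fit logistic regression assuming each label was flipped with a
    known probability lam, so P(observed y=1|x) = lam + (1-2*lam)*p(x)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        s = sigmoid(X @ beta)
        q = lam + (1.0 - 2.0 * lam) * s   # observed-label probability
        # gradient of the negative log-likelihood under the flip model
        grad = -X.T @ ((y / q - (1 - y) / (1 - q))
                       * (1 - 2 * lam) * s * (1 - s)) / len(y)
        beta -= lr * grad
    return beta

# Simulated example: true slope 2, 10% of labels flipped at random
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
y_clean = rng.random(2000) < sigmoid(X @ np.array([0.0, 2.0]))
flip = rng.random(2000) < 0.1
y = np.where(flip, 1 - y_clean, y_clean).astype(float)

beta_hat = fit_mislabel_logistic(X, y, lam=0.1)
print(beta_hat)  # slope close to the true value 2
```

A naive logistic fit on the flipped labels would attenuate the slope toward zero; modeling the flip probability removes that bias, which is the motivation shared by the article's more robust estimator.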
Torben Martinussen, Stijn Vansteelandt, Eric J Tchetgen Tchetgen, David M Zucker
The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes...
May 10, 2017: Biometrics
Audrey Mauguen, Venkatraman E Seshan, Irina Ostrovnaya, Colin B Begg
Next generation sequencing panels are being used increasingly in cancer research to study tumor evolution. A specific statistical challenge is to compare the mutational profiles in different tumors from a patient to determine the strength of evidence that the tumors are clonally related, that is, derived from a single, founder clonal cell. The presence of identical mutations in each tumor provides evidence of clonal relatedness, although the strength of evidence from a match is related to how commonly the mutation is seen in the tumor type under investigation...
May 8, 2017: Biometrics
Laura L E Cowen, Panagiotis Besbeas, Byron J T Morgan, Carl J Schwarz
Batch marking provides an important and efficient way to estimate the survival probabilities and population sizes of wild animals. It is particularly useful when dealing with animals that are difficult to mark individually. For the first time, we provide the likelihood for extended batch-marking experiments. It is often the case that samples contain individuals that remain unmarked, due to time and other constraints, and this information has not previously been analyzed. We provide ways of modeling such information, including an open N-mixture approach...
May 8, 2017: Biometrics
Sandra E Safo, Shuzhao Li, Qi Long
Integrative analysis of high dimensional omics data is becoming increasingly popular. At the same time, incorporating known functional relationships among variables in analysis of omics data has been shown to help elucidate underlying mechanisms for complex diseases. In this article, our goal is to assess association between transcriptomic and metabolomic data from a Predictive Health Institute (PHI) study that includes healthy adults at a high risk of developing cardiovascular diseases. Adopting a strategy that is both data-driven and knowledge-based, we develop statistical methods for sparse canonical correlation analysis (CCA) with incorporation of known biological information...
May 8, 2017: Biometrics
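Standard sparse CCA, the starting point the authors build on, can be sketched with alternating soft-thresholded power iterations in the style of Witten and Tibshirani (illustrative only; the article's method additionally incorporates known functional relationships among the variables):

```python
import numpy as np

def soft(a, lam):
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def sparse_cca(X, Y, lam=0.05, iters=200):
    """Sparse canonical weights via alternating soft-thresholded power
    iterations on the cross-covariance matrix (a basic sketch, not the
    knowledge-guided method of the article)."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    C = Xc.T @ Yc / len(X)
    v = np.ones(Y.shape[1]) / np.sqrt(Y.shape[1])
    u = np.zeros(X.shape[1])
    for _ in range(iters):
        u = soft(C @ v, lam)
        if np.linalg.norm(u) > 0:
            u /= np.linalg.norm(u)
        v = soft(C.T @ u, lam)
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
    return u, v

# Toy data where only the first features of X and Y share a signal
rng = np.random.default_rng(2)
z = rng.normal(size=500)
X = rng.normal(size=(500, 5)); X[:, 0] += z
Y = rng.normal(size=(500, 5)); Y[:, 0] += z
u, v = sparse_cca(X, Y)
print(u, v)  # weight concentrated on the first feature of each block
```

The soft-threshold zeroes out weights on the unrelated features, recovering a sparse pair of canonical directions.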
Meihua Wu, Ana Diez-Roux, Trivellore E Raghunathan, Brisa N Sánchez
A critical component of longitudinal study design involves determining the sampling schedule. Criteria for optimal design often focus on accurate estimation of the mean profile, although capturing the between-subject variance of the longitudinal process is also important since variance patterns may be associated with covariates of interest or predict future outcomes. Existing design approaches have limited applicability when one wishes to optimize sampling schedules to capture between-individual variability...
May 8, 2017: Biometrics
Nabihah Tayob, Francesco Stingo, Kim-Anh Do, Anna S F Lok, Ziding Feng
Advanced hepatocellular carcinoma (HCC) has limited treatment options and poor survival, so early detection is critical to improving the survival of patients with HCC. Current guidelines for high-risk patients include ultrasound screenings every six months, but ultrasounds are operator dependent and not sensitive for early HCC. Serum α-Fetoprotein (AFP) is a widely used diagnostic biomarker, but it has limited sensitivity and is not elevated in all HCC cases, so we incorporate a second blood-based biomarker, des-γ-carboxy prothrombin (DCP), which has shown potential as a screening marker for HCC...
May 8, 2017: Biometrics
Scott A Bruce, Martica H Hall, Daniel J Buysse, Robert T Krafty
Many studies of biomedical time series signals aim to measure the association between frequency-domain properties of time series and clinical and behavioral covariates. However, the time-varying dynamics of these associations are largely ignored due to a lack of methods that can assess the changing nature of the relationship through time. This article introduces a method for the simultaneous and automatic analysis of the association between the time-varying power spectrum and covariates, which we refer to as conditional adaptive Bayesian spectrum analysis (CABS)...
May 8, 2017: Biometrics
Carmen D Tekwe, Roger S Zoh, Fuller W Bazer, Guoyao Wu, Raymond J Carroll
Objective measures of oxygen consumption and carbon dioxide production by mammals are used to predict their energy expenditure. Since energy expenditure is not directly observable, it can be viewed as a latent construct with multiple physical indirect measures such as respiratory quotient, volumetric oxygen consumption, and volumetric carbon dioxide production. Metabolic rate is defined as the rate at which metabolism occurs in the body. Metabolic rate is also not directly observable. However, heat is produced as a result of metabolic processes within the body...
May 8, 2017: Biometrics
Yi-Hui Zhou, James S Marron, Fred A Wright
The issue of robustness to family relationships in computing genotype ancestry scores such as eigenvector projections has received increased attention in genetic association, and is particularly challenging when sets of both unrelated individuals and closely related family members are included. The current standard is to compute loadings (left singular vectors) using unrelated individuals and to compute projected scores for remaining family members. However, projected ancestry scores from this approach suffer from shrinkage toward zero...
April 27, 2017: Biometrics
Ziqi Chen, Man-Lai Tang, Wei Gao
Inappropriate choice of the working correlation structure in generalized estimating equations (GEE) can lead to inefficient parameter estimation, while the often impractical normality assumption in likelihood approaches limits their applicability in longitudinal data analysis. In this article, we propose a profile likelihood method for estimating parameters in longitudinal data analysis by maximizing the estimated likelihood. The proposed method yields consistent and efficient estimates without specifying either the working correlation structure or the underlying error distribution...
April 25, 2017: Biometrics
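The consistency-versus-efficiency tradeoff motivating this work is easy to illustrate. Under a working-independence correlation structure, GEE for a marginal linear model reduces to ordinary least squares, which remains consistent for the mean parameters even when within-subject errors are correlated; what is sacrificed is efficiency, which the profile likelihood approach targets. An illustrative simulation (not the authors' method):

```python
import numpy as np

# Clustered longitudinal data: 300 subjects, 4 visits each, with a
# shared subject effect inducing within-subject correlation.
rng = np.random.default_rng(3)
n_subj, n_time = 300, 4
x = rng.normal(size=(n_subj, n_time))
subj_effect = rng.normal(size=(n_subj, 1))
y = 1.0 + 2.0 * x + subj_effect + rng.normal(size=(n_subj, n_time))

# Working-independence GEE for this marginal model is just pooled OLS.
X = np.column_stack([np.ones(x.size), x.ravel()])
beta_hat = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
print(beta_hat)  # close to the true values [1, 2]
```

The point estimates stay near the truth despite the ignored correlation; a correctly weighted analysis would only tighten their standard errors.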