Read by QxMD

Statistics in Medicine

Lu Wang, Ying Huang
Biomarkers are playing an increasingly important role in disease screening, early detection, and risk prediction. The two-phase case-control sampling study design is widely used for the evaluation of candidate biomarkers. The sampling probabilities for cases and controls in the second phase can often depend on other covariates (sampling strata). This biased sampling can lead to invalid inference on a biomarker's classification accuracy if not properly accounted for. In this paper, we adopt the idea of inverse probability weighting and develop inverse probability weighting-based estimators for various measures of a biomarker's classification performance, including points on the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), and the partial AUC...
September 12, 2018: Statistics in Medicine
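As an illustration of the inverse probability weighting idea above (a minimal sketch, not the authors' estimator), the AUC can be estimated from a two-phase sample by weighting each phase-two subject by the reciprocal of their stratum-specific sampling probability in a weighted Mann-Whitney statistic. All data and sampling rates below are hypothetical.

```python
# Sketch of an inverse-probability-weighted AUC estimate: each case-control
# pair is weighted by the product of the two subjects' inverse sampling
# probabilities, correcting the bias from stratum-dependent sampling.

def ipw_auc(cases, controls, w_cases, w_controls):
    """Weighted estimate of P(marker_case > marker_control)."""
    num = den = 0.0
    for yc, wc in zip(cases, w_cases):
        for yn, wn in zip(controls, w_controls):
            w = wc * wn
            den += w
            if yc > yn:
                num += w
            elif yc == yn:
                num += 0.5 * w          # ties count half, as usual
    return num / den

# Hypothetical toy data: two sampling strata, sampled at different rates.
cases = [2.1, 3.0, 2.7, 1.9]
controls = [1.0, 1.5, 2.0, 0.8]
w_cases = [1 / 0.8, 1 / 0.8, 1 / 0.4, 1 / 0.4]      # strata sampled at 80% / 40%
w_controls = [1 / 0.5, 1 / 0.5, 1 / 0.2, 1 / 0.2]   # strata sampled at 50% / 20%
auc = ipw_auc(cases, controls, w_cases, w_controls)
```

With unequal weights, pairs from under-sampled strata count more, so the weighted AUC generally differs from the naive unweighted one.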
Simon Bond
In terms of the structure of its observation units, a standard idealized stepped wedge design satisfies the requirements of a balanced design and can be labeled a criss-cross design (time crossed with cluster) with replication. As such, Nelder's theory of general balance can be used to decompose the analysis of variance into independent strata (grand mean, cluster, time, cluster:time, residuals). If time is considered as a fixed effect, then the treatment effect of interest is estimated solely within the cluster and time:cluster strata; the time effects are estimated solely within the time stratum...
September 12, 2018: Statistics in Medicine
Song Yang
For testing treatment effect with survival data, the log-rank test has been the method of choice and enjoys an optimality property under proportional hazards alternatives. However, there can be significant loss of power in a variety of nonproportional situations. Yang and Prentice proposed an adaptively weighted log-rank test that improves the power of the log-rank test over a wide range of hazard ratio scenarios. In clinical trials, the data and safety monitoring board typically monitors the trial results periodically...
September 12, 2018: Statistics in Medicine
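To make the machinery being adapted concrete, here is a minimal weighted log-rank statistic on toy survival data. Unit weights recover the ordinary log-rank test; the Yang-Prentice test instead chooses the weights adaptively, which this sketch does not attempt. Data are hypothetical.

```python
def weighted_logrank(times, events, group, weights=None):
    """Weighted log-rank statistic U and its variance V for two groups (0/1).
    events: 1 = event observed, 0 = censored. Unit weights give the
    ordinary log-rank test; U / sqrt(V) is approximately standard normal."""
    event_times = sorted({t for t, d in zip(times, events) if d == 1})
    if weights is None:
        weights = [1.0] * len(event_times)
    U = V = 0.0
    for w, t in zip(weights, event_times):
        n = sum(ti >= t for ti in times)                       # at risk
        n1 = sum(ti >= t and g == 1 for ti, g in zip(times, group))
        d = sum(ti == t and di == 1 for ti, di in zip(times, events))
        d1 = sum(ti == t and di == 1 and g == 1
                 for ti, di, g in zip(times, events, group))
        U += w * (d1 - d * n1 / n)                             # obs - expected
        if n > 1:
            V += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return U, V

# Hypothetical data: group 1 tends to have later events.
times = [2, 4, 6, 5, 7, 9]
events = [1, 1, 1, 1, 1, 0]
group = [0, 0, 0, 1, 1, 1]
U, V = weighted_logrank(times, events, group)
```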
Maria Sudell, Ruwanthi Kolamunnage-Dona, François Gueyffier, Catrin Tudur Smith
BACKGROUND: Joint modeling of longitudinal and time-to-event data is often advantageous over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The current literature on joint modeling focuses mainly on the analysis of single studies with a lack of methods available for the meta-analysis of joint data from multiple studies. METHODS: We investigate a variety of one-stage methods for the meta-analysis of joint longitudinal and time-to-event outcome data...
September 12, 2018: Statistics in Medicine
Alan J Girling
BACKGROUND: A cluster trial with unequal cluster sizes often has lower precision than one with equal clusters, with a corresponding inflation of the design effect. For parallel group trials, adjustments to the design effect are available under sampling models with a single intracluster correlation. Design effects for equal clusters under more complex scenarios have appeared recently (including stepped wedge trials under cross-sectional or longitudinal sampling). We investigate the impact of unequal cluster size in these more general settings...
September 12, 2018: Statistics in Medicine
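A widely used approximation (not necessarily the paper's own formula) inflates the equal-cluster design effect 1 + (m - 1) * ICC by the squared coefficient of variation of cluster size. A minimal sketch with hypothetical cluster sizes:

```python
# Design effect under a single intracluster correlation (icc):
#   equal clusters of size m:  1 + (m - 1) * icc
#   unequal clusters (common approximation, cv = coefficient of variation
#   of cluster size):          1 + ((cv**2 + 1) * m_bar - 1) * icc

from statistics import mean, pstdev

def design_effect(cluster_sizes, icc):
    m_bar = mean(cluster_sizes)
    cv = pstdev(cluster_sizes) / m_bar
    return 1 + ((cv**2 + 1) * m_bar - 1) * icc

equal = design_effect([20, 20, 20, 20], icc=0.05)    # reduces to 1 + 19 * 0.05
unequal = design_effect([5, 15, 25, 35], icc=0.05)   # same mean size, inflated
```

Same average cluster size, but the unequal design pays a precision penalty, which is the phenomenon the paper quantifies in more general settings.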
Jaap Brand, Stef van Buuren, Saskia le Cessie, Wilbert van den Hout
In healthcare cost-effectiveness analysis, probability distributions are typically skewed and missing data are frequent. Bootstrap and multiple imputation are well-established resampling methods for handling skewed and missing data. However, it is not clear how these techniques should be combined. This paper addresses combining multiple imputation and bootstrap to obtain confidence intervals of the mean difference in outcome for two independent treatment groups. We assessed statistical validity and efficiency of 10 candidate methods and applied these methods to a clinical data set...
September 12, 2018: Statistics in Medicine
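One of the candidate orderings the abstract alludes to can be sketched as: bootstrap first, then impute within each resample, then take percentile limits. The crude single mean imputation below is only a stand-in for a proper multiple-imputation step, and all data are hypothetical.

```python
import random
from statistics import mean

def boot_impute_ci(g1, g2, n_boot=2000, alpha=0.05, seed=1):
    """Percentile CI for mean(g1) - mean(g2); None marks a missing value.
    Ordering: resample first, then mean-impute within each resample."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        b1 = [rng.choice(g1) for _ in g1]
        b2 = [rng.choice(g2) for _ in g2]
        means, ok = [], True
        for b in (b1, b2):
            obs = [x for x in b if x is not None]
            if not obs:                 # degenerate resample: skip it
                ok = False
                break
            m = mean(obs)
            means.append(mean(m if x is None else x for x in b))
        if ok:
            diffs.append(means[0] - means[1])
    diffs.sort()
    k = len(diffs)
    return diffs[int(k * alpha / 2)], diffs[int(k * (1 - alpha / 2)) - 1]

# Hypothetical skewed cost data with missing values in both arms.
g1 = [120, 95, None, 300, 80, 110, None, 150, 90, 105]
g2 = [60, 70, None, 55, 250, 65, 80, None, 75, 62]
lo, hi = boot_impute_ci(g1, g2)
```

Reversing the ordering (impute once, then bootstrap the completed data) is another of the candidate combinations, and the two need not agree, which is precisely the question the paper assesses.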
Dianne M Finkelstein, David A Schoenfeld
Clinical trials are often designed to compare treatments on the basis of multiple outcomes. For the analysis of the treatment comparison from such a trial, in 1999, the Finkelstein-Schoenfeld test was proposed, which was a generalization of the Gehan-Wilcoxon test based on pairwise comparison of patients on a primary outcome when possible but otherwise on a secondary outcome. In 2012, Pocock and colleagues suggested an estimate based on this concept, the Win Ratio, which summarized the ratio of the number of patients who fared better versus worse on the experimental arm...
September 11, 2018: Statistics in Medicine
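The pairwise-comparison idea behind the Win Ratio can be sketched in a few lines. Here ties on the primary outcome stand in for the "pair cannot be compared on the primary" situation that arises with censored survival data; the data are hypothetical.

```python
def win_ratio(treat, ctrl):
    """Each subject is a (primary, secondary) pair, larger = better.
    Every treatment-control pair is decided on the primary outcome when it
    differs, otherwise on the secondary; ties on both count for neither."""
    wins = losses = 0
    for p1, s1 in treat:
        for p0, s0 in ctrl:
            a, b = (p1, p0) if p1 != p0 else (s1, s0)
            if a > b:
                wins += 1
            elif a < b:
                losses += 1
    return wins / losses

treat = [(3, 5), (2, 4), (2, 1)]
ctrl = [(1, 2), (2, 3), (3, 1)]
wr = win_ratio(treat, ctrl)
```

A win ratio above 1 favors the experimental arm; here the treatment patients win twice as many pairwise comparisons as they lose.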
Mandi Yu, Benmei Liu, Yan Li, Zhaohui Joe Zou, Nancy Breen
The relative concentration index (RCI) and the absolute concentration index (ACI) have been widely used for monitoring health disparities with ranked health determinants. The RCI has been extended to allow value judgments about inequality aversion by Pereira in 1998 and by Wagstaff in 2002. Previous studies of the extended RCI have focused on survey sample data. This paper adapts the extended RCI for use with directly standardized rates (DSRs) calculated from population-based surveillance data. A Taylor series linearization (TL)-based variance estimator is developed and evaluated using simulations...
September 11, 2018: Statistics in Medicine
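The basic (unextended) indices can be computed from the standard covariance formulation: the ACI is twice the covariance between health and the fractional rank of the ranking variable, and the RCI divides that by mean health. A minimal sketch with hypothetical data (the paper's extensions for inequality aversion and DSRs are not shown):

```python
from statistics import mean

def concentration_indices(health, rank_var):
    """ACI and RCI of `health` over the ranking variable (e.g. income).
    ACI = 2 * cov(health, fractional rank); RCI = ACI / mean(health)."""
    order = sorted(range(len(health)), key=lambda i: rank_var[i])
    h = [health[i] for i in order]
    n = len(h)
    frac_rank = [(i + 0.5) / n for i in range(n)]   # mean is exactly 0.5
    mu = mean(h)
    cov = mean((hi - mu) * (r - 0.5) for hi, r in zip(h, frac_rank))
    aci = 2 * cov
    return aci, aci / mu

aci, rci = concentration_indices([1, 2, 3, 4], [10, 20, 30, 40])
_, rci_flat = concentration_indices([3, 3, 3, 3], [10, 20, 30, 40])
```

A positive RCI means the health quantity is concentrated among the higher-ranked (e.g. richer) units; a flat distribution gives zero.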
Zhenzhen Xu, Yongsoek Park, Boguang Zhen, Bin Zhu
In some clinical settings, such as cancer immunotherapy trials, a treatment time-lag effect may be present, and the lag duration may vary from subject to subject. An efficient study design and analysis procedure should not only take into account the time-lag effect but also consider the individual heterogeneity in the lag duration. In this paper, we present a Generalized Piecewise Weighted Logrank (GPW-Logrank) test, designed to account for the random time-lag effect while maximizing the study power with respect to the weights...
September 10, 2018: Statistics in Medicine
Dean Follmann, Erica Brittain, Keith Lumbard
This paper introduces a test of superiority of new anti-infective drug B over comparator drug A based on a randomized clinical trial. This test can be used to demonstrate assay (trial) sensitivity for noninferiority trials and rigorously tailor drug choice for individual patients. Our approach uses specialized baseline covariates XA and XB, which should predict the benefits of drug A and drug B, respectively. Using a response surface model for the treatment effect, we test for superiority at the (XA, XB) point that is most likely to show superiority...
September 10, 2018: Statistics in Medicine
Matthias Brückner, Hans U Burger, Werner Brannath
Adaptive survival trials are particularly important for enrichment designs in oncology and other life-threatening diseases. Current statistical methodology for adaptive survival trials provides type I error rate control only under restrictions. For instance, if we use stage-wise P values based on increments of the log-rank test, then the information used for the interim decisions needs to be restricted to the primary survival endpoint. However, it is often desirable to base interim decisions also on correlated short-term endpoints like tumor response...
September 6, 2018: Statistics in Medicine
Hayley E Jones, A E Ades, Alex J Sutton, Nicky J Welton
In designing a randomized controlled trial, it has been argued that trialists should consider existing evidence about the likely intervention effect. One approach is to form a prior distribution for the intervention effect based on a meta-analysis of previous studies and then power the trial on its ability to affect the posterior distribution in a Bayesian analysis. Alternatively, methods have been proposed to calculate the power of the trial to influence the "pooled" estimate in an updated meta-analysis...
September 6, 2018: Statistics in Medicine
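The first approach described rests on a conjugate normal-normal update: a meta-analytic prior for the intervention effect is combined with the (assumed normal) estimate a new trial would provide, with precisions adding. A minimal sketch with hypothetical numbers:

```python
# Conjugate normal-normal Bayesian update: the posterior mean is a
# precision-weighted average of the prior mean and the trial estimate,
# and the posterior precision is the sum of the two precisions.

def posterior(prior_mean, prior_var, trial_est, trial_var):
    w0, w1 = 1 / prior_var, 1 / trial_var
    post_var = 1 / (w0 + w1)
    post_mean = post_var * (w0 * prior_mean + w1 * trial_est)
    return post_mean, post_var

# Hypothetical: meta-analytic prior -0.2 (SE 0.15); new trial -0.4 (SE 0.10).
m, v = posterior(-0.2, 0.15**2, -0.4, 0.10**2)
```

Powering a trial "on its ability to affect the posterior" amounts to asking how much such an update can move m and shrink v for plausible trial results.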
Debashis Ghosh
In most nonrandomized observational studies, differences between treatment groups may arise not only due to the treatment but also because of the effect of confounders. Therefore, causal inference regarding the treatment effect is not as straightforward as in a randomized trial. To adjust for confounding due to measured covariates, a variety of methods based on the potential outcomes framework are used to estimate average treatment effects. One of the key assumptions is treatment positivity, which states that the probability of treatment is bounded away from zero and one for any possible combination of the confounders...
August 30, 2018: Statistics in Medicine
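To illustrate why positivity matters in practice, here is a minimal inverse-probability-weighted ATE with propensity scores trimmed away from 0 and 1 — a common pragmatic response when positivity is doubtful, not the paper's proposal. Propensities are taken as known here; in practice they are estimated. All numbers are hypothetical.

```python
def ipw_ate(y, t, ps, eps=0.05):
    """Horvitz-Thompson style ATE estimate with propensity trimming.
    y: outcomes, t: treatment indicators (0/1), ps: propensity scores."""
    ps = [min(max(p, eps), 1 - eps) for p in ps]   # enforce positivity
    n = len(y)
    treated = sum(yi * ti / pi for yi, ti, pi in zip(y, t, ps)) / n
    control = sum(yi * (1 - ti) / (1 - pi) for yi, ti, pi in zip(y, t, ps)) / n
    return treated - control

# Hypothetical data; note the near-violations of positivity (0.99, 0.02).
y = [3, 5, 2, 6, 1, 4]
t = [1, 1, 0, 1, 0, 0]
ps = [0.5, 0.8, 0.3, 0.99, 0.02, 0.5]
ate = ipw_ate(y, t, ps)
```

Without trimming, the subjects with propensities near 0 or 1 would receive enormous weights and dominate the estimate, which is the instability the positivity assumption guards against.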
Wan-Chen Lee, Sanjoy K Sinha, Tye E Arbuckle, Mandy Fisher
In many biological experiments, certain values of a biomarker are often nondetectable due to low concentrations of an analyte or the limitations of a chemical analysis device, resulting in left-censored values. There is an increasing demand for the analysis of data subject to detection limits in clinical and environmental studies. In this paper, we develop a novel statistical method for maximum likelihood estimation in generalized linear models with covariates subject to detection limits. Simulations are carried out to study the relative performance of the proposed estimators, as compared to other existing estimators...
August 30, 2018: Statistics in Medicine
Nisheet Nautiyal, Theodore R Holford
Incidence rates are an important population-level disease risk measure. Cancer incidence data in the United States, which are collected by disease registries, have been spatiotemporally sparse. Back-calculation methods can yield incidence estimates for a spatial domain by solving a convolution equation that relates mortality to incidence through survival estimates. We propose a novel back-calculation approach that uses spatiotemporal age-period-cohort (APC) modeling to estimate incidence for spatial units within a region...
August 28, 2018: Statistics in Medicine
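The convolution equation at the heart of back-calculation can be sketched directly: mortality is incidence convolved with the time-to-death distribution (obtained from survival estimates), and with that distribution known, incidence can be recovered recursively. This toy deconvolution ignores the noise, spatial structure, and APC modeling that make the actual problem hard; all numbers are hypothetical.

```python
def back_calculate(mortality, f):
    """Solve m_t = sum_k i_{t-k} * f[k] for incidence i, given the
    time-to-death distribution f (requires f[0] > 0)."""
    incidence = []
    for t, m_t in enumerate(mortality):
        tail = sum(incidence[t - k] * f[k]
                   for k in range(1, min(t, len(f) - 1) + 1))
        incidence.append((m_t - tail) / f[0])
    return incidence

# Hypothetical fatality distribution at 0, 1, 2 years after diagnosis.
f = [0.2, 0.3, 0.1]
true_incidence = [100, 120, 90, 110]
# Forward-convolve to get the mortality series, then invert it.
mortality = [sum(true_incidence[t - k] * f[k]
                 for k in range(len(f)) if t - k >= 0)
             for t in range(len(true_incidence))]
est = back_calculate(mortality, f)
```

With noiseless data the recursion recovers the incidence series exactly; with real registry data the inversion is ill-posed, which is why the paper imposes spatiotemporal APC structure.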
Woojoo Lee, Arvid Sjölander, Anton Larsson, Yudi Pawitan
It is a common causal inference problem that, even with theoretically infinite samples, we might be able to only provide bounds for the parameters of interest. This problem occurs naturally, for example, in estimating causal interaction between two risk factors and in estimating the average causal effect using the instrumental variable or Mendelian randomization method. Current procedures include linear programming to get the estimated bounds, plus bootstrapping to get confidence intervals. We describe a likelihood-based procedure that automatically yields the interval estimate from the flat likelihood region and show some theory that allows us to construct confidence intervals from this non-regular likelihood...
August 28, 2018: Statistics in Medicine
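The simplest instance of "only bounds are identified" is the assumption-free (Manski-style) bound on the average causal effect for a binary outcome, where the unobserved potential outcomes can be anything in [0, 1]; the paper's linear-programming and likelihood machinery generalizes this kind of problem. A minimal sketch with hypothetical data:

```python
def manski_bounds(y, t):
    """Worst-case bounds on the ATE for binary outcome y (0/1) and
    treatment t (0/1), with no assumptions about the missing potential
    outcomes; the interval always has width 1."""
    n = len(y)
    n1 = sum(t)
    p1 = n1 / n                                                # P(T=1)
    y1 = sum(yi for yi, ti in zip(y, t) if ti == 1) / n1       # E[Y|T=1]
    y0 = sum(yi for yi, ti in zip(y, t) if ti == 0) / (n - n1) # E[Y|T=0]
    # Replace the unobserved arm by its worst (0) or best (1) value.
    lower = y1 * p1 + 0 * (1 - p1) - (y0 * (1 - p1) + 1 * p1)
    upper = y1 * p1 + 1 * (1 - p1) - (y0 * (1 - p1) + 0 * p1)
    return lower, upper

y = [1, 1, 0, 1, 0, 0, 1, 0]
t = [1, 1, 1, 1, 0, 0, 0, 0]
lower, upper = manski_bounds(y, t)
```

Even with infinite data the interval never shrinks below width 1 without further assumptions, which is exactly the situation motivating interval estimates and the non-regular likelihood the paper studies.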
K M Rhodes, R M Turner, R A Payne, I R White
Motivated by two case studies using primary care records from the Clinical Practice Research Datalink, we describe statistical methods that facilitate the analysis of tall data, with very large numbers of observations. Our focus is on investigating the association between patient characteristics and an outcome of interest, while allowing for variation among general practices. We explore ways to fit mixed-effects models to tall data, including predictors of interest and confounding factors as covariates, and including random intercepts to allow for heterogeneity in outcome among practices...
August 28, 2018: Statistics in Medicine
Ariel Alonso, Wim Van der Elst, Geert Molenberghs
The maximum entropy principle offers a constructive criterion for setting up probability distributions on the basis of partial knowledge. In the present work, the principle is applied to tackle an important problem in the surrogate marker field, namely, the evaluation of a binary outcome as a putative surrogate for a binary true endpoint within a causal inference framework. In the first step, the maximum entropy principle is used to determine the relative frequencies associated with the values of the vector of potential outcomes...
August 23, 2018: Statistics in Medicine
Nicholas C Henderson, Paul J Rathouz
In a variety of applications involving longitudinal or repeated-measurements data, it is desired to uncover natural groupings or clusters that exist among study subjects. Motivated by the need to recover clusters of longitudinal trajectories of conduct problems in the field of developmental psychopathology, we propose a method to address this goal when the response data in question are counts. We assume the subject-specific observations are generated from a first-order autoregressive process that is appropriate for count data...
August 21, 2018: Statistics in Medicine
Clara Happ, Sonja Greven, Volker J Schmid
Complex statistical models such as scalar-on-image regression often require strong assumptions to overcome the issue of nonidentifiability. While in theory, it is well understood that model assumptions can strongly influence the results, this seems to be underappreciated, or played down, in practice. This article gives a systematic overview of the main approaches for scalar-on-image regression with a special focus on their assumptions. We categorize the assumptions and develop measures to quantify the degree to which they are met...
August 21, 2018: Statistics in Medicine

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

Combine operators to build complex searches

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"