Statistics in Medicine

https://www.readbyqxmd.com/read/28513091/estimation-of-exposure-distribution-adjusting-for-association-between-exposure-level-and-detection-limit
#1
Yuchen Yang, Brent J Shelton, Thomas T Tucker, Li Li, Richard Kryscio, Li Chen
In environmental exposure studies, it is common for a portion of exposure measurements to fall below experimentally determined detection limits (DLs). The reverse Kaplan-Meier estimator, which mimics the well-known Kaplan-Meier estimator for right-censored survival data with the scale reversed, has been recommended for estimating the exposure distribution for data subject to DLs because it does not require any distributional assumption. However, the reverse Kaplan-Meier estimator requires independence between the exposure level and the DL and can lead to biased results when this assumption is violated...
May 16, 2017: Statistics in Medicine
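The reversal trick in this abstract is mechanical enough to sketch: negate the exposure scale so that non-detects (left-censored at their DLs) become right-censored, then apply the ordinary product-limit estimator. Below is a minimal numerical sketch under the independence assumption the abstract flags; the function name and data are illustrative, not from the paper.

```python
import numpy as np

def reverse_km(value, detected):
    """Reverse Kaplan-Meier sketch. `value` holds the measurement when
    detected and the DL itself otherwise; negating the scale turns a
    non-detect (exposure below DL) into a right-censored observation,
    so the ordinary product-limit estimator applies. Ties are handled
    in input order, a simplification."""
    x = -np.asarray(value, dtype=float)          # reverse the scale
    d = np.asarray(detected, dtype=bool)         # True = detected (event)
    order = np.argsort(x, kind="stable")
    x, d, n = x[order], d[order], len(x)
    surv, xs, ps = 1.0, [], []
    for i in range(n):
        if d[i]:
            surv *= (n - i - 1) / (n - i)        # product-limit step
            xs.append(-x[i])                     # back on the original scale
            ps.append(surv)                      # estimate of P(exposure < x)
    return np.array(xs)[::-1], np.array(ps)[::-1]

# Hypothetical data: three detected values, two non-detects at a DL of 0.5.
xs, ps = reverse_km([2.1, 0.8, 3.5, 0.5, 0.5], [True, True, True, False, False])
```

With all values detected this reduces to an empirical distribution function, which is a quick sanity check on the reversal.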
https://www.readbyqxmd.com/read/28497561/on-assessing-bioequivalence-and-interchangeability-between-generics-based-on-indirect-comparisons
#2
Jiayin Zheng, Shein-Chung Chow, Mengdie Yuan
As more and more generics become available in the marketplace, safety and efficacy concerns may arise from the interchangeable use of approved generics. However, bioequivalence assessment among generics of the same innovative drug product is not required for regulatory approval. In practice, approved generics are often used interchangeably without any mechanism of safety monitoring. In this article, based on indirect comparisons, we propose several methods for assessing bioequivalence and interchangeability between generics...
May 11, 2017: Statistics in Medicine
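For context on what "assessing bioequivalence" computes in the simplest direct, paired setting, here is the standard two one-sided tests (TOST) procedure for average bioequivalence with the usual 80%-125% limits on the geometric mean ratio. It is deliberately not the authors' indirect-comparison methodology, just the classical building block; data and names are hypothetical.

```python
import numpy as np
from scipy import stats

def tost_abe(log_diffs):
    """Two one-sided tests for average bioequivalence on the log scale.
    `log_diffs` are within-subject log(test) - log(reference) differences;
    bioequivalence is concluded when both one-sided nulls are rejected,
    i.e. when the returned p-value is below the chosen alpha."""
    lo, hi = np.log(0.8), np.log(1.25)           # conventional BE limits
    d = np.asarray(log_diffs, dtype=float)
    n, mean = len(d), d.mean()
    se = d.std(ddof=1) / np.sqrt(n)
    p_lo = stats.t.sf((mean - lo) / se, n - 1)   # H0: ratio <= 80%
    p_hi = stats.t.sf((hi - mean) / se, n - 1)   # H0: ratio >= 125%
    return max(p_lo, p_hi)

rng = np.random.default_rng(0)
p = tost_abe(rng.normal(0.02, 0.15, size=24))    # synthetic PK log-differences
```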
https://www.readbyqxmd.com/read/28497551/proximity-and-gravity-modeling-heaped-self-reports
#3
Chelsea McCarty Allen, Sandra D Griffith, Saul Shiffman, Daniel F Heitjan
Self-reported daily cigarette counts typically exhibit a preponderance of round numbers, a phenomenon known as heaping or digit preference. Heaping can be a substantial nuisance, as scientific interest lies in the distribution of the underlying true values rather than that of the heaped data. In principle, we can estimate parameters of the underlying distribution from heaped data if we know the conditional distribution of the heaped count given the true count, denoted the heaping mechanism (analogous to the missingness mechanism for missing data)...
May 11, 2017: Statistics in Medicine
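The "heaping mechanism" is just a conditional distribution of the reported count given the true count, and it is easy to simulate; a toy version with made-up rounding probabilities makes the preponderance of multiples of 5 and 10 visible immediately:

```python
import numpy as np

rng = np.random.default_rng(1)

def heap(true_counts, p5=0.3, p10=0.4):
    """Toy heaping mechanism: report the nearest multiple of 10 with
    probability p10, the nearest multiple of 5 with probability p5,
    and the true count otherwise. Probabilities are hypothetical."""
    c = np.asarray(true_counts)
    u = rng.uniform(size=c.shape)
    return np.where(u < p10, np.round(c / 10) * 10,
           np.where(u < p10 + p5, np.round(c / 5) * 5, c)).astype(int)

true = rng.poisson(17, size=1000)   # latent daily cigarette counts
observed = heap(true)               # self-reports now pile up at 10, 15, 20, ...
```

Estimating the underlying distribution then amounts to inverting such a mechanism, which is the role of the heaping mechanism the abstract describes.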
https://www.readbyqxmd.com/read/28497531/exposure-enriched-outcome-dependent-designs-for-longitudinal-studies-of-gene-environment-interaction
#4
Zhichao Sun, Bhramar Mukherjee, Jason P Estes, Pantel S Vokonas, Sung Kyun Park
Joint effects of genetic and environmental factors have been increasingly recognized in the development of many complex human diseases. Despite the popularity of case-control and case-only designs, longitudinal cohort studies that can capture time-varying outcome and exposure information have long been recommended for studying gene-environment (G × E) interactions. To date, the literature on sampling designs for longitudinal studies of G × E interaction is quite limited. We therefore consider designs that can prioritize a subsample of the existing cohort for retrospective genotyping on the basis of currently available outcome, exposure, and covariate data...
May 11, 2017: Statistics in Medicine
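As a concrete illustration of "prioritizing a subsample for retrospective genotyping", here is one generic outcome-dependent rule: genotype the subjects whose outcome summaries are most extreme, which is typically more informative per genotyped subject than random sampling. This is a generic illustration, not one of the specific designs proposed in the paper:

```python
import numpy as np

def extreme_outcome_sample(subject_means, n_select):
    """Select the n_select subjects whose mean longitudinal outcome is
    farthest from the cohort average (both tails) for genotyping."""
    dev = np.abs(subject_means - np.mean(subject_means))
    return np.argsort(dev)[-n_select:]           # indices of subjects to genotype

rng = np.random.default_rng(2)
cohort_means = rng.normal(size=500)              # hypothetical per-subject summaries
chosen = extreme_outcome_sample(cohort_means, 100)
```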
https://www.readbyqxmd.com/read/28493332/p-value-calibration-in-multiple-hypotheses-testing
#5
Stefano Cabras, Maria Eugenia Castellanos
As p-values are the most common measures of evidence against a hypothesis, their calibration with respect to the conditional probability of the null hypothesis is important in order to match frequentist unconditional inference with its Bayesian counterpart. The Sellke, Bayarri and Berger calibration is one of the most popular attempts to obtain such a calibration. It relies on the theoretical sampling null distribution of p-values, the well-known Uniform(0,1), which arises only under specific sampling models. We generalize this calibration by considering a sampling null distribution estimated from the data...
May 10, 2017: Statistics in Medicine
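The original Sellke-Bayarri-Berger calibration that the paper generalises has a closed form worth recording: for p < 1/e, the Bayes factor in favour of the null is bounded below by -e·p·log(p), which translates into a lower bound on the null's conditional probability at prior odds 1:

```python
import numpy as np

def sbb_calibration(p):
    """Sellke-Bayarri-Berger calibration. For p < 1/e, B = -e*p*log(p)
    lower-bounds the Bayes factor in favour of H0, so B/(1+B) lower-bounds
    P(H0 | data) at prior odds 1. Assumes the Uniform(0,1) null distribution
    of the p-value -- exactly the assumption the paper relaxes."""
    p = np.asarray(p, dtype=float)
    b = -np.e * p * np.log(p)
    return b / (1.0 + b)

sbb_calibration([0.05, 0.01])   # approx. [0.289, 0.111]
```

The familiar consequence: a p-value of 0.05 still leaves the null at least a 29% conditional probability under these assumptions.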
https://www.readbyqxmd.com/read/28485043/validation-of-surrogate-endpoints-in-cancer-clinical-trials-via-principal-stratification-with-an-application-to-a-prostate-cancer-trial
#6
Shiro Tanaka, Yutaka Matsuyama, Yasuo Ohashi
Increasing attention has been focused on the use and validation of surrogate endpoints in cancer clinical trials. The previous literature on validation of surrogate endpoints can be classified into four approaches: the proportion explained approach; the indirect effects approach; the meta-analytic approach; and the principal stratification approach. The meta-analytic approach has become the mainstream in cancer research. However, VanderWeele (2013) showed that all four of these approaches potentially suffer from the surrogate paradox...
May 8, 2017: Statistics in Medicine
https://www.readbyqxmd.com/read/28480546/testing-for-changes-in-spatial-relative-risk
#7
Martin L Hazelton
The spatial relative risk function is a useful tool for describing geographical variation in disease incidence. We consider the problem of comparing relative risk functions between two time periods, with the idea of detecting alterations in the spatial pattern of disease risk irrespective of whether there has been a change in the overall incidence rate. Using case-control datasets for each period, we use kernel smoothing methods to derive a test statistic based on the difference between the log-relative risk functions, which we term the log-relative risk ratio...
May 7, 2017: Statistics in Medicine
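The estimand here is easy to make concrete: with case density f and control density g, the spatial relative risk surface is f/g, and the paper's statistic contrasts log(f/g) between two periods. A rough kernel sketch of one period's log relative risk, with bandwidth selection and the formal test omitted:

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_relative_risk(cases_xy, controls_xy, grid_xy):
    """Kernel estimate of log(f/g) at grid locations, where f and g are
    case and control densities; arrays have shape (2, n_points).
    Differencing two such surfaces across periods gives the
    log-relative risk ratio described in the abstract."""
    f = gaussian_kde(cases_xy)
    g = gaussian_kde(controls_xy)
    return np.log(f(grid_xy)) - np.log(g(grid_xy))

rng = np.random.default_rng(3)
cases = rng.normal(0.0, 1.0, size=(2, 200))      # synthetic case locations
controls = rng.normal(0.2, 1.2, size=(2, 400))   # synthetic control locations
grid = np.vstack([np.linspace(-2, 2, 50), np.zeros(50)])  # a transect
rr = log_relative_risk(cases, controls, grid)
```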
https://www.readbyqxmd.com/read/28474394/bayesian-bivariate-meta-analysis-of-diagnostic-test-studies-with-interpretable-priors
#8
Jingyi Guo, Andrea Riebler, Håvard Rue
In a bivariate meta-analysis, the number of diagnostic studies involved is often very low, so that frequentist methods may result in problems. Using Bayesian inference is particularly attractive, as informative priors that add a small amount of information can stabilise the analysis without overwhelming the data. However, Bayesian analysis is often computationally demanding, and the selection of the prior for the covariance matrix of the bivariate structure is crucial with little data. The integrated nested Laplace approximations method provides an efficient solution to the computational issues by avoiding any sampling, but the important question of priors remains...
May 5, 2017: Statistics in Medicine
https://www.readbyqxmd.com/read/28474419/online-cross-validation-based-ensemble-learning
#9
David Benkeser, Cheng Ju, Sam Lendle, Mark van der Laan
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data...
May 4, 2017: Statistics in Medicine
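The key sequencing idea, scoring each candidate learner on a batch before training on it so that every loss is an honest out-of-sample loss, can be sketched compactly. This is a much-simplified caricature of online cross-validation-based ensembling, not the authors' estimator; the exponential weighting rule is one common choice:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

class OnlineEnsemble:
    """Each batch first scores the current candidates (online validation
    loss), then updates them; predictions weight low-loss learners more."""

    def __init__(self, learners, eta=1.0):
        self.learners = learners
        self.loss = np.zeros(len(learners))
        self.eta = eta

    def weights(self):
        w = np.exp(-self.eta * (self.loss - self.loss.min()))
        return w / w.sum()

    def update(self, X, y):
        for i, m in enumerate(self.learners):    # score BEFORE training
            self.loss[i] += np.mean((m.predict(X) - y) ** 2)
        for m in self.learners:                  # then learn the batch
            m.partial_fit(X, y)

    def predict(self, X):
        return sum(w * m.predict(X)
                   for w, m in zip(self.weights(), self.learners))

rng = np.random.default_rng(4)
ens = OnlineEnsemble([SGDRegressor(alpha=a) for a in (1e-4, 1e-2)])
X0, y0 = rng.normal(size=(32, 3)), rng.normal(size=32)
for m in ens.learners:
    m.partial_fit(X0, y0)                        # initialise so predict() works
for _ in range(10):                              # stream subsequent batches
    X, y = rng.normal(size=(32, 3)), rng.normal(size=32)
    ens.update(X, y)
```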
https://www.readbyqxmd.com/read/28470760/economic-evaluation-of-factorial-randomised-controlled-trials-challenges-methods-and-recommendations
#10
Helen Dakin, Alastair Gray
Increasing numbers of economic evaluations are conducted alongside randomised controlled trials. Such studies include factorial trials, which randomise patients to different levels of two or more factors and can therefore evaluate the effect of multiple treatments alone and in combination. Factorial trials can provide increased statistical power or assess interactions between treatments, but raise additional challenges for trial-based economic evaluations: interactions may occur more commonly for costs and quality-adjusted life-years (QALYs) than for clinical endpoints; economic endpoints raise challenges for transformation and regression analysis; and both factors must be considered simultaneously to assess which treatment combination represents best value for money...
May 3, 2017: Statistics in Medicine
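The last point in the abstract, that both factors must be considered simultaneously, has a compact net-benefit formulation: model costs and QALYs (or net benefit directly) with an A:B interaction and compare all four cells. A hypothetical sketch with made-up data and a willingness-to-pay threshold:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 400
df = pd.DataFrame({"A": rng.integers(0, 2, n), "B": rng.integers(0, 2, n)})
df["qaly"] = (0.70 + 0.05 * df.A + 0.04 * df.B - 0.03 * df.A * df.B
              + rng.normal(0, 0.10, n))
df["cost"] = (1000 + 400 * df.A + 300 * df.B + 200 * df.A * df.B
              + rng.normal(0, 150, n))

lam = 20000                                   # willingness to pay per QALY
df["nb"] = lam * df["qaly"] - df["cost"]      # net monetary benefit
fit = smf.ols("nb ~ A * B", data=df).fit()    # interaction kept, not assumed away

cells = pd.DataFrame({"A": [0, 1, 0, 1], "B": [0, 0, 1, 1]})
print(fit.predict(cells))                     # best value = max expected net benefit
```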
https://www.readbyqxmd.com/read/28470746/autoregressive-and-cross-lagged-model-for-bivariate-non-commensurate-outcomes
#11
Fei He, Armando Teixeira-Pinto, Jaroslaw Harezlak
Autoregressive and cross-lagged models have been widely used to understand the relationship between bivariate commensurate outcomes in social and behavioral sciences, but not much work has been carried out in modeling bivariate non-commensurate (e.g., mixed binary and continuous) outcomes simultaneously. We develop a likelihood-based methodology combining ordinary autoregressive and cross-lagged models with a shared subject-specific random effect in the mixed-model framework to model two correlated longitudinal non-commensurate outcomes...
May 3, 2017: Statistics in Medicine
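To fix ideas about the data structure being modelled, mixed binary and continuous outcomes with autoregressive and cross-lagged dependence plus a shared subject-level random effect, here is a small simulator with hypothetical coefficients; the paper fits such models by maximum likelihood rather than simulating them:

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_cross_lagged(n_subj=200, n_time=5):
    """y is continuous, z is binary; each depends on its own lag (AR),
    the other outcome's lag (cross-lag), and a shared random effect b."""
    b = rng.normal(0, 0.5, n_subj)               # shared subject effect
    y = rng.normal(0, 1, n_subj)
    z = rng.binomial(1, 0.3, n_subj)
    ys, zs = [y], [z]
    for _ in range(n_time - 1):
        y_new = 0.6 * y + 0.4 * z + b + rng.normal(0, 0.5, n_subj)
        logit = -0.5 + 0.8 * z + 0.3 * y + b
        z_new = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        y, z = y_new, z_new
        ys.append(y)
        zs.append(z)
    return np.array(ys), np.array(zs)            # each (n_time, n_subj)

y_mat, z_mat = simulate_cross_lagged()
```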
https://www.readbyqxmd.com/read/28470713/accurate-p-values-for-adaptive-designs-with-binary-endpoints
#12
Stephane Heritier, Chris J Lloyd, Serigne N Lô
Adaptive designs encompass all trials allowing various types of design modifications over the course of the trial. A key requirement for confirmatory adaptive designs to be accepted by regulators is strong control of the family-wise error rate. This can be achieved by combining the p-values for each arm and stage to account for adaptations (including but not limited to treatment selection), sample size adaptation and multiple stages. While the theory for this is well established, in practice these methods can perform poorly, especially for unbalanced designs and for small to moderate sample sizes...
May 3, 2017: Statistics in Medicine
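The p-value combination the abstract refers to is, in its classical two-stage form, the inverse-normal combination test with prespecified weights. That baseline is simple to state; the paper's contribution concerns small-sample accuracy for binary endpoints rather than the combination rule itself:

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=np.sqrt(0.5)):
    """Combine independent stage-wise p-values: z = w1*z1 + w2*z2 with
    w1^2 + w2^2 = 1 (weights fixed in advance, e.g. by planned stage sizes)."""
    w2 = np.sqrt(1.0 - w1 ** 2)
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)
    return norm.sf(z)                            # combined one-sided p-value

inverse_normal_combination(0.04, 0.03)           # evidence pooled across stages
```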
https://www.readbyqxmd.com/read/28470682/spline-based-self-controlled-case-series-method
#13
Yonas Ghebremichael-Weldeselassie, Heather J Whitaker, C Paddy Farrington
The self-controlled case series (SCCS) method is an alternative to study designs such as cohort and case-control methods and is used to investigate potential associations between the timing of vaccine or other drug exposures and adverse events. It requires information only on cases, individuals who have experienced the adverse event at least once, and automatically controls all fixed confounding variables that could modify the true association between exposure and adverse event. Time-varying confounders such as age, on the other hand, are not automatically controlled and must be allowed for explicitly...
May 3, 2017: Statistics in Medicine
https://www.readbyqxmd.com/read/28470678/learning-curve-estimation-in-medical-devices-and-procedures-hierarchical-modeling
#14
Usha S Govindarajulu, Marco Stillo, David Goldfarb, Michael E Matheny, Frederic S Resnic
In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support the estimation of these effects, we evaluated multiple methods for modeling these rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institutions and physicians and then modeled them within established methods that work with hierarchical data, such as generalized estimating equations (GEE) and generalized linear mixed effect models...
May 3, 2017: Statistics in Medicine
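As one concrete instance of fitting a learning curve within an established hierarchical method, here is a GEE with a log case-number term and physicians as clusters, on simulated data. The institution level of the paper's hierarchy is omitted, and all numbers are made up:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_phys, n_cases = 40, 50
df = pd.DataFrame({
    "physician": np.repeat(np.arange(n_phys), n_cases),
    "case_no": np.tile(np.arange(1, n_cases + 1), n_phys),
})
# Event risk declines with log experience -- a simple learning curve.
logit = -1.0 - 0.4 * np.log(df["case_no"]) + rng.normal(0, 0.3, len(df))
df["event"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

gee = smf.gee("event ~ np.log(case_no)", groups="physician", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```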
https://www.readbyqxmd.com/read/28464567/the-use-of-permutation-tests-for-the-analysis-of-parallel-and-stepped-wedge-cluster-randomized-trials
#15
Rui Wang, Victor De Gruttola
We investigate the use of permutation tests for the analysis of parallel and stepped-wedge cluster-randomized trials. Permutation tests for parallel designs with exponential family endpoints have been extensively studied. The optimal permutation tests developed for exponential family alternatives require information on intraclass correlation, a quantity not yet defined for time-to-event endpoints. Therefore, it is unclear how efficient permutation tests can be constructed for cluster-randomized trials with such endpoints...
May 2, 2017: Statistics in Medicine
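For the parallel-design case the permutation logic is short enough to write out: re-randomize treatment labels over whole clusters and recompute a cluster-level contrast. Nothing here requires an intraclass correlation, which is what makes the approach attractive for time-to-event endpoints; the statistic below is a plain difference of cluster means for illustration:

```python
import numpy as np

def cluster_permutation_test(cluster_stats, treated, n_perm=10000, seed=0):
    """Two-sided permutation p-value for a parallel cluster-randomized
    trial, permuting arm labels across whole clusters."""
    rng = np.random.default_rng(seed)
    s = np.asarray(cluster_stats, dtype=float)
    t = np.asarray(treated, dtype=bool)
    obs = s[t].mean() - s[~t].mean()
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(t)                # re-randomize clusters
        hits += abs(s[perm].mean() - s[~perm].mean()) >= abs(obs)
    return hits / n_perm

rng = np.random.default_rng(6)
stats_ = np.concatenate([rng.normal(0.3, 1, 8), rng.normal(0.0, 1, 8)])
p = cluster_permutation_test(stats_, [True] * 8 + [False] * 8)
```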
https://www.readbyqxmd.com/read/28464565/semiparametric-estimation-of-time-varying-intervention-effects-using-recurrent-event-data
#16
Jiajun Xu, K F Lam, Feng Chen, Paul Milligan, Yin Bun Cheung
We consider the estimation of the optimal interval between doses for interventions such as malaria chemoprevention and vaccine booster doses that are applied intermittently in infectious disease control. A flexible exponential-like function is used to model the time-varying intervention effect within the framework of the Andersen-Gill model for recurrent event time data. The partial likelihood estimation approach is adopted, and a large-scale simulation study is carried out to evaluate the performance of the proposed method...
May 2, 2017: Statistics in Medicine
https://www.readbyqxmd.com/read/28464562/pattern-mixture-models-for-clinical-validation-of-biomarkers-in-the-presence-of-missing-data
#17
Fei Gao, Jun Dong, Donglin Zeng, Alan Rong, Joseph G Ibrahim
Targeted therapies for cancers are sometimes only effective in a subset of patients with a particular biomarker status. In clinical development, the biomarker status is typically determined by an investigational-use-only/laboratory-developed test. A market-ready test (MRT) is developed later to meet regulatory requirements and for future commercial use. In the USA, clinical validation of the MRT, demonstrating the efficacy and safety profile of the targeted therapy in the biomarker subgroups determined by the MRT, is needed for pre-market approval...
May 2, 2017: Statistics in Medicine
https://www.readbyqxmd.com/read/28464332/a-comparison-of-risk-prediction-methods-using-repeated-observations-an-application-to-electronic-health-records-for-hemodialysis
#18
Benjamin A Goldstein, Gina Maria Pomann, Wolfgang C Winkelmayer, Michael J Pencina
An increasingly important data source for the development of clinical risk prediction models is electronic health records (EHRs). One of their key advantages is that they contain data on many individuals collected over time. This allows one to incorporate more clinical information into a risk model. However, traditional methods for developing risk models are not well suited to these irregularly collected clinical covariates. In this paper, we compare a range of approaches for using longitudinal predictors in a clinical risk model...
May 2, 2017: Statistics in Medicine
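A minimal version of "using longitudinal predictors in a risk model" is to collapse each patient's irregular history into fixed-length summaries (last value, mean, crude slope) and feed those to a standard classifier. The features below are typical simple choices, not necessarily the paper's full comparison set, and the data are synthetic:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def summarize(obs):
    """Fixed-length features from one patient's irregular measurements."""
    t, v = np.asarray(obs["t"]), np.asarray(obs["value"])
    slope = np.polyfit(t, v, 1)[0] if len(v) > 1 else 0.0
    return pd.Series({"last": v[np.argmax(t)], "mean": v.mean(), "slope": slope})

rng = np.random.default_rng(7)
long_df = pd.DataFrame({
    "patient": rng.integers(0, 200, 2000),       # irregular visit counts
    "t": rng.uniform(0, 365, 2000),              # days since baseline
    "value": rng.normal(5, 1, 2000),             # e.g. a lab measurement
})
X = long_df.groupby("patient").apply(summarize)
y = rng.binomial(1, 0.2, len(X))                 # synthetic outcome labels
model = LogisticRegression().fit(X, y)
```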
https://www.readbyqxmd.com/read/28444781/constructing-longitudinal-disease-progression-curves-using-sparse-short-term-individual-data-with-an-application-to-alzheimer-s-disease
#19
C A Budgeon, K Murray, B A Turlach, S Baker, V L Villemagne, S C Burnham
In epidemiology, cohort studies utilised to monitor and assess disease status and progression often result in short-term and sparse follow-up data. Thus, gaining an understanding of the full-term disease pathogenesis can be difficult, requiring shorter-term data from many individuals to be collated. We investigate and evaluate methods to construct and quantify the underlying long-term longitudinal trajectories for disease markers using short-term follow-up data, specifically applied to Alzheimer's disease. We generate individuals' follow-up data to investigate approaches to this problem, adopting a four-step modelling approach that (i) determines individual slopes and anchor points for their short-term trajectories, (ii) fits polynomials to these slopes and anchor points, (iii) integrates the reciprocals of these polynomials, and (iv) inverts the resulting curve, providing an estimate of the underlying longitudinal trajectory...
April 25, 2017: Statistics in Medicine
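The four steps lend themselves to a direct numerical sketch: if each individual's slope is approximately a function q of their marker value, then dv/dt = q(v), so t(v) = ∫ dv/q(v), and inverting t(v) recovers v(t). Below, noiseless synthetic short-term data from a hidden logistic trajectory, with step (i) reduced to crude chord slopes:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(8)
true = lambda t: 1 / (1 + np.exp(-(t - 10) / 3))         # hidden long-term curve

# (i) per-individual anchor value and short-term slope (2-unit windows)
starts = rng.uniform(0, 20, 300)
anchors = true(starts + 1.0)                             # mid-window value
slopes = (true(starts + 2.0) - true(starts)) / 2.0       # crude chord slope

# (ii) polynomial for slope as a function of value: dv/dt = q(v)
q = np.polynomial.Polynomial.fit(anchors, slopes, deg=3)

# (iii) integrate the reciprocal: t(v) = cumulative integral of 1/q(v)
v = np.linspace(anchors.min() + 1e-3, anchors.max() - 1e-3, 400)
t_of_v = cumulative_trapezoid(1.0 / q(v), v, initial=0.0)

# (iv) invert t(v) by interpolation to recover the trajectory v(t)
t_grid = np.linspace(t_of_v.min(), t_of_v.max(), 200)
v_of_t = np.interp(t_grid, t_of_v, v)                    # estimated trajectory
```

The recovered curve is determined only up to a shift in time, since the integration constant anchors it arbitrarily.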
https://www.readbyqxmd.com/read/28436050/comparing-performance-of-surgeons-using-risk-adjusted-procedures
#20
Xu Tang, Fah F Gan
It is naive and incorrect to use the proportions of successful operations to compare the performance of surgeons because the patients' risk profiles are different. In this paper, we explore the use of risk-adjusted procedures to compare the performance of surgeons. One such risk-adjusted statistic is the standardized mortality ratio (SMR), which measures the performance of a surgeon adjusted for the risks of patients, assuming the average performance of a group of surgeons. Unlike the traditional SMR, which is defined for a population, this SMR is a random variable...
April 24, 2017: Statistics in Medicine
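The SMR itself is a one-liner once a risk model supplies expected deaths, which helps make the abstract's point precise: the numerator is random, so surgeon comparisons need the SMR's sampling distribution, not just its value. Illustrative data:

```python
import numpy as np

def smr(outcomes, predicted_risks):
    """Observed deaths divided by the deaths expected under group-average
    performance (the sum of model-based patient risks); values above 1
    suggest worse-than-expected performance."""
    return np.sum(outcomes) / np.sum(predicted_risks)

rng = np.random.default_rng(9)
risks = rng.uniform(0.01, 0.20, 120)     # risk-model predictions per patient
outcomes = rng.binomial(1, risks)        # 1 = death
print(smr(outcomes, risks))              # near 1 for average performance
```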