
Pharmaceutical Statistics

Laura Flight, Steven A Julious
No abstract text is available yet for this article.
December 2, 2016: Pharmaceutical Statistics
Ann-Kristin Leuchs, Andreas Brandt, Jörg Zinserling, Norbert Benda
Randomized controlled trials (RCTs) aim at providing reliable estimates of treatment benefit. Missing data and nonadherence to treatment are distinct problems that can substantially impede this task. In practice, the fact that the handling of missing data due to nonadherence affects the question being addressed is often ignored. Estimands allow the question of interest to be precisely predefined: an estimand specifies what is being estimated with regard to population, endpoint, and handling of postrandomization events (eg, nonadherence)...
December 2, 2016: Pharmaceutical Statistics
Shinjo Yada, Chikuma Hamada
Treatment during cancer clinical trials sometimes involves the combination of multiple drugs. In addition, in recent years there has been a trend toward phase I/II trials, in which a phase I and a phase II trial are combined into a single trial to accelerate drug development. Methods for the seamless combination of the phase I and phase II parts are currently under investigation. In the phase II part, adaptive randomization on the basis of patient efficacy outcomes allocates more patients to the dose combinations considered to have higher efficacy...
November 28, 2016: Pharmaceutical Statistics
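Outcome-adaptive randomization of the kind mentioned above is often driven by the posterior probability that each arm (or dose combination) currently has the highest response rate. A minimal sketch in Python, assuming independent Beta(1, 1) priors and binary efficacy outcomes (an illustration only, not the design studied in the paper):

```python
import random

def allocation_probabilities(successes, failures, n_draws=10000, seed=0):
    """For each arm, the Monte Carlo posterior probability that it has the
    highest response rate, under independent Beta(1+s, 1+f) posteriors."""
    rng = random.Random(seed)
    k = len(successes)
    wins = [0] * k
    for _ in range(n_draws):
        draws = [rng.betavariate(1 + successes[i], 1 + failures[i])
                 for i in range(k)]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]

# Arm 1 has 8/10 responders, arm 0 only 2/10:
probs = allocation_probabilities([2, 8], [8, 2])
```

Allocating new patients in proportion to these probabilities (possibly after tempering) steers enrolment toward the combinations that currently look most efficacious.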
Xiaoping Xiong, Jianrong Wu
The treatment of cancer has progressed dramatically in recent decades, such that it is no longer uncommon to see a cure or long-term survival in a significant proportion of patients with various types of cancer. To adequately account for the cure fraction when designing clinical trials, cure models should be used. In this article, a sample size formula for the weighted log-rank test is derived under the fixed alternative hypothesis for proportional hazards cure models. Simulation showed that the proposed formula provides an accurate estimate of the sample size required for designing clinical trials under proportional hazards cure models...
November 8, 2016: Pharmaceutical Statistics
O'Kelly M, Anisimov V, Campbell C, Hamilton S
Modelling and simulation have been used in many ways when developing new treatments. To be useful and credible, it is generally agreed that modelling and simulation should be undertaken according to some kind of best practice. A number of authors have suggested elements required for best practice in modelling and simulation, including the pre-specification of goals, assumptions, methods, and outputs. However, a project that involves modelling and simulation could be simple or complex and could be of relatively low or high importance to the project...
November 3, 2016: Pharmaceutical Statistics
Andrew P Grieve
The past 15 years have seen many pharmaceutical sponsors consider and implement adaptive designs (ADs) across all phases of drug development. Given their arrival at the turn of the millennium, we might think that they are a recent invention. That is not the case. The earliest idea of an AD predates Bradford Hill's MRC tuberculosis study, appearing in Biometrika in 1933. In this paper, we trace the development of response-adaptive designs, in which the allocation to intervention arms depends on the responses of subjects already treated...
October 12, 2016: Pharmaceutical Statistics
Simon Kirby, Christy Chuang-Stein
The first trial of clinical efficacy is an important step in the development of a compound. Such a trial gives the first indication of whether a compound is likely to have the efficacy needed to be successful. Good decisions dictate that good compounds have a large probability of being progressed and poor compounds have a large probability of being stopped. In this paper, we consider and contrast five approaches to decision-making that have been used. To illustrate the use of the five approaches, we conduct a comparison for two plausible scenarios with associated assumptions for sample sizing...
September 28, 2016: Pharmaceutical Statistics
Francesca Matano, Valeria Sambucini
In phase II single-arm studies, the response rate of the experimental treatment is typically compared with a fixed target value that should ideally represent the true response rate of the standard-of-care therapy. Generally, this target value is estimated from previous data, but the inherent variability in the historical response rate is not taken into account. In this paper, we present a Bayesian procedure for constructing single-arm two-stage designs that incorporates uncertainty in the response rate of the standard treatment...
November 2016: Pharmaceutical Statistics
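The key idea of carrying the historical uncertainty through, rather than fixing a point target, can be illustrated with a simple Monte Carlo calculation, assuming independent Beta posteriors for the experimental and standard response rates (a sketch only; the paper's two-stage design involves considerably more than this):

```python
import random

def prob_exceeds_standard(n_exp, x_exp, n_hist, x_hist,
                          n_draws=20000, seed=1):
    """Monte Carlo estimate of P(p_exp > p_std | data), with independent
    Beta(1 + successes, 1 + failures) posteriors for both rates, so the
    variability of the historical (standard-of-care) rate is propagated
    instead of being replaced by a fixed target value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        p_exp = rng.betavariate(1 + x_exp, 1 + n_exp - x_exp)
        p_std = rng.betavariate(1 + x_hist, 1 + n_hist - x_hist)
        if p_exp > p_std:
            hits += 1
    return hits / n_draws

# 14/30 responders on the experimental arm vs 20/100 historical responders:
p = prob_exceeds_standard(30, 14, 100, 20)
```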
Wim Van der Elst, Geert Molenberghs, Ralf-Dieter Hilgers, Geert Verbeke, Nicole Heussen
There are various settings in which researchers are interested in the assessment of the correlation between repeated measurements that are taken within the same subject (i.e., reliability). For example, the same rating scale may be used to assess the symptom severity of the same patients by multiple physicians, or the same outcome may be measured repeatedly over time in the same patients. Reliability can be estimated in various ways, for example, using the classical Pearson correlation or the intra-class correlation in clustered data...
November 2016: Pharmaceutical Statistics
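For a balanced one-way layout (each subject measured the same number of times by interchangeable raters or occasions), the intra-class correlation mentioned above can be computed from the ANOVA mean squares. A minimal sketch, assuming the ICC(1) form and a balanced design:

```python
def icc_oneway(groups):
    """One-way ANOVA intra-class correlation ICC(1) for a balanced design:
    `groups` is a list of lists, one inner list of repeated measurements
    per subject, all of the same length."""
    k = len(groups)                       # number of subjects
    n = len(groups[0])                    # measurements per subject
    grand = sum(sum(g) for g in groups) / (k * n)
    ms_between = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum((x - sum(g) / n) ** 2
                    for g in groups for x in g) / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

# Perfect within-subject agreement gives ICC = 1:
icc_perfect = icc_oneway([[1, 1], [2, 2], [3, 3]])
```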
Ming Zhou, Sudeep Kundu
Non-inferiority trials aim to demonstrate that an experimental therapy is not unacceptably worse than an active reference therapy already in use. When applicable, a three-arm non-inferiority trial, including an experimental therapy, an active reference therapy, and a placebo, is often recommended to assess the assay sensitivity and internal validity of a trial. In this paper, we share some practical considerations based on our experience from a phase III three-arm non-inferiority trial. First, we discuss the determination of the total sample size and its optimal allocation based on the overall power of the non-inferiority testing procedure and provide ready-to-use R code for implementation...
November 2016: Pharmaceutical Statistics
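To give a flavour of the sample-size-allocation question, here is a sketch of the standard z-test sample-size formula for the experimental-versus-reference non-inferiority comparison of a normal endpoint, with an allocation ratio between the two arms (a generic textbook formula, not the paper's three-arm procedure, which must also power the assay-sensitivity comparison against placebo):

```python
import math
from statistics import NormalDist

def ni_total_n(margin, true_diff, sigma, ratio, alpha=0.025, power=0.9):
    """Total N for the experimental-vs-reference non-inferiority z-test
    on a normal endpoint, with n_ref = ratio * n_exp.
    H0: mu_E - mu_R <= -margin, tested one-sided at level alpha;
    true_diff is the assumed true mu_E - mu_R."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(power)
    # Var(difference in means) = sigma^2 (1/n_E + 1/n_R)
    #                          = sigma^2 (1 + 1/ratio) / n_E
    n_exp = math.ceil((za + zb) ** 2 * sigma ** 2 * (1 + 1 / ratio)
                      / (margin + true_diff) ** 2)
    return n_exp + math.ceil(ratio * n_exp)

# Equal allocation, margin 2, no true difference, SD 4:
total = ni_total_n(2, 0, 4, 1)
```

Evaluating `ni_total_n` over a grid of allocation ratios shows how unbalanced allocation inflates the total N for this pairwise comparison, which is one ingredient of the optimal-allocation trade-off the authors discuss.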
P Bunouf, G Molenberghs
Modern analysis of incomplete longitudinal outcomes involves formulating an assumption about the missingness mechanism and then using a statistical method that produces valid inferences under that assumption. In this manuscript, we define missingness strategies for analyzing randomized clinical trials (RCTs) based on plausible clinical scenarios. Penalties for dropout are also introduced in an attempt to balance benefits against risks. Some missingness mechanisms are assumed to be non-future dependent, a subclass of missing not at random...
November 2016: Pharmaceutical Statistics
Ronald W Helms
Biostatisticians recognize the importance of precise definitions of technical terms in randomized controlled clinical trial (RCCT) protocols, statistical analysis plans, and so on, in part because definitions are a foundation for subsequent actions. Imprecise definitions can be a source of controversies about appropriate statistical methods, interpretation of results, and extrapolations to larger populations. This paper presents precise definitions of some familiar terms and definitions of some new terms, some perhaps controversial...
November 2016: Pharmaceutical Statistics
Andrew P Grieve, Shah-Jalal Sarker
There have been many approximations developed for sample sizing of a logistic regression model with a single normally-distributed stimulus. Despite this, it has been recognised that there is no consensus as to the best method. In pharmaceutical drug development, simulation provides a powerful tool to characterise the operating characteristics of complex adaptive designs and is an ideal method for determining the sample size for such a problem. In this paper, we address some issues associated with applying simulation to determine the sample size for a given power in the context of logistic regression...
November 2016: Pharmaceutical Statistics
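The simulation idea — generate trials, fit the logistic model, and count Wald-test rejections — can be sketched in pure Python with a small Newton-Raphson fit (a toy version for illustration; the function names and settings are ours, not the authors'):

```python
import math
import random

def _expit(z):
    # clamp the linear predictor to avoid overflow in math.exp
    z = max(-30.0, min(30.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, iters=15):
    """Newton-Raphson fit of logit P(y=1) = b0 + b1*x.
    Returns (b1_hat, se_b1) from the observed Fisher information."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = _expit(b0 + b1 * x)
            w = p * (1.0 - p)
            g0 += y - p
            g1 += (y - p) * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b1, math.sqrt(h00 / (h00 * h11 - h01 * h01))

def simulated_power(n, beta0, beta1, n_sims=200, seed=2):
    """Fraction of simulated trials in which the two-sided 5% Wald test
    rejects b1 = 0; the stimulus x is standard normal."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys = [1 if rng.random() < _expit(beta0 + beta1 * x) else 0
              for x in xs]
        b1, se = fit_logistic(xs, ys)
        if abs(b1 / se) > 1.959964:
            rejections += 1
    return rejections / n_sims
```

Increasing `n` until `simulated_power` exceeds the target power yields a simulation-based sample size; in practice many more than 200 simulated trials would be used to pin down the power precisely.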
Tarylee Reddy, Geert Molenberghs, Edmund Njeru Njagi, Marc Aerts
In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error...
November 2016: Pharmaceutical Statistics
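The persistence criterion itself is easy to state in code: the outcome is the first visit at which a second consecutive below-threshold measurement is observed. A minimal sketch (the paper's contribution — modelling the patient-specific trajectory and measurement error — is not reproduced here):

```python
def time_to_two_consecutive_below(values, threshold):
    """0-based index of the SECOND of the first two consecutive
    measurements below `threshold`; None if the criterion is never met."""
    for i in range(1, len(values)):
        if values[i - 1] < threshold and values[i] < threshold:
            return i
    return None

# The single dip to 340 at visit 1 does not trigger the criterion;
# the pair (320, 310) at visits 3-4 does.
t = time_to_two_consecutive_below([500, 340, 360, 320, 310], 350)
```

This illustrates the abstract's point that a single below-threshold CD4 count, which may reflect measurement variability, is not treated as attainment of the endpoint.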
Akihiro Hirakawa, Hiroyuki Sato, Masahiko Gosho
Model-based dose-finding methods for a combination therapy involving two agents in phase I oncology trials typically include four design aspects, namely, the size of the patient cohort, the three-parameter dose-toxicity model, the choice of start-up rule, and whether or not to restrict dose-level skipping. The effect of each design aspect on the operating characteristics of the dose-finding method has not been adequately studied. However, some studies have compared the performance of rival dose-finding methods using the design aspects outlined in the original studies...
November 2016: Pharmaceutical Statistics
Wei Jiang, Jonathan D Mahnken, Jianghua He, Matthew S Mayo
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample size subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to phase II clinical trials with endpoints from the exponential dispersion family of distributions. The proposed optimal design minimizes the total sample size needed to provide estimates of the population means of both arms and their difference with pre-specified precision...
November 2016: Pharmaceutical Statistics
Bernard Sébastien, David Hoffman, Clémence Rigaux, Franck Pellissier, Jérôme Msihid
This article describes how a frequentist model averaging approach can be used for concentration-QT analyses in the context of thorough QTc studies. Based on simulations, we have concluded that starting from three candidate model families (linear, exponential, and Emax) the model averaging approach leads to treatment effect estimates that are quite robust with respect to the control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than the widely used linear model...
November 2016: Pharmaceutical Statistics
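A common frequentist model-averaging recipe combines the candidate models' estimates using Akaike weights. A sketch under that assumption (the paper does not necessarily use exactly these weights):

```python
import math

def akaike_weights(aics):
    """Akaike weights w_i = exp(-0.5*(AIC_i - AIC_min)), normalized to
    sum to 1; smaller AIC means larger weight."""
    amin = min(aics)
    raw = [math.exp(-0.5 * (a - amin)) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

def averaged_estimate(estimates, aics):
    """Model-averaged treatment-effect estimate: an AIC-weighted
    combination of the candidate models' estimates (e.g., one each from
    the linear, exponential, and Emax fits)."""
    w = akaike_weights(aics)
    return sum(wi * ei for wi, ei in zip(w, estimates))
```

When one model fits much better than the others, its weight approaches 1 and model averaging reduces to model selection; when the fits are comparable, the averaged estimate hedges across the candidate families, which is the source of the robustness to misspecification described above.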
Kaifeng Lu
We study the properties of the treatment effect estimate, in terms of the odds ratio at the study end point, from a logistic regression model adjusting for the baseline value when the underlying continuous repeated measurements follow a multivariate normal distribution. Compared with an analysis that does not adjust for the baseline value, the adjusted analysis produces a larger treatment effect as well as a larger standard error. However, the increase in standard error is more than offset by the increase in treatment effect, so the adjusted analysis is more powerful than the unadjusted analysis for detecting the treatment effect...
September 1, 2016: Pharmaceutical Statistics
Peter L Bonate
The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test statistic or the AIC...
September 1, 2016: Pharmaceutical Statistics
Gaohong Dong, Di Li, Steffen Ballerstedt, Marc Vandemeulebroecke
A composite endpoint combines multiple endpoints into one outcome. It is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: a) in conventional analyses, all components are treated as equally important; and b) in time-to-event analyses, the first event considered may not be the most important component. Recently, Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched pair approach and the unmatched pair approach...
September 2016: Pharmaceutical Statistics
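For the unmatched pair approach, the win ratio is the number of treatment "wins" divided by the number of treatment "losses" over all treatment-control pairs, with components compared in order of clinical priority. A minimal sketch, assuming each patient is represented by a priority-ordered tuple in which larger values are better (pairs tied on every component contribute to neither count):

```python
def win_ratio(treatment, control):
    """Unmatched-pairs win ratio: each treatment patient is compared with
    each control patient on priority-ordered outcome tuples; later
    components are consulted only to break ties on earlier ones."""
    wins = losses = 0
    for t in treatment:
        for c in control:
            for tv, cv in zip(t, c):
                if tv > cv:
                    wins += 1
                    break
                if tv < cv:
                    losses += 1
                    break
    return wins / losses
```

In the cardiovascular setting of Pocock et al., the first component might encode freedom from death and the second freedom from hospitalization, so the hierarchy reflects clinical importance rather than mere time-to-first-event.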