Communications in Statistics: Theory and Methods

https://www.readbyqxmd.com/read/29962658/an-evaluation-of-common-methods-for-dichotomization-of-continuous-variables-to-discriminate-disease-status
#1
Sybil L Prince Nelson, Viswanathan Ramakrishnan, Paul J Nietert, Diane L Kamen, Paula S Ramos, Bethany J Wolf
Dichotomization of continuous variables to discriminate a dichotomous outcome is often useful in statistical applications. If a true threshold for a continuous variable exists, the challenge is identifying it. This paper examines common methods for dichotomization to identify which ones recover a true threshold. We provide mathematical and numeric proofs demonstrating that maximizing the odds ratio, Youden's statistic, Gini Index, chi-square statistic, relative risk and kappa statistic all theoretically recover a true threshold...
2017: Communications in Statistics: Theory and Methods
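As a concrete illustration of one of the criteria the abstract names, here is a minimal sketch (not the authors' code) of recovering a threshold by maximizing Youden's statistic, J = sensitivity + specificity - 1, over candidate cutoffs; the toy data below are hypothetical.

```python
# Illustrative sketch: dichotomize a continuous variable by maximizing
# Youden's J = sensitivity + specificity - 1 over candidate cutoffs.
def youden_j(values, labels, threshold):
    """Youden's J when 'value >= threshold' is classified as positive."""
    tp = sum(1 for v, y in zip(values, labels) if v >= threshold and y == 1)
    fn = sum(1 for v, y in zip(values, labels) if v < threshold and y == 1)
    tn = sum(1 for v, y in zip(values, labels) if v < threshold and y == 0)
    fp = sum(1 for v, y in zip(values, labels) if v >= threshold and y == 0)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens + spec - 1

def best_threshold(values, labels):
    """Grid-search the observed values as candidate cutoffs."""
    return max(sorted(set(values)), key=lambda t: youden_j(values, labels, t))

# Toy data with a true threshold at 5: outcomes flip from 0 to 1 there.
values = [1, 2, 3, 4, 5, 6, 7, 8]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
cutoff = best_threshold(values, labels)
```

With a clean separation like this toy example, the maximizer of J coincides with the true threshold, which is the kind of recovery property the paper proves for several criteria.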
https://www.readbyqxmd.com/read/29795710/generalized-confidence-intervals-compatible-with-the-min-test-for-simultaneous-comparisons-of-one-subpopulation-to-several-other-subpopulations
#2
Julia N Soulakova
A problem where one subpopulation is compared to several other subpopulations in terms of means, with the goal of estimating the smallest difference between the means, commonly arises in biology, medicine, and many other scientific fields. A generalization of the Strassburger, Bretz, and Hochberg (2004) approach for two comparisons is presented for cases with three or more comparisons. The method allows constructing an interval estimator for the smallest mean difference that is compatible with the Min test. The method is illustrated with an application to a fluency-disorder study...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/29725157/testing-homogeneity-in-semiparametric-mixture-case-control-models
#3
Chong-Zhi Di, Kwun Chuen Gary Chan, Cheng Zheng, Kung-Yee Liang
Parametric and semiparametric mixture models have been widely used in applications from many areas, and it is often of interest to test homogeneity in these models. However, hypothesis testing is nonstandard due to the fact that several regularity conditions do not hold under the null hypothesis. We consider a semiparametric mixture case-control model, in the sense that the density ratio of two distributions is assumed to be of an exponential form, while the baseline density is unspecified. This model was first considered by Qin and Liang (2011, Biometrics), who proposed a modified score statistic for testing homogeneity...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/29416225/nonparametric-manova-approaches-for-non-normal-multivariate-outcomes-with-missing-values
#4
Fanyin He, Sati Mazumdar, Gong Tang, Triptish Bhatia, Stewart J Anderson, Mary Amanda Dew, Robert Krafty, Vishwajit Nimgaonkar, Smita Deshpande, Martica Hall, Charles F Reynolds
Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/29326496/a-simple-method-for-deriving-the-confidence-regions-for-the-penalized-cox-s-model-via-the-minimand-perturbation
#5
Chen-Yen Lin, Susan Halabi
We propose a minimand perturbation method to derive confidence regions for regularized estimators in the Cox proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite-sample performance is not entirely satisfactory...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/29081575/optimal-and-lead-in-adaptive-allocation-for-binary-outcomes-a-comparison-of-bayesian-methodologies
#6
Roy T Sabo, Ghalib Bello
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though with some loss of power, and that skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability...
2017: Communications in Statistics: Theory and Methods
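A hedged sketch of the posterior-based adaptation idea (not the authors' exact algorithm): with Beta(1, 1) priors, each arm's response rate has a Beta(1 + successes, 1 + failures) posterior, and the next allocation probability to arm A can be taken as the posterior probability that A is the better arm, estimated by Monte Carlo. All counts below are made up.

```python
# Posterior-based response-adaptive allocation for two arms, binary outcomes.
import random

def prob_a_better(succ_a, fail_a, succ_b, fail_b, draws=20000, seed=42):
    """Monte Carlo estimate of P(p_A > p_B) under Beta(1,1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        pa = rng.betavariate(1 + succ_a, 1 + fail_a)  # draw from posterior of p_A
        pb = rng.betavariate(1 + succ_b, 1 + fail_b)  # draw from posterior of p_B
        if pa > pb:
            wins += 1
    return wins / draws

# Hypothetical interim data -- arm A: 8/10 responses; arm B: 3/10 responses.
alloc_a = prob_a_better(8, 2, 3, 7)
```

With data this lopsided the allocation probability to arm A is close to 1, which is exactly the heavy adaptation (and allocation variability) the paper's skeptical-distribution variant is designed to temper.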
https://www.readbyqxmd.com/read/29081574/extensions-of-d-optimal-minimal-designs-for-symmetric-mixture-models
#7
Yanyan Li, Damaraju Raghavarao, Inna Chervoneva
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform lack-of-fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/28603337/sample-size-calculations-for-time-averaged-difference-of-longitudinal-binary-outcomes
#8
Ying Lou, Jing Cao, Song Zhang, Chul Ahn
In clinical trials with repeated measurements, the responses from each subject are measured multiple times during the study period. Two approaches have been widely used to assess the treatment effect: one compares the rate of change between two groups, and the other tests the time-averaged difference (TAD). While sample size calculations based on comparing the rate of change between two groups have been reported by many investigators, the literature has paid relatively little attention to sample size estimation for the TAD in the presence of heterogeneous correlation structure and missing data in repeated-measurement studies...
2017: Communications in Statistics: Theory and Methods
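For orientation, a textbook GEE-style TAD sample size formula for binary outcomes under an exchangeable correlation structure with no missing data (the simple baseline that the paper generalizes to heterogeneous correlation and missing data); the numbers plugged in are hypothetical.

```python
# Baseline TAD sample size: m repeated binary measurements per subject,
# exchangeable within-subject correlation rho, two-sided level alpha.
from math import ceil
from statistics import NormalDist

def tad_sample_size(p1, p2, m, rho, alpha=0.05, power=0.80):
    """Subjects per group for detecting a time-averaged difference p1 - p2."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    inflation = 1 + (m - 1) * rho          # design effect for repeated measures
    return ceil((za + zb) ** 2 * variance * inflation / (m * (p1 - p2) ** 2))

# Hypothetical inputs: rates 0.5 vs 0.3, four measurements, correlation 0.3.
n_per_group = tad_sample_size(p1=0.5, p2=0.3, m=4, rho=0.3)
```

Note how the design effect 1 + (m - 1)·rho discounts the benefit of extra repeated measurements; heterogeneous correlation and dropout, the paper's focus, change this inflation term.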
https://www.readbyqxmd.com/read/28435181/orthogonality-of-the-mean-and-error-distribution-in-generalized-linear-models
#9
Alan Huang, Paul J Rathouz
We show that the mean-model parameter is always orthogonal to the error distribution in generalized linear models. Thus, the maximum likelihood estimator of the mean-model parameter will be asymptotically efficient regardless of whether the error distribution is known completely, known up to a finite vector of parameters, or left completely unspecified, in which case the likelihood is taken to be an appropriate semiparametric likelihood. Moreover, the maximum likelihood estimator of the mean-model parameter will be asymptotically independent of the maximum likelihood estimator of the error distribution...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/28008212/consistent-model-identification-of-varying-coefficient-quantile-regression-with-bic-tuning-parameter-selection
#10
Qi Zheng, Limin Peng
Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/27840548/t-type-corrected-loss-estimation-for-error-in-variable-model
#11
Jiao Jin, Liang Zhu, Xingwei Tong, Kirsten K Ness
In this paper, we consider a linear model in which the covariates are measured with errors. We propose a t-type corrected-loss estimation of the covariate effect when the measurement error follows the Laplace distribution. The proposed estimator is asymptotically normal. In practical studies, outliers can occur that diminish the robustness of the estimation. Simulation studies show that the estimators are resistant to vertical outliers, and an application to the Six-Minute Walk Test is presented to show that the proposed method performs well...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/27293307/systematically-misclassified-binary-dependent-variables
#12
Vidhura Tennekoon, Robert Rosenman
When a binary dependent variable is misclassified, that is, recorded in the category other than the one where it really belongs, probit and logit estimates are biased and inconsistent. In some cases the probability of misclassification may vary systematically with covariates, and thus be endogenous. In this paper we develop an estimation approach that corrects for endogenous misclassification, validate our approach using a simulation study, and apply it to the analysis of a treatment program designed to improve family dynamics...
2016: Communications in Statistics: Theory and Methods
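The standard misclassification-adjusted probit likelihood behind approaches like this can be sketched as follows (constant misclassification rates here; the paper's contribution is letting them vary with covariates). Writing alpha0 for the false-positive and alpha1 for the false-negative probability, the observed response probability is alpha0 + (1 - alpha0 - alpha1)·Phi(x'beta):

```python
# Misclassification-adjusted probit: observed-response probability and the
# corresponding negative log-likelihood for a scalar covariate.
from math import log
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF

def observed_prob(xb, alpha0, alpha1):
    """P(recorded y = 1 | x) when the true y is misclassified."""
    return alpha0 + (1 - alpha0 - alpha1) * phi(xb)

def neg_loglik(beta, data, alpha0, alpha1):
    """Negative log-likelihood over data = [(x, y), ...] with y in {0, 1}."""
    total = 0.0
    for x, y in data:
        p = observed_prob(beta * x, alpha0, alpha1)
        total -= log(p) if y == 1 else log(1 - p)
    return total
```

Maximizing this likelihood (over beta and, if identified, the alphas) is what restores consistency; setting alpha0 = alpha1 = 0 recovers the ordinary probit likelihood.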
https://www.readbyqxmd.com/read/26924882/inversion-theorem-based-kernel-density-estimation-for-the-ordinary-least-squares-estimator-of-a-regression-coefficient
#13
Dongliang Wang, Alan D Hutson
The traditional confidence interval associated with the ordinary least squares estimator of a linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator by utilizing inversion-based kernel smoothing techniques to estimate the conditional probability density of the dependent random variable. Simulation results show that, given a small sample size, our method significantly increases the power as compared with Wald-type confidence intervals...
2015: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26924881/an-investigation-of-quantile-function-estimators-relative-to-quantile-confidence-interval-coverage
#14
Lai Wei, Dongliang Wang, Alan D Hutson
In this article, we investigate the limitations of traditional quantile function estimators and introduce a new class of quantile function estimators, namely, the semi-parametric tail-extrapolated quantile estimators, which has excellent performance for estimating the extreme tails with finite sample sizes. The smoothed bootstrap and direct density estimation via the characteristic function methods are developed for the estimation of confidence intervals. Through a comprehensive simulation study to compare the confidence interval estimations of various quantile estimators, we discuss the preferred quantile estimator in conjunction with the confidence interval estimation method to use under different circumstances...
2015: Communications in Statistics: Theory and Methods
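As a point of reference for the interval comparisons the abstract describes, a plain percentile-bootstrap confidence interval for a quantile (the simple baseline, not the paper's tail-extrapolated estimator or smoothed bootstrap); the data are a hypothetical toy sample.

```python
# Percentile-bootstrap confidence interval for a sample quantile.
import random

def quantile(xs, q):
    """Simple nearest-rank quantile of a sample."""
    xs = sorted(xs)
    idx = min(len(xs) - 1, max(0, int(q * len(xs))))
    return xs[idx]

def bootstrap_ci(xs, q, level=0.90, reps=2000, seed=7):
    """Percentile CI: resample with replacement, take empirical quantiles."""
    rng = random.Random(seed)
    stats = sorted(
        quantile([rng.choice(xs) for _ in xs], q) for _ in range(reps)
    )
    lo = stats[int((1 - level) / 2 * reps)]
    hi = stats[int((1 + level) / 2 * reps) - 1]
    return lo, hi

sample = list(range(1, 101))          # toy data: 1..100
low, high = bootstrap_ci(sample, 0.95)
```

For extreme tails with small samples, intervals like this degrade badly because resamples rarely reach beyond the observed extremes, which is the motivation for tail-extrapolated estimators.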
https://www.readbyqxmd.com/read/26023251/moving-block-bootstrap-for-analyzing-longitudinal-data
#15
Hyunsu Ju
In a longitudinal study subjects are followed over time. I focus on a case where the number of replications over time is large relative to the number of subjects in the study. I investigate the use of moving block bootstrap methods for analyzing such data. Asymptotic properties of the bootstrap methods in this setting are derived. The effectiveness of these resampling methods is also demonstrated through a simulation study.
2015: Communications in Statistics: Theory and Methods
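The core resampling step can be illustrated with a minimal sketch of the moving block bootstrap: draw overlapping blocks of a fixed length uniformly at random and concatenate them, so short-range serial dependence inside each block is preserved. Block length and data below are hypothetical.

```python
# Moving block bootstrap: resample overlapping blocks of length block_len
# and concatenate them to rebuild a series of the original length.
import random

def moving_block_bootstrap(series, block_len, seed=0):
    rng = random.Random(seed)
    n = len(series)
    blocks_needed = -(-n // block_len)          # ceiling division
    starts = range(n - block_len + 1)           # all overlapping block starts
    out = []
    for _ in range(blocks_needed):
        s = rng.choice(starts)
        out.extend(series[s:s + block_len])
    return out[:n]                              # trim to original length

series = list(range(20))
resampled = moving_block_bootstrap(series, block_len=4)
```

Choosing the block length to grow with the number of replications per subject is what drives the asymptotic results in settings like this one, where the time dimension is long relative to the number of subjects.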
https://www.readbyqxmd.com/read/25530661/sample-size-requirements-and-study-duration-for-testing-main-effects-and-interactions-in-completely-randomized-factorial-designs-when-time-to-event-is-the-outcome
#16
Barry Kurt Moser, Susan Halabi
In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial...
2015: Communications in Statistics: Theory and Methods
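For the simplest special case of such calculations (two arms, equal allocation, exponential survival), the classical Schoenfeld events formula gives the required number of events; this is a generic illustration, not the paper's matrix formulation for general factorial arrangements.

```python
# Schoenfeld's formula: events needed to detect hazard ratio hr with a
# two-sided level-alpha log-rank test at the given power, 1:1 allocation.
from math import ceil, log
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(4 * (za + zb) ** 2 / log(hr) ** 2)

events = required_events(hr=0.7)
```

Converting required events into sample size and study duration then depends on accrual and follow-up assumptions, which is where the exponential-distribution relationships described in the abstract come in.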
https://www.readbyqxmd.com/read/25089070/an-empirical-bayes-method-for-multivariate-meta-analysis-with-an-application-in-clinical-trials
#17
Yong Chen, Sheng Luo, Haitao Chu, Xiao Su, Lei Nie
We propose an empirical Bayes method for evaluating overall and study-specific treatment effects in multivariate meta-analysis with binary outcome. Instead of modeling transformed proportions or risks via commonly used multivariate general or generalized linear models, we directly model the risks without any transformation. The exact posterior distribution of the study-specific relative risk is derived. The hyperparameters in the posterior distribution can be inferred through an empirical Bayes procedure. As our method does not rely on the choice of transformation, it provides a flexible alternative to the existing methods and in addition, the correlation parameter can be intuitively interpreted as the correlation coefficient between risks...
July 29, 2014: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26997746/a-smooth-bootstrap-procedure-towards-deriving-confidence-intervals-for-the-relative-risk
#18
Dongliang Wang, Alan D Hutson
Given a pair of sample estimators of two independent proportions, bootstrap methods are a common strategy for deriving the associated confidence interval for the relative risk. We develop a new smooth bootstrap procedure, which generates pseudo-samples from a continuous quantile function. Under a variety of settings, our simulation studies show that our method performs as well as or better than asymptotic-theory-based and existing bootstrap methods, particularly for heavily unbalanced data, in terms of coverage probability and power...
2014: Communications in Statistics: Theory and Methods
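For context, here is the plain (non-smooth) percentile bootstrap the paper improves on: resample the two binomial samples, recompute the relative risk, and take empirical quantiles. The smooth variant instead draws pseudo-samples from a continuous quantile function; the counts below are hypothetical.

```python
# Plain percentile-bootstrap CI for the relative risk of two proportions.
import random

def rr_bootstrap_ci(x1, n1, x2, n2, level=0.95, reps=4000, seed=1):
    """Percentile bootstrap CI for (x1/n1) / (x2/n2)."""
    rng = random.Random(seed)
    rrs = []
    for _ in range(reps):
        b1 = sum(rng.random() < x1 / n1 for _ in range(n1))  # resample group 1
        b2 = sum(rng.random() < x2 / n2 for _ in range(n2))  # resample group 2
        if b1 > 0 and b2 > 0:                # skip degenerate resamples
            rrs.append((b1 / n1) / (b2 / n2))
    rrs.sort()
    k = len(rrs)
    return rrs[int((1 - level) / 2 * k)], rrs[int((1 + level) / 2 * k) - 1]

lo, hi = rr_bootstrap_ci(x1=30, n1=100, x2=15, n2=100)
```

The degenerate-resample problem visible in the `b1 > 0 and b2 > 0` guard is one reason discrete resampling struggles with heavily unbalanced data, where a smooth bootstrap can do better.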
https://www.readbyqxmd.com/read/24465081/nonparametric-comparison-for-multivariate-panel-count-data
#19
Hui Zhao, Kate Virkler, Jianguo Sun
Multivariate panel count data often occur when there exist several related recurrent events, or response variables defined by occurrences of related events. For univariate panel count data, several nonparametric treatment comparison procedures have been developed. However, no nonparametric procedure appears to exist for the multivariate case. Based on differences between estimated mean functions, this paper proposes a class of nonparametric test procedures for multivariate panel count data. The asymptotic distribution of the new test statistics is established and a simulation study is conducted...
2014: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24465080/non-homogeneous-poisson-process-model-for-genetic-crossover-interference
#20
Szu-Yun Leu, Pranab K Sen
Genetic crossover interference is usually modeled with a stationary renewal process to construct the genetic map. We propose two non-homogeneous, and also dependent, Poisson process models applied to the known physical map. The crossover process is assumed to start from an origin and to occur sequentially along the chromosome. The increment rate depends on the position of the markers and on the number of crossover events occurring between the origin and the markers. We show how to obtain parameter estimates for the process and use simulation studies and real Drosophila data to examine the performance of the proposed models.
2014: Communications in Statistics: Theory and Methods
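A generic sketch of simulating a non-homogeneous Poisson process by thinning (the Lewis-Shedler method), with a position-dependent rate in the spirit of the models described above; the particular rate function is a hypothetical example, not one from the paper.

```python
# Lewis-Shedler thinning: propose candidate points from a homogeneous Poisson
# process at rate_max, then keep each point with probability rate(t)/rate_max.
import random

def simulate_nhpp(rate, horizon, rate_max, seed=3):
    """Event positions of an NHPP with intensity rate(t) on [0, horizon]."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)           # candidate inter-arrival gap
        if t > horizon:
            return events
        if rng.random() < rate(t) / rate_max:    # accept w.p. rate(t)/rate_max
            events.append(t)

# Hypothetical intensity: events more frequent toward the end of the interval;
# it is bounded by 2.0 on [0, 1], so rate_max = 2.0 is a valid envelope.
rate = lambda t: 0.5 + 1.5 * t
events = simulate_nhpp(rate, horizon=1.0, rate_max=2.0)
```

Making the acceptance rate depend additionally on the number of points already accepted would turn this into a dependent (self-exciting or self-inhibiting) process, which is the kind of dependence the paper's models introduce.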