Communications in Statistics: Theory and Methods

https://www.readbyqxmd.com/read/29416225/nonparametric-manova-approaches-for-non-normal-multivariate-outcomes-with-missing-values
#1
Fanyin He, Sati Mazumdar, Gong Tang, Triptish Bhatia, Stewart J Anderson, Mary Amanda Dew, Robert Krafty, Vishwajit Nimgaonkar, Smita Deshpande, Martica Hall, Charles F Reynolds
Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases...
2017: Communications in Statistics: Theory and Methods
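The extension for partially-observed cases is the paper's contribution and is not reproduced here; for orientation, the complete-case multivariate Kruskal-Wallis statistic can be sketched as a rank-transform MANOVA: rank each response variable separately, then compare group mean-rank vectors through the pooled rank covariance. A minimal sketch, assuming the usual form W = Σⱼ nⱼ ūⱼ′ V⁻¹ ūⱼ with a chi-square reference on p(g−1) degrees of freedom:

```python
import numpy as np
from scipy.stats import rankdata

def multivariate_kruskal_wallis(groups):
    """Complete-case multivariate Kruskal-Wallis statistic.

    groups: list of (n_j, p) arrays, one per group.
    Returns (W, df); W is referred to a chi-square with p*(g-1) df under H0.
    """
    X = np.vstack(groups)
    n, p = X.shape
    # Rank each response variable separately across all n observations
    # (average ranks for ties).
    R = np.column_stack([rankdata(X[:, k]) for k in range(p)])
    center = (n + 1) / 2.0
    # Pooled sample covariance matrix of the rank vectors.
    V = np.atleast_2d(np.cov(R, rowvar=False, ddof=1))
    W, start = 0.0, 0
    for g in groups:
        nj = g.shape[0]
        u = R[start:start + nj].mean(axis=0) - center  # centered mean-rank vector
        W += nj * u @ np.linalg.solve(V, u)
        start += nj
    return W, p * (len(groups) - 1)
```

For p = 1 and no ties this reduces exactly to the ordinary Kruskal-Wallis statistic, which is a quick way to sanity-check the implementation.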
https://www.readbyqxmd.com/read/29326496/a-simple-method-for-deriving-the-confidence-regions-for-the-penalized-cox-s-model-via-the-minimand-perturbation
#2
Chen-Yen Lin, Susan Halabi
We propose a minimand perturbation method to derive confidence regions for regularized estimators in the Cox proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/29081575/optimal-and-lead-in-adaptive-allocation-for-binary-outcomes-a-comparison-of-bayesian-methodologies
#3
Roy T Sabo, Ghalib Bello
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show: efficacy comparisons lead to more adaptation than center comparisons, though at some power loss; skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability...
2017: Communications in Statistics: Theory and Methods
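The posterior quantities that drive this kind of adaptation are straightforward to compute for binary outcomes. A minimal sketch, assuming independent Beta priors (uniform by default, a hypothetical choice) and a Monte Carlo estimate of the posterior probability that one arm is better, mapped to an allocation fraction clipped away from 0 and 1 as a common safeguard, not the paper's specific algorithms:

```python
import numpy as np

def posterior_prob_better(s1, n1, s2, n2, a=1.0, b=1.0, draws=100_000, seed=0):
    """Monte Carlo estimate of P(p1 > p2 | data) for two binomial arms
    under independent Beta(a, b) priors (uniform by default)."""
    rng = np.random.default_rng(seed)
    p1 = rng.beta(a + s1, b + n1 - s1, draws)
    p2 = rng.beta(a + s2, b + n2 - s2, draws)
    return float((p1 > p2).mean())

def adaptive_allocation(prob, floor=0.1):
    """Map the posterior probability to arm 1's allocation fraction,
    clipped away from 0/1 so both arms keep being sampled."""
    return min(max(prob, floor), 1.0 - floor)
```

With 8/10 successes versus 2/10, the posterior probability that arm 1 is better is close to 1, so the allocation would tilt heavily (but not entirely) toward arm 1.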
https://www.readbyqxmd.com/read/29081574/extensions-of-d-optimal-minimal-designs-for-symmetric-mixture-models
#4
Yanyan Li, Damaraju Raghavarao, Inna Chervoneva
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/28603337/sample-size-calculations-for-time-averaged-difference-of-longitudinal-binary-outcomes
#5
Ying Lou, Jing Cao, Song Zhang, Chul Ahn
In clinical trials with repeated measurements, the responses from each subject are measured multiple times during the study period. Two approaches have been widely used to assess the treatment effect, one that compares the rate of change between two groups and the other that tests the time-averaged difference (TAD). While sample size calculations based on comparing the rate of change between two groups have been reported by many investigators, the literature has paid relatively little attention to the sample size estimation for time-averaged difference (TAD) in the presence of heterogeneous correlation structure and missing data in repeated measurement studies...
2017: Communications in Statistics: Theory and Methods
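The paper's contribution is the TAD sample size under heterogeneous correlation and missing data, which is not reproduced here. For intuition, the textbook complete-data version under compound-symmetry correlation inflates the usual two-proportion formula by the design effect 1 + (m−1)ρ and divides by the number of repeated measures m:

```python
from math import ceil
from statistics import NormalDist

def n_per_group_tad(p1, p2, m, rho, alpha=0.05, power=0.8):
    """Per-group sample size for the time-averaged difference of a binary
    outcome with m repeated measures and compound-symmetry correlation rho.
    Textbook GEE formula for complete data; not the paper's general version."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    deff = 1 + (m - 1) * rho          # design effect for repeated measures
    return ceil((za + zb) ** 2 * var * deff / (m * (p1 - p2) ** 2))
```

Stronger within-subject correlation raises the required sample size, since repeated measures on the same subject then carry less independent information.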
https://www.readbyqxmd.com/read/28435181/orthogonality-of-the-mean-and-error-distribution-in-generalized-linear-models
#6
Alan Huang, Paul J Rathouz
We show that the mean-model parameter is always orthogonal to the error distribution in generalized linear models. Thus, the maximum likelihood estimator of the mean-model parameter will be asymptotically efficient regardless of whether the error distribution is known completely, known up to a finite vector of parameters, or left completely unspecified, in which case the likelihood is taken to be an appropriate semiparametric likelihood. Moreover, the maximum likelihood estimator of the mean-model parameter will be asymptotically independent of the maximum likelihood estimator of the error distribution...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/28008212/consistent-model-identification-of-varying-coefficient-quantile-regression-with-bic-tuning-parameter-selection
#7
Qi Zheng, Limin Peng
Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/27840548/t-type-corrected-loss-estimation-for-error-in-variable-model
#8
Jiao Jin, Liang Zhu, Xingwei Tong, Kirsten K Ness
In this paper, we consider a linear model in which the covariates are measured with errors. We propose a t-type corrected-loss estimation of the covariate effect when the measurement error follows the Laplace distribution. The proposed estimator is asymptotically normal. In practice, outliers can diminish the robustness of the estimation. Simulation studies show that the estimators are resistant to vertical outliers, and an application to Six-Minute Walk test data shows that the proposed method performs well...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/27293307/systematically-misclassified-binary-dependent-variables
#9
Vidhura Tennekoon, Robert Rosenman
When a binary dependent variable is misclassified, that is, recorded in the category other than where it really belongs, probit and logit estimates are biased and inconsistent. In some cases the probability of misclassification may vary systematically with covariates, and thus be endogenous. In this paper we develop an estimation approach that corrects for endogenous misclassification, validate our approach using a simulation study, and apply it to the analysis of a treatment program designed to improve family dynamics...
2016: Communications in Statistics: Theory and Methods
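The likelihood being corrected here is easy to write down in its constant-misclassification form: if a0 is the probability of recording a 1 when the truth is 0, and a1 the reverse, the observed-outcome probability is a0 + (1 − a0 − a1)Φ(xβ). A minimal sketch of that negative log-likelihood (the Hausman-type baseline; the paper's extension lets the misclassification rates depend on covariates, i.e. be endogenous):

```python
import numpy as np
from scipy.stats import norm

def negloglik_misclassified_probit(theta, X, y):
    """Negative log-likelihood of a probit model with constant
    misclassification rates.

    theta = (beta..., a0, a1), where
      a0 = P(record 1 | true 0), a1 = P(record 0 | true 1).
    """
    k = X.shape[1]
    beta, a0, a1 = theta[:k], theta[k], theta[k + 1]
    p_true = norm.cdf(X @ beta)                    # true-outcome probability
    p_obs = a0 + (1.0 - a0 - a1) * p_true          # observed-outcome probability
    p_obs = np.clip(p_obs, 1e-12, 1 - 1e-12)       # guard the logs
    return -np.sum(y * np.log(p_obs) + (1 - y) * np.log1p(-p_obs))
```

With a0 = a1 = 0 this collapses to the ordinary probit log-likelihood, which is why ignoring misclassification amounts to fitting the wrong special case.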
https://www.readbyqxmd.com/read/26924882/inversion-theorem-based-kernel-density-estimation-for-the-ordinary-least-squares-estimator-of-a-regression-coefficient
#10
Dongliang Wang, Alan D Hutson
The traditional confidence interval associated with the ordinary least squares estimator of a linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator by utilizing well-defined inversion-based kernel smoothing techniques in order to estimate the conditional probability density of the dependent random variable. Simulation results show that given a small sample size, our method significantly increases the power as compared with Wald-type CIs...
2015: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26924881/an-investigation-of-quantile-function-estimators-relative-to-quantile-confidence-interval-coverage
#11
Lai Wei, Dongliang Wang, Alan D Hutson
In this article, we investigate the limitations of traditional quantile function estimators and introduce a new class of quantile function estimators, namely, the semi-parametric tail-extrapolated quantile estimators, which has excellent performance for estimating the extreme tails with finite sample sizes. The smoothed bootstrap and direct density estimation via the characteristic function methods are developed for the estimation of confidence intervals. Through a comprehensive simulation study to compare the confidence interval estimations of various quantile estimators, we discuss the preferred quantile estimator in conjunction with the confidence interval estimation method to use under different circumstances...
2015: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26023251/moving-block-bootstrap-for-analyzing-longitudinal-data
#12
Hyunsu Ju
In a longitudinal study subjects are followed over time. I focus on a case where the number of replications over time is large relative to the number of subjects in the study. I investigate the use of moving block bootstrap methods for analyzing such data. Asymptotic properties of the bootstrap methods in this setting are derived. The effectiveness of these resampling methods is also demonstrated through a simulation study.
2015: Communications in Statistics: Theory and Methods
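The moving block bootstrap described in this abstract resamples overlapping blocks of consecutive observations rather than individual points, so that within-block serial dependence is preserved. A minimal sketch for a single stationary series (block length and statistic are the user's choice; the asymptotics studied in the paper are not reproduced):

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot, stat=np.mean, seed=0):
    """Moving block bootstrap: draw overlapping blocks of length block_len
    with replacement, concatenate to the original length, recompute stat."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    n = len(x)
    # All overlapping blocks of length block_len.
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    k = int(np.ceil(n / block_len))   # blocks needed to cover the series
    out = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=k)
        out[b] = stat(np.concatenate(blocks[idx])[:n])
    return out
```

A common rule of thumb takes the block length to grow like n^(1/3), balancing bias from broken dependence against variance from having few blocks.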
https://www.readbyqxmd.com/read/25530661/sample-size-requirements-and-study-duration-for-testing-main-effects-and-interactions-in-completely-randomized-factorial-designs-when-time-to-event-is-the-outcome
#13
Barry Kurt Moser, Susan Halabi
In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial...
2015: Communications in Statistics: Theory and Methods
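The paper's matrix formulation for arbitrary factorial arrangements is not reproduced here; the simplest special case it generalizes is the classical two-arm Schoenfeld formula, where under proportional (e.g. exponential) hazards the power is driven by the number of events rather than the number of subjects:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Required number of events for a two-arm log-rank comparison under
    proportional hazards (classical Schoenfeld formula).

    hr: hazard ratio to detect; alloc: fraction allocated to arm 1.
    """
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil((za + zb) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2))
```

For example, detecting a hazard ratio of 0.5 with 80% power at a two-sided 5% level and 1:1 allocation requires 66 events; sample size and study duration then follow from the event rate, as in the exponential calculations the abstract describes.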
https://www.readbyqxmd.com/read/25089070/an-empirical-bayes-method-for-multivariate-meta-analysis-with-an-application-in-clinical-trials
#14
Yong Chen, Sheng Luo, Haitao Chu, Xiao Su, Lei Nie
We propose an empirical Bayes method for evaluating overall and study-specific treatment effects in multivariate meta-analysis with binary outcome. Instead of modeling transformed proportions or risks via commonly used multivariate general or generalized linear models, we directly model the risks without any transformation. The exact posterior distribution of the study-specific relative risk is derived. The hyperparameters in the posterior distribution can be inferred through an empirical Bayes procedure. As our method does not rely on the choice of transformation, it provides a flexible alternative to the existing methods and in addition, the correlation parameter can be intuitively interpreted as the correlation coefficient between risks...
July 29, 2014: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26997746/a-smooth-bootstrap-procedure-towards-deriving-confidence-intervals-for-the-relative-risk
#15
Dongliang Wang, Alan D Hutson
Given a pair of sample estimators of two independent proportions, bootstrap methods are a common strategy for deriving the associated confidence interval for the relative risk. We develop a new smooth bootstrap procedure, which generates pseudo-samples from a continuous quantile function. Under a variety of settings, our simulation studies show that our method performs as well as or better than asymptotic-theory-based and existing bootstrap methods in terms of coverage probability and power, particularly for heavily unbalanced data...
2014: Communications in Statistics: Theory and Methods
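The ordinary (non-smooth) bootstrap baseline that this procedure improves upon is simple to sketch: resample each binomial count parametrically and take percentile limits of the resampled risk ratios. This is not the paper's smooth method, which draws pseudo-samples from a continuous quantile function instead:

```python
import numpy as np

def rr_percentile_ci(x1, n1, x2, n2, n_boot=5000, level=0.95, seed=0):
    """Plain percentile-bootstrap CI for the relative risk p1/p2 from two
    independent binomial samples (ordinary resampling baseline)."""
    rng = np.random.default_rng(seed)
    b1 = rng.binomial(n1, x1 / n1, n_boot) / n1
    b2 = rng.binomial(n2, x2 / n2, n_boot) / n2
    keep = b2 > 0                      # drop undefined ratios
    rr = b1[keep] / b2[keep]
    lo, hi = np.quantile(rr, [(1 - level) / 2, (1 + level) / 2])
    return float(lo), float(hi)
```

The discreteness of the resampled counts is exactly what hurts this baseline for heavily unbalanced data, which motivates smoothing the resampling distribution.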
https://www.readbyqxmd.com/read/24465081/nonparametric-comparison-for-multivariate-panel-count-data
#16
Hui Zhao, Kate Virkler, Jianguo Sun
Multivariate panel count data often occur when there exist several related recurrent events or response variables defined by occurrences of related events. For univariate panel count data, several nonparametric treatment comparison procedures have been developed. However, no such nonparametric procedure seems to exist for the multivariate case. Based on differences between estimated mean functions, this paper proposes a class of nonparametric test procedures for multivariate panel count data. The asymptotic distribution of the new test statistics is established and a simulation study is conducted...
2014: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24465080/non-homogeneous-poisson-process-model-for-genetic-crossover-interference
#17
Szu-Yun Leu, Pranab K Sen
The genetic crossover interference is usually modeled with a stationary renewal process to construct the genetic map. We propose two non-homogeneous, dependent Poisson process models applied to the known physical map. The crossover process is assumed to start from an origin and to occur sequentially along the chromosome. The increment rate depends on the position of the markers and the number of crossover events occurring between the origin and the markers. We show how to obtain parameter estimates for the process and use simulation studies and real Drosophila data to examine the performance of the proposed models...
2014: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24347808/evaluation-of-a-frequentist-hierarchical-model-to-estimate-prevalence-when-sampling-from-a-large-geographic-area-using-pool-screening
#18
Thomas Birkner, Inmaculada B Aban, Charles R Katholi
We present a frequentist Bernoulli-Beta hierarchical model to relax the constant prevalence assumption underlying the traditional prevalence estimation approach based on pooled data. This assumption is called into question when sampling from a large geographic area. Pool screening is a method that combines individual items into pools. Each pool will either test positive (at least one of the items is positive) or negative (all items are negative). Pool screening is commonly applied to the study of tropical diseases where pools consist of vectors (e...
2013: Communications in Statistics: Theory and Methods
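The constant-prevalence baseline that this hierarchical model relaxes has a well-known closed-form MLE for equal pool sizes: if T of N pools of size k test positive, the pool-negative probability estimate (1 − T/N) corresponds to individual prevalence 1 − (1 − T/N)^(1/k). A minimal sketch of that baseline (the paper's Bernoulli-Beta hierarchy itself is not reproduced):

```python
def pool_screen_mle(positive_pools, total_pools, pool_size):
    """MLE of individual prevalence from equal-size pool testing under the
    constant-prevalence assumption that the hierarchical model relaxes."""
    q_pool = 1.0 - positive_pools / total_pools   # estimated P(pool negative)
    return 1.0 - q_pool ** (1.0 / pool_size)
```

With pool size 1 this reduces to the ordinary sample proportion, and a region of all-negative pools yields an estimate of exactly zero, one symptom of why a single constant prevalence is a strong assumption over a large geographic area.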
https://www.readbyqxmd.com/read/23847392/spatial-cluster-detection-for-longitudinal-outcomes-using-administrative-regions
#19
Andrea J Cook, Diane R Gold, Yi Li
This manuscript proposes a new spatial cluster detection method for longitudinal outcomes that detects neighborhoods and regions with elevated rates of disease while controlling for individual-level confounders. The proposed method, CumResPerm, utilizes cumulative geographic residuals through a permutation test to detect potential clusters, which are defined as sets of administrative regions, such as a town, or groups of administrative regions. Previous cluster detection methods cannot incorporate individual-level data, including covariate adjustment, while still being able to define potential clusters using informative neighborhood or town boundaries...
January 1, 2013: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/23750070/random-effects-coefficient-of-determination-for-mixed-and-meta-analysis-models
#20
Eugene Demidenko, James Sargent, Tracy Onega
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression...
January 1, 2012: Communications in Statistics: Theory and Methods