Communications in Statistics: Theory and Methods

https://www.readbyqxmd.com/read/27840548/t-type-corrected-loss-estimation-for-error-in-variable-model
#1
Jiao Jin, Liang Zhu, Xingwei Tong, Kirsten K Ness
In this paper, we consider a linear model in which the covariates are measured with errors. We propose a t-type corrected-loss estimation of the covariate effect when the measurement error follows the Laplace distribution. The proposed estimator is asymptotically normal. In practical studies, outliers can occur that diminish the robustness of the estimation. Simulation studies show that the estimators are resistant to vertical outliers, and an application to the Six-Minute Walk Test is presented to show that the proposed method performs well...
2017: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/27293307/systematically-misclassified-binary-dependent-variables
#2
Vidhura Tennekoon, Robert Rosenman
When a binary dependent variable is misclassified, that is, recorded in the category other than where it really belongs, probit and logit estimates are biased and inconsistent. In some cases the probability of misclassification may vary systematically with covariates, and thus be endogenous. In this paper we develop an estimation approach that corrects for endogenous misclassification, validate our approach using a simulation study, and apply it to the analysis of a treatment program designed to improve family dynamics...
2016: Communications in Statistics: Theory and Methods
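The misclassification mechanism described in this abstract can be illustrated with the familiar exogenous-misclassification formulation, in which the observed success probability mixes the true probit probability with false-positive and false-negative rates. The sketch below shows that baseline model only, not the authors' endogenous-misclassification estimator; the function names and the 10% rates are assumptions for illustration:

```python
from math import erf, sqrt

def probit_cdf(z):
    """Standard normal CDF, Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def observed_prob(xb, alpha0, alpha1):
    """Probability of *observing* y = 1 when the true probit probability
    is Phi(xb), a false positive occurs with probability alpha0, and a
    false negative occurs with probability alpha1 (exogenous rates)."""
    return alpha0 + (1.0 - alpha0 - alpha1) * probit_cdf(xb)

# With 10% misclassification in each direction, the observed probability
# is compressed toward the interior of (0, 1), which is why naive probit
# or logit fits are biased and inconsistent.
p_true = probit_cdf(1.0)
p_obs = observed_prob(1.0, 0.1, 0.1)
```

Fitting a standard probit to data generated this way ignores the compression toward (alpha0, 1 - alpha1), which is the source of the bias the paper corrects; the paper further allows alpha0 and alpha1 to depend on covariates.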
https://www.readbyqxmd.com/read/26924882/inversion-theorem-based-kernel-density-estimation-for-the-ordinary-least-squares-estimator-of-a-regression-coefficient
#3
Dongliang Wang, Alan D Hutson
The traditional confidence interval associated with the ordinary least squares estimator of a linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator by utilizing inversion-based kernel smoothing techniques to estimate the conditional probability density of the dependent random variable. Simulation results show that, given a small sample size, our method significantly increases the power as compared with Wald-type CIs...
2015: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26924881/an-investigation-of-quantile-function-estimators-relative-to-quantile-confidence-interval-coverage
#4
Lai Wei, Dongliang Wang, Alan D Hutson
In this article, we investigate the limitations of traditional quantile function estimators and introduce a new class of quantile function estimators, namely, the semi-parametric tail-extrapolated quantile estimators, which has excellent performance for estimating the extreme tails with finite sample sizes. The smoothed bootstrap and direct density estimation via the characteristic function methods are developed for the estimation of confidence intervals. Through a comprehensive simulation study to compare the confidence interval estimations of various quantile estimators, we discuss the preferred quantile estimator in conjunction with the confidence interval estimation method to use under different circumstances...
2015: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26023251/moving-block-bootstrap-for-analyzing-longitudinal-data
#5
Hyunsu Ju
In a longitudinal study subjects are followed over time. I focus on a case where the number of replications over time is large relative to the number of subjects in the study. I investigate the use of moving block bootstrap methods for analyzing such data. Asymptotic properties of the bootstrap methods in this setting are derived. The effectiveness of these resampling methods is also demonstrated through a simulation study.
2015: Communications in Statistics: Theory and Methods
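The resampling scheme behind a moving block bootstrap can be sketched in a few lines: draw overlapping blocks of consecutive observations uniformly at random and concatenate them, so that short-range dependence within blocks is preserved. This is a generic illustration of the resampling step only, not the paper's asymptotic analysis; the block length and toy series are assumptions:

```python
import numpy as np

def moving_block_bootstrap(series, block_length, seed=None):
    """One moving-block-bootstrap resample: concatenate randomly chosen
    overlapping blocks of consecutive observations."""
    rng = np.random.default_rng(seed)
    n = len(series)
    n_blocks = int(np.ceil(n / block_length))
    # Overlapping blocks may start at any position 0 .. n - block_length.
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    resample = np.concatenate([series[s:s + block_length] for s in starts])
    return resample[:n]  # trim so the resample has the original length

# Toy usage: bootstrap the standard error of the mean of a dependent series.
rng = np.random.default_rng(0)
x = np.sin(np.arange(200) / 10.0) + rng.normal(scale=0.5, size=200)
boot_means = [moving_block_bootstrap(x, block_length=10, seed=b).mean()
              for b in range(500)]
se_mean = float(np.std(boot_means))
```

Keeping observations in contiguous blocks is what lets the bootstrap mimic the serial dependence that an i.i.d. resample would destroy.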
https://www.readbyqxmd.com/read/25530661/sample-size-requirements-and-study-duration-for-testing-main-effects-and-interactions-in-completely-randomized-factorial-designs-when-time-to-event-is-the-outcome
#6
Barry Kurt Moser, Susan Halabi
In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial...
2015: Communications in Statistics: Theory and Methods
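For the proportional-hazards setting the abstract describes, the familiar two-arm special case of the sample-size relationship can be written down directly via Schoenfeld's approximation for the required number of events. This is a simplified sketch, not the paper's general matrix formulation for arbitrary factorial arrangements; the hazard ratio below is an assumption for illustration:

```python
import numpy as np
from scipy.stats import norm

def required_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld's approximate number of events for a two-arm log-rank
    test with two-sided type I error `alpha`, power `power`, and
    allocation fraction `alloc` in the first arm."""
    z_alpha = norm.ppf(1.0 - alpha / 2.0)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) ** 2 / (
        alloc * (1.0 - alloc) * np.log(hazard_ratio) ** 2
    )

d = required_events(hazard_ratio=0.67)  # roughly 196 events
```

Under exponential survival, the accrual rate and follow-up then convert the required number of events into a sample size and study duration, which is the kind of relationship the paper's simulations verify.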
https://www.readbyqxmd.com/read/25089070/an-empirical-bayes-method-for-multivariate-meta-analysis-with-an-application-in-clinical-trials
#7
Yong Chen, Sheng Luo, Haitao Chu, Xiao Su, Lei Nie
We propose an empirical Bayes method for evaluating overall and study-specific treatment effects in multivariate meta-analysis with binary outcome. Instead of modeling transformed proportions or risks via commonly used multivariate general or generalized linear models, we directly model the risks without any transformation. The exact posterior distribution of the study-specific relative risk is derived. The hyperparameters in the posterior distribution can be inferred through an empirical Bayes procedure. As our method does not rely on the choice of transformation, it provides a flexible alternative to the existing methods and in addition, the correlation parameter can be intuitively interpreted as the correlation coefficient between risks...
July 29, 2014: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26997746/a-smooth-bootstrap-procedure-towards-deriving-confidence-intervals-for-the-relative-risk
#8
Dongliang Wang, Alan D Hutson
Given a pair of sample estimators of two independent proportions, bootstrap methods are a common strategy for deriving the associated confidence interval for the relative risk. We develop a new smooth bootstrap procedure, which generates pseudo-samples from a continuous quantile function. Under a variety of settings, our simulation studies show that our method performs as well as or better than asymptotic-theory-based and existing bootstrap methods in terms of coverage probability and power, particularly for heavily unbalanced data...
2014: Communications in Statistics: Theory and Methods
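For comparison, the ordinary (non-smooth) bootstrap percentile interval for the relative risk of two independent proportions takes only a few lines; the paper's smooth procedure instead draws pseudo-samples from a continuous quantile function. The baseline sketch below uses parametric binomial resampling, with the counts chosen arbitrarily for illustration:

```python
import numpy as np

def bootstrap_rr_ci(x1, n1, x2, n2, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for the relative risk
    (x1/n1) / (x2/n2) of two independent binomial proportions."""
    rng = np.random.default_rng(seed)
    b1 = rng.binomial(n1, x1 / n1, size=n_boot) / n1
    b2 = rng.binomial(n2, x2 / n2, size=n_boot) / n2
    rr = b1 / np.clip(b2, 1.0 / n2, None)  # guard against zero denominators
    lo, hi = np.quantile(rr, [(1.0 - level) / 2.0, (1.0 + level) / 2.0])
    return float(lo), float(hi)

lo, hi = bootstrap_rr_ci(30, 100, 15, 100)  # point estimate RR = 2.0
```

The discreteness of the resampled proportions is exactly what a smooth bootstrap avoids, and it is most damaging when the two sample sizes are heavily unbalanced.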
https://www.readbyqxmd.com/read/24465081/nonparametric-comparison-for-multivariate-panel-count-data
#9
Hui Zhao, Kate Virkler, Jianguo Sun
Multivariate panel count data often occur when there exist several related recurrent events or response variables defined by occurrences of related events. For univariate panel count data, several nonparametric treatment comparison procedures have been developed. However, no such nonparametric procedure appears to exist for the multivariate case. Based on differences between estimated mean functions, this paper proposes a class of nonparametric test procedures for multivariate panel count data. The asymptotic distribution of the new test statistics is established and a simulation study is conducted...
2014: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24465080/non-homogeneous-poisson-process-model-for-genetic-crossover-interference
#10
Szu-Yun Leu, Pranab K Sen
Genetic crossover interference is usually modeled with a stationary renewal process to construct the genetic map. We propose two non-homogeneous (and dependent) Poisson process models applied to the known physical map. The crossover process is assumed to start from an origin and to occur sequentially along the chromosome. The increment rate depends on the position of the markers and the number of crossover events occurring between the origin and the markers. We show how to obtain parameter estimates for the process and use simulation studies and real Drosophila data to examine the performance of the proposed models...
2014: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24347808/evaluation-of-a-frequentist-hierarchical-model-to-estimate-prevalence-when-sampling-from-a-large-geographic-area-using-pool-screening
#11
Thomas Birkner, Inmaculada B Aban, Charles R Katholi
We present a frequentist Bernoulli-Beta hierarchical model to relax the constant prevalence assumption underlying the traditional prevalence estimation approach based on pooled data. This assumption is called into question when sampling from a large geographic area. Pool screening is a method that combines individual items into pools. Each pool will either test positive (at least one of the items is positive) or negative (all items are negative). Pool screening is commonly applied to the study of tropical diseases where pools consist of vectors (e...
2013: Communications in Statistics: Theory and Methods
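The traditional estimator that this hierarchical model relaxes assumes a single constant prevalence: a pool tests negative exactly when every member is negative, so P(pool negative) = (1 - p)^m and the MLE follows by inverting the observed proportion of negative pools. A minimal sketch of that baseline estimator, with the pool counts below chosen purely for illustration:

```python
def pooled_prevalence_mle(positive_pools, total_pools, pool_size):
    """MLE of individual prevalence p under constant prevalence:
    P(pool negative) = (1 - p)**pool_size, inverted at the observed
    proportion of negative pools."""
    prop_negative = 1.0 - positive_pools / total_pools
    return 1.0 - prop_negative ** (1.0 / pool_size)

# 12 positive pools out of 100, each pool containing 25 vectors.
p_hat = pooled_prevalence_mle(12, 100, 25)  # about 0.0051
```

When prevalence varies across a large geographic area, this single-p inversion is misspecified, which motivates the Bernoulli-Beta hierarchy in the paper.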
https://www.readbyqxmd.com/read/23847392/spatial-cluster-detection-for-longitudinal-outcomes-using-administrative-regions
#12
Andrea J Cook, Diane R Gold, Yi Li
This manuscript proposes a new spatial cluster detection method for longitudinal outcomes that detects neighborhoods and regions with elevated rates of disease while controlling for individual-level confounders. The proposed method, CumResPerm, utilizes cumulative geographic residuals through a permutation test to detect potential clusters, which are defined as sets of administrative regions, such as a town or a group of administrative regions. Previous cluster detection methods cannot incorporate individual-level data, including covariate adjustment, while still defining potential clusters using informative neighborhood or town boundaries...
January 1, 2013: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/23750070/random-effects-coefficient-of-determination-for-mixed-and-meta-analysis-models
#13
Eugene Demidenko, James Sargent, Tracy Onega
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression...
January 1, 2012: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/23543815/the-distribution-of-family-sizes-under-a-time-homogeneous-birth-and-death-process
#14
Panagis Moschopoulos, Max Shpak
The number of extant individuals within a lineage, as exemplified by counts of species numbers across genera in a higher taxonomic category, is known to be a highly skewed distribution. Because the sublineages (such as genera in a clade) themselves follow a random birth process, deriving the distribution of lineage sizes involves averaging the solutions to a birth and death process over the distribution of time intervals separating the origin of the lineages. In this article, we show that the resulting distributions can be represented by hypergeometric functions of the second kind...
May 11, 2010: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/26085712/quantifying-the-impact-of-unobserved-heterogeneity-on-inference-from-the-logistic-model
#15
Salma Ayis
While the consequences of unobserved heterogeneity, such as biased estimates in binary response regression models, are generally known, quantifying these effects, and awareness of the situations with the most serious impact on inference, remain remarkably lacking. This study examines the effect of unobserved heterogeneity on estimates of the standard logistic model. An estimate of bias was derived for the maximum likelihood estimator β̂, and simulated data were used to investigate a range of situations that influence the size of the bias due to unobserved heterogeneity...
August 2009: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24363489/practical-methods-for-bounding-type-i-error-rate-with-an-internal-pilot-design
#16
Christopher S Coffey, John A Kairalla, Keith E Muller
New analytic forms for distributions at the heart of internal pilot theory solve many problems inherent to current techniques for linear models with Gaussian errors. Internal pilot designs use a fraction of the data to re-estimate the error variance and modify the final sample size. Too small or too large a sample size caused by an incorrect planning variance can be avoided. However, the usual hypothesis test may need adjustment to control the Type I error rate. A bounding test achieves control of Type I error rate while providing most of the advantages of the unadjusted test...
2007: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24353369/on-the-expected-values-of-sequences-of-functions
#17
Deborah H Glueck, Keith E Muller
We prove new extensions to lemmas about combinations of convergent sequences of distribution functions and absolutely continuous bounded functions. New lemma one, a generalized Helly theorem, allows computing the limit of the expected value of a sequence of functions with respect to a sequence of measures. Previously published results allow either the function or the measure to be a sequence, but not both. Lemma two allows computing the expected value of an absolutely continuous monotone function by integrating the probabilities of the inverse function values...
January 1, 2001: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24465079/properties-of-doubly-truncated-gamma-variables
#18
Christopher S Coffey, Keith E Muller
The truncated gamma distribution has been widely studied, primarily in life-testing and reliability settings. Most work has assumed an upper bound on the support of the random variable, i.e. the space of the distribution is (0, u). We consider a doubly-truncated gamma random variable restricted by both a lower (l) and upper (u) truncation point, both of which are considered known. We provide simple forms for the density, cumulative distribution function (CDF), moment generating function, cumulant generating function, characteristic function, and moments...
February 1, 2000: Communications in Statistics: Theory and Methods
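The basic construction behind these truncated-gamma results is simple to state: restrict a gamma density to [l, u] and renormalize by the probability mass in that interval. A numerical sketch of the doubly-truncated density using SciPy's gamma distribution (the shape, scale, and truncation values below are assumptions for illustration):

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

def dtrunc_gamma_pdf(x, shape, scale, l, u):
    """Density of a gamma(shape, scale) variable doubly truncated to
    [l, u]: the untruncated pdf divided by the mass CDF(u) - CDF(l),
    and zero outside the truncation interval."""
    dist = gamma(shape, scale=scale)
    mass = dist.cdf(u) - dist.cdf(l)
    inside = (np.asarray(x) >= l) & (np.asarray(x) <= u)
    return np.where(inside, dist.pdf(x) / mass, 0.0)

# The truncated density integrates to one over [l, u].
area, _ = quad(lambda t: dtrunc_gamma_pdf(t, 2.0, 1.0, 0.5, 4.0), 0.5, 4.0)
```

The CDF, moments, and generating functions in the paper follow from integrating against this renormalized density with both truncation points treated as known.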
https://www.readbyqxmd.com/read/24307749/some-distributions-and-their-implications-for-an-internal-pilot-study-with-a-univariate-linear-model
#19
Christopher S Coffey, Keith E Muller
In planning a study, the choice of sample size may depend on a variance value based on speculation or obtained from an earlier study. Scientists may wish to use an internal pilot design to protect themselves against an incorrect choice of variance. Such a design involves collecting a portion of the originally planned sample and using it to produce a new variance estimate. This leads to a new power analysis and increasing or decreasing sample size. For any general linear univariate model, with fixed predictors and Gaussian errors, we prove that the uncorrected fixed sample F-statistic is the likelihood ratio test statistic...
January 2000: Communications in Statistics: Theory and Methods
https://www.readbyqxmd.com/read/24363488/bias-in-linear-model-power-and-sample-size-due-to-estimating-variance
#20
Keith E Muller, Virginia B Pasour
Planning a study using the General Linear Univariate Model often involves sample size calculation based on a variance estimated in an earlier study. Noncentrality, power, and sample size inherit the randomness. Additional complexity arises if the estimate has been censored. Left censoring occurs when only significant tests lead to a power calculation, while right censoring occurs when only non-significant tests lead to a power calculation. We provide simple expressions for straightforward computation of the distribution function, moments, and quantiles of the censored variance estimate, estimated noncentrality, power, and sample size...
1997: Communications in Statistics: Theory and Methods