Read by QxMD

Lifetime Data Analysis

Yanqin Feng, Yurong Chen
This paper discusses regression analysis of current status failure time data with informative observation times and continuous auxiliary covariates. Under the additive hazards model, we employ a frailty model to describe the relationship between the failure time of interest and the censoring time through some latent variables, and propose an estimated partial likelihood estimator of the regression parameters that makes use of the available auxiliary information. Asymptotic properties of the resulting estimators are established...
January 5, 2017: Lifetime Data Analysis
Hyokyoung G Hong, Jian Kang, Yi Li
Identifying important biomarkers that are predictive of cancer patients' prognosis is key to gaining better insights into the biological influences on the disease and has become a critical component of precision medicine. The emergence of large-scale biomedical survival studies, which typically involve an excessive number of biomarkers, has created a high demand for efficient screening tools for selecting predictive biomarkers. The vast number of biomarkers defies existing variable selection methods based on regularization...
December 8, 2016: Lifetime Data Analysis
J F Lawless
Two- or multi-phase study designs are often used in settings involving failure times. In most studies, whether or not certain covariates are measured on an individual depends on their failure time and status. For example, when failures are rare, case-cohort or case-control designs are used to increase the number of failures relative to a random sample of the same size. Another scenario is where certain covariates are expensive to measure, so they are obtained only for selected individuals in a cohort. This paper considers such situations and focuses on cases where we wish to test hypotheses of no association between failure time and expensive covariates...
November 29, 2016: Lifetime Data Analysis
Ling Chen, Yanqin Feng, Jianguo Sun
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models...
October 19, 2016: Lifetime Data Analysis
Michal Juraska, Peter B Gilbert
An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the 'mark.' The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework...
October 2016: Lifetime Data Analysis
Torben Martinussen, Klaus K Holst, Thomas H Scheike
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids use of the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function, the idea being to profile out this function before carrying out the estimation of the parameter of interest. In this step one uses a Breslow-type estimator to estimate the cumulative baseline hazard function...
October 2016: Lifetime Data Analysis
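The Breslow-type estimator used in the profiling step above is the classical estimator of the cumulative baseline hazard under the Cox model. A minimal sketch under standard complete-data Cox assumptions (toy data, no tied event times; this illustrates the classical estimator, not the authors' missing-covariate extension):

```python
import numpy as np

def breslow_cumhaz(times, events, X, beta):
    """Breslow estimator of the cumulative baseline hazard:
    Lambda_0(t) = sum over event times t_i <= t of
    1 / sum_{j in risk set at t_i} exp(x_j' beta)."""
    order = np.argsort(times)
    times, events, X = times[order], events[order], X[order]
    risk = np.exp(X @ beta)
    # reversed cumulative sum gives the risk-set denominator at each ordered time
    denom = np.cumsum(risk[::-1])[::-1]
    increments = np.where(events == 1, 1.0 / denom, 0.0)
    return times, np.cumsum(increments)

# toy data: 4 subjects, one covariate, third (t=3) censored
times = np.array([2.0, 1.0, 3.0, 4.0])
events = np.array([1, 1, 0, 1])
X = np.array([[0.0], [1.0], [0.5], [0.2]])
t_sorted, cumhaz = breslow_cumhaz(times, events, X, beta=np.array([0.3]))
```

The reversed-cumsum trick avoids an explicit loop over risk sets; with ties a grouped version of the denominator would be needed.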
Nubyra Ahmed, Sundarraman Subramanian
In the analysis of censored survival data, simultaneous confidence bands are useful devices to help determine the efficacy of a treatment over a control. Semiparametric confidence bands are developed for the difference of two survival curves using empirical likelihood and compared with the nonparametric counterpart. Simulation studies are presented to show that the proposed semiparametric approach is superior, with the new confidence bands giving empirical coverage closer to the nominal level. Further comparisons reveal that the semiparametric confidence bands are tighter and, hence, more informative...
October 2016: Lifetime Data Analysis
Olli Saarela
Case-base sampling provides an alternative to risk set sampling based methods to estimate hazard regression models, in particular when absolute hazards are also of interest in addition to hazard ratios. The case-base sampling approach results in a likelihood expression of the logistic regression form, but instead of categorized time, such an expression is obtained through sampling of a discrete set of person-time coordinates from all follow-up data. In this paper, in the context of a time-dependent exposure such as vaccination, and a potentially recurrent adverse event outcome, we show that the resulting partial likelihood for the outcome event intensity has the asymptotic properties of a likelihood...
October 2016: Lifetime Data Analysis
Seung-Hwan Lee
In the accelerated hazards regression model with censored data, estimation of the covariance matrices of the regression parameters is difficult, since it involves the unknown baseline hazard function and its derivative. This paper provides simple but reliable procedures that yield asymptotically normal estimators whose covariance matrices can be easily estimated. A class of weight functions is introduced to produce estimators whose asymptotic covariance matrices do not involve the derivative of the unknown hazard function...
October 2016: Lifetime Data Analysis
Xiaochao Xia, Binyan Jiang, Jialiang Li, Wenyang Zhang
High-throughput profiling is now common in biomedical research. In this paper we consider the layout of an etiology study composed of a failure time response, and gene expression measurements. In current practice, a widely adopted approach is to select genes according to a preliminary marginal screening and a follow-up penalized regression for model building. Confounders, including for example clinical risk factors and environmental exposures, usually exist and need to be properly accounted for. We propose covariate-adjusted screening and variable selection procedures under the accelerated failure time model...
October 2016: Lifetime Data Analysis
Sedigheh Mirzaei Salehabadi, Debasis Sengupta
In a cross-sectional observational study, time-to-event distribution can be estimated from data on current status or from recalled data on the time of occurrence. In either case, one can treat the data as having been interval censored, and use the nonparametric maximum likelihood estimator proposed by Turnbull (J R Stat Soc Ser B 38:290-295, 1976). However, the chance of recall may depend on the time span between the occurrence of the event and the time of interview. In such a case, the underlying censoring would be informative, rendering the Turnbull estimator inappropriate...
October 2016: Lifetime Data Analysis
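The Turnbull (1976) estimator referenced above is computed by a self-consistency (EM) iteration over interval-censored observations. A minimal sketch with illustrative toy data (fixed candidate support points and (L, R] intervals are assumptions for the example; real implementations first derive the Turnbull innermost intervals):

```python
import numpy as np

def turnbull(intervals, support, n_iter=200):
    """Self-consistency (EM) iteration for the nonparametric MLE of
    Turnbull (1976) with interval-censored data."""
    support = np.asarray(support, dtype=float)
    # alpha[i, j] = 1 if candidate support point j lies in subject i's (L, R]
    alpha = np.array([[(L < s <= R) for s in support] for (L, R) in intervals],
                     dtype=float)
    p = np.full(len(support), 1.0 / len(support))
    for _ in range(n_iter):
        num = alpha * p                        # E-step: unnormalized memberships
        num /= num.sum(axis=1, keepdims=True)  # each subject's memberships sum to 1
        p = num.mean(axis=0)                   # M-step: average over subjects
    return p

# toy interval-censored data; each interval contains at least one support point
intervals = [(0, 2), (1, 3), (0, 1), (2, 4)]
p = turnbull(intervals, support=[1, 2, 3, 4])
```

As the abstract notes, this estimator presumes non-informative interval censoring; when recall probability depends on the elapsed time, the likelihood itself must be modified.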
Yeqian Liu, Tao Hu, Jianguo Sun
This paper discusses regression analysis of current status data, a type of failure time data where each study subject is observed only once, in the presence of dependent censoring. Furthermore, there may exist a cured subgroup, meaning that a proportion of study subjects are not susceptible to the failure event of interest. For the problem, we develop a sieve maximum likelihood estimation approach with the use of latent variables and Bernstein polynomials. For the determination of the proposed estimators, an EM algorithm is developed and the asymptotic properties of the estimators are established...
September 30, 2016: Lifetime Data Analysis
Ross L Prentice, Shanshan Zhao
The Dabrowska (Ann Stat 16:1475-1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated...
September 27, 2016: Lifetime Data Analysis
Zhong Guan, Jing Qin
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the missingness of a response depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, not many papers on non-ignorable missing data are available beyond fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified...
September 19, 2016: Lifetime Data Analysis
Yuxue Jin, Tze Leung Lai
An approximate likelihood approach is developed for regression analysis of censored competing-risks data. This approach models directly the cumulative incidence function, instead of the cause-specific hazard function, in terms of explanatory covariates under a proportional subdistribution hazards assumption. It uses a self-consistent iterative procedure to maximize an approximate semiparametric likelihood function, leading to an asymptotically normal and efficient estimator of the vector of regression parameters...
August 8, 2016: Lifetime Data Analysis
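The proportional subdistribution hazards assumption invoked here is the standard Fine–Gray form, which models the cumulative incidence function for the cause of interest directly:

    F_1(t | x) = 1 - exp{ -\Lambda_{10}(t) e^{x'\beta} },

so that the subdistribution hazard

    \lambda_1(t | x) = -\frac{d}{dt} \log\{1 - F_1(t | x)\} = \lambda_{10}(t) e^{x'\beta}

is proportional across covariate values, in contrast to modeling the cause-specific hazard.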
Xiaofei Bai, Anastasios A Tsiatis, Wenbin Lu, Rui Song
A treatment regime at a single decision point is a rule that assigns a treatment, among the available options, to a patient based on the patient's baseline characteristics. The value of a treatment regime is the average outcome of a population of patients if they were all treated in accordance to the treatment regime, where large values are desirable. The optimal treatment regime is a regime which results in the greatest value. Typically, the optimal treatment regime is estimated by positing a regression relationship for the outcome of interest as a function of treatment and baseline characteristics...
August 1, 2016: Lifetime Data Analysis
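Rather than positing an outcome regression, the value of a regime can also be estimated by inverse probability weighting. A minimal sketch for a single-decision randomized setting (the names, the known propensity of 0.5, and the toy regime are illustrative assumptions, not the authors' estimator):

```python
import numpy as np

def ipw_value(Y, A, X, regime, propensity):
    """Inverse-probability-weighted estimate of the value of regime d:
    V(d) = E[ Y * 1{A = d(X)} / pr(A | X) ], here normalized by the weights."""
    d = regime(X)
    pr = np.where(A == 1, propensity, 1.0 - propensity)
    w = (A == d) / pr   # weight 1/pr when the received treatment matches the regime
    return np.sum(w * Y) / np.sum(w)

# toy randomized-trial data (propensity 0.5); regime: treat when X > 0
Y = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([1, 0, 1, 0])
X = np.array([1.0, -1.0, 2.0, -2.0])
v = ipw_value(Y, A, X, lambda x: (x > 0).astype(int), 0.5)
```

With censored survival outcomes, as in the abstract, the weights must additionally account for censoring; the sketch above shows only the uncensored-outcome skeleton.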
Cheng Zheng, Xiao-Hua Zhou
Mediation analysis is an important topic as it helps researchers understand why an intervention works. Most previous mediation analyses define effects on the mean scale and require a binary or continuous outcome. Recently, possible ways to define direct and indirect effects for causal mediation analysis with a survival outcome have been proposed. However, these methods mainly rely on the assumption of sequential ignorability, which implies no unmeasured confounding. To handle potential confounding between the mediator and the outcome, in this article we propose a structural additive hazards model for mediation analysis with a failure time outcome and derive estimators for controlled direct effects and controlled mediator effects...
July 27, 2016: Lifetime Data Analysis
Jie Zhou, Haixiang Zhang, Liuquan Sun, Jianguo Sun
Panel count data occur in many clinical and observational studies, and in many situations the observation process may be informative; there may also exist a terminal event, such as death, which stops the follow-up. In this article, we propose a new joint model for the analysis of panel count data in the presence of both an informative observation process and a dependent terminal event via two latent variables. For inference on the proposed models, a class of estimating equations is developed and the resulting estimators are shown to be consistent and asymptotically normal...
July 23, 2016: Lifetime Data Analysis
Kwun Chuen Gary Chan
Vardi's Expectation-Maximization (EM) algorithm is frequently used for computing the nonparametric maximum likelihood estimator from length-biased right-censored data, which does not admit a closed-form representation. The EM algorithm may converge slowly, particularly for heavily censored data. We studied two algorithms for accelerating the convergence of the EM algorithm, based on the iterative convex minorant and Aitken's delta-squared process. Numerical simulations demonstrate that the acceleration algorithms converge more rapidly than the EM algorithm in terms of both the number of iterations and actual computing time...
July 7, 2016: Lifetime Data Analysis
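Aitken's delta-squared process mentioned above extrapolates a linearly convergent sequence toward its limit; applied to EM it is used componentwise on the parameter iterates. A minimal scalar sketch of the standard textbook form (not the authors' implementation):

```python
def aitken_accelerate(seq):
    """Aitken's delta-squared extrapolation of a linearly convergent sequence:
    x'_n = x_n - (x_{n+1} - x_n)^2 / (x_{n+2} - 2*x_{n+1} + x_n)."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2.0 * x1 + x0
        # guard against division by zero once the sequence has converged
        out.append(x2 if denom == 0 else x0 - (x1 - x0) ** 2 / denom)
    return out

# exactly geometric sequence x_n = 1 + 0.5**n converging to 1;
# Aitken recovers the limit from any three consecutive terms
seq = [1.0 + 0.5 ** n for n in range(8)]
acc = aitken_accelerate(seq)
```

For an exactly geometric error the extrapolation is exact, which is why it can sharply cut the iteration count of a slowly converging EM.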
Hoora Moradian, Denis Larocque, François Bellavance
The log-rank test is used as the split function in many commonly used survival tree and forest algorithms. However, the log-rank test may have a significant loss of power in some circumstances, especially when the hazard functions or the survival functions of the two compared groups cross each other. We investigate the use of the integrated absolute difference between the survival functions of the two child nodes as the splitting rule. Simulation studies and applications to real data sets show that forests built with this rule produce very good results in general, and that they are often better than forests built with the log-rank splitting rule...
July 5, 2016: Lifetime Data Analysis
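The splitting rule described above, the integrated absolute difference between the two child-node survival curves, can be sketched with Kaplan-Meier estimates and a Riemann sum (a toy illustration assuming no tied event times; not the authors' code):

```python
import numpy as np

def km(times, events, grid):
    """Kaplan-Meier survival estimate evaluated on a time grid
    (assumes no tied event times, for simplicity)."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    n = len(t)
    step_t, step_S = [0.0], [1.0]
    S = 1.0
    for i in range(n):
        if e[i] == 1:
            S *= 1.0 - 1.0 / (n - i)   # at-risk count shrinks by one each step
        step_t.append(t[i])
        step_S.append(S)
    idx = np.searchsorted(step_t, grid, side="right") - 1
    return np.asarray(step_S)[idx]

def split_score(t1, e1, t2, e2, grid):
    """Split score: integral of |S1(t) - S2(t)|, approximated by a
    Riemann sum over an equally spaced grid."""
    S1, S2 = km(t1, e1, grid), km(t2, e2, grid)
    return float(np.sum(np.abs(S1 - S2)) * (grid[1] - grid[0]))

grid = np.linspace(0.0, 3.0, 31)
g1_t, g1_e = np.array([1.0, 2.0, 3.0]), np.array([1, 1, 1])
g2_t, g2_e = np.array([4.0, 5.0, 6.0]), np.array([1, 1, 1])
same = split_score(g1_t, g1_e, g1_t, g1_e, grid)   # identical groups: 0
diff = split_score(g1_t, g1_e, g2_t, g2_e, grid)   # separated groups: > 0
```

Unlike the log-rank statistic, this area-between-curves score stays large when the two curves cross or diverge late, which is the motivation given in the abstract.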