Read by QxMD icon Read

Lifetime Data Analysis

Ying Wu, Richard J Cook
With the increasing availability of large prospective disease registries, scientists studying the course of chronic conditions often have access to multiple data sources, each generated under its own entry conditions. The different entry conditions of the various registries may be based explicitly on the response process of interest, in which case the statistical analysis must recognize the unique truncation schemes. Moreover, intermittent assessment of individuals in the registries can lead to interval-censored times of interest...
February 18, 2017: Lifetime Data Analysis
Chyong-Mei Chen, Pao-Sheng Shen
Left-truncated data often arise in epidemiology and individual follow-up studies due to a biased sampling plan, since subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models, which includes the proportional hazards model and the proportional odds model as special cases, is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop the conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models...
February 6, 2017: Lifetime Data Analysis
Kayoung Park, Peihua Qiu
Medical treatments often take a period of time to reveal their impact on subjects, the so-called time-lag effect in the literature. Most existing methods in the survival data analysis literature compare two treatments over the entire study period. When there is a substantial time-lag effect, these methods are not effective in detecting a difference between the two treatments, because the similarity of the treatments during the time-lag period diminishes the methods' power. In this paper, we develop a novel modeling approach for estimating the time-lag period and for comparing the two treatments properly after the time-lag effect is accommodated...
January 28, 2017: Lifetime Data Analysis
Mei-Cheng Wang, Chiung-Yu Huang
No abstract text is available yet for this article.
January 16, 2017: Lifetime Data Analysis
Yanqin Feng, Yurong Chen
This paper discusses regression analysis of current status failure time data with informative observation times and continuous auxiliary covariates. Under the additive hazards model, we employ a frailty model to describe the relationship between the failure time of interest and the censoring time through some latent variables, and propose an estimated partial likelihood estimator of the regression parameters that makes use of the available auxiliary information. Asymptotic properties of the resulting estimators are established...
January 5, 2017: Lifetime Data Analysis
Hyokyoung G Hong, Jian Kang, Yi Li
Identifying important biomarkers that are predictive of cancer patients' prognosis is key to gaining better insight into the biological influences on the disease and has become a critical component of precision medicine. The emergence of large-scale biomedical survival studies, which typically involve an excessive number of biomarkers, has created high demand for efficient screening tools for selecting predictive biomarkers. The vast number of biomarkers defies any existing variable selection method via regularization...
December 8, 2016: Lifetime Data Analysis
J F Lawless
Two- or multi-phase study designs are often used in settings involving failure times. In most studies, whether or not certain covariates are measured on an individual depends on their failure time and status. For example, when failures are rare, case-cohort or case-control designs are used to increase the number of failures relative to a random sample of the same size. Another scenario is where certain covariates are expensive to measure, so they are obtained only for selected individuals in a cohort. This paper considers such situations and focuses on cases where we wish to test hypotheses of no association between failure time and expensive covariates...
November 29, 2016: Lifetime Data Analysis
Kwun Chuen Gary Chan
Vardi's Expectation-Maximization (EM) algorithm is frequently used for computing the nonparametric maximum likelihood estimator from length-biased right-censored data, which does not admit a closed-form representation. The EM algorithm may converge slowly, particularly for heavily censored data. We studied two algorithms for accelerating the convergence of the EM algorithm, based on the iterative convex minorant and Aitken's delta squared process. Numerical simulations demonstrate that the acceleration algorithms converge more rapidly than the EM algorithm in terms of the number of iterations and actual timing...
January 2017: Lifetime Data Analysis
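Aitken's delta squared process mentioned in the abstract above is a generic device for accelerating a linearly convergent fixed-point iteration such as EM. A minimal sketch of the scalar (Steffensen-style) version, not the authors' implementation, and with a generic fixed-point map standing in for an EM update:

```python
import math

def aitken_accelerate(g, x0, tol=1e-12, max_iter=100):
    """Accelerate the fixed-point iteration x_{n+1} = g(x_n)
    using Aitken's delta-squared extrapolation."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x          # second difference
        if abs(denom) < 1e-15:             # iteration has converged
            return x2
        x_acc = x - (x1 - x) ** 2 / denom  # Aitken extrapolation
        if abs(x_acc - x) < tol:
            return x_acc
        x = x_acc
    return x

# Toy example: the fixed point of cos(x), a slowly converging iteration
root = aitken_accelerate(math.cos, 1.0)
```

In the length-biased setting the same idea is applied componentwise to the vector of probability masses updated by the EM step.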
Yu Shen, Jing Ning, Jing Qin
For the past several decades, nonparametric and semiparametric modeling for conventional right-censored survival data has been investigated intensively under a noninformative censoring mechanism. However, these methods may not be applicable for analyzing right-censored survival data that arise from prevalent cohorts when the failure times are subject to length-biased sampling. This review article is intended to provide a summary of some newly developed methods as well as established methods for analyzing length-biased data...
January 2017: Lifetime Data Analysis
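The canonical nonparametric estimator for the conventional right-censored data discussed above is the Kaplan-Meier product-limit estimator; a minimal sketch for orientation (not the length-biased-adjusted versions the review covers):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator of S(t) for
    right-censored data; events[i] = 1 for a failure, 0 for censoring."""
    distinct = sorted({t for t, e in zip(times, events) if e})
    S, curve = 1.0, []
    for t in distinct:
        d = sum(1 for tt, e in zip(times, events) if tt == t and e)
        n = sum(1 for tt in times if tt >= t)   # number at risk at t
        S *= 1.0 - d / n                        # product-limit update
        curve.append((t, S))
    return curve

# Toy sample: failures at 1, 2, 4; censorings at 3, 5
S_hat = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

Under length-biased sampling this estimator is inconsistent, which is precisely the motivation for the methods the review summarizes.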
Rong Fu, Peter B Gilbert
A common objective of cohort studies and clinical trials is to assess time-varying longitudinal continuous biomarkers as correlates of the instantaneous hazard of a study endpoint. We consider the setting where the biomarkers are measured in a designed sub-sample (i.e., case-cohort or two-phase sampling design), as is normative for prevention trials. We address this problem via joint models, with underlying biomarker trajectories characterized by a random effects model and their relationship with instantaneous risk characterized by a Cox model...
January 2017: Lifetime Data Analysis
Jieli Ding, Tsui-Shan Lu, Jianwen Cai, Haibo Zhou
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is a failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has happened, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of the study and improve its efficiency...
January 2017: Lifetime Data Analysis
Ling Chen, Yanqin Feng, Jianguo Sun
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models...
October 19, 2016: Lifetime Data Analysis
Michal Juraska, Peter B Gilbert
An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the 'mark.' The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework...
October 2016: Lifetime Data Analysis
Torben Martinussen, Klaus K Holst, Thomas H Scheike
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids use of the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function, the idea being to profile out this function before estimating the parameter of interest. In this step one uses a Breslow-type estimator of the cumulative baseline hazard function...
October 2016: Lifetime Data Analysis
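The Breslow-type estimator of the cumulative baseline hazard mentioned above has a simple closed form once the regression coefficient is given. A one-covariate sketch for illustration only: `beta` would come from a Cox partial-likelihood fit, and ties are handled naively:

```python
import math

def breslow_cumhaz(times, events, x, beta):
    """Breslow-type estimator: H0(t) = sum over event times t_i <= t of
    1 / sum_{j at risk at t_i} exp(beta * x_j)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    H, curve = 0.0, []
    for i in order:
        if events[i]:                          # each failure adds a jump
            risk = sum(math.exp(beta * x[j])
                       for j in range(len(times)) if times[j] >= times[i])
            H += 1.0 / risk
        curve.append((times[i], H))
    return curve                               # step function (t, H0(t))

# With beta = 0 this reduces to the Nelson-Aalen estimator.
curve = breslow_cumhaz([1.0, 2.0, 3.0, 4.0], [1, 1, 0, 1],
                       [0.5, -0.2, 0.1, 0.3], 0.0)
```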
Nubyra Ahmed, Sundarraman Subramanian
In the analysis of censored survival data, simultaneous confidence bands are useful devices to help determine the efficacy of a treatment over a control. Semiparametric confidence bands are developed for the difference of two survival curves using empirical likelihood and compared with the nonparametric counterpart. Simulation studies are presented to show that the proposed semiparametric approach is superior, with the new confidence bands giving empirical coverage closer to the nominal level. Further comparisons reveal that the semiparametric confidence bands are tighter and, hence, more informative...
October 2016: Lifetime Data Analysis
Olli Saarela
Case-base sampling provides an alternative to risk-set-sampling-based methods for estimating hazard regression models, in particular when absolute hazards are of interest in addition to hazard ratios. The case-base sampling approach results in a likelihood expression of the logistic regression form, obtained not by categorizing time but by sampling a discrete set of person-time coordinates from all follow-up data. In this paper, in the context of a time-dependent exposure such as vaccination, and a potentially recurrent adverse event outcome, we show that the resulting partial likelihood for the outcome event intensity has the asymptotic properties of a likelihood...
October 2016: Lifetime Data Analysis
Seung-Hwan Lee
In the accelerated hazards regression model with censored data, estimation of the covariance matrices of the regression parameters is difficult, since it involves the unknown baseline hazard function and its derivative. This paper provides simple but reliable procedures that yield asymptotically normal estimators whose covariance matrices can be easily estimated. A class of weight functions is introduced that yields estimators whose asymptotic covariance matrices do not involve the derivative of the unknown hazard function...
October 2016: Lifetime Data Analysis
Xiaochao Xia, Binyan Jiang, Jialiang Li, Wenyang Zhang
High-throughput profiling is now common in biomedical research. In this paper we consider the layout of an etiology study composed of a failure time response and gene expression measurements. In current practice, a widely adopted approach is to select genes by a preliminary marginal screening followed by penalized regression for model building. Confounders, such as clinical risk factors and environmental exposures, usually exist and need to be properly accounted for. We propose covariate-adjusted screening and variable selection procedures under the accelerated failure time model...
October 2016: Lifetime Data Analysis
Sedigheh Mirzaei Salehabadi, Debasis Sengupta
In a cross-sectional observational study, time-to-event distribution can be estimated from data on current status or from recalled data on the time of occurrence. In either case, one can treat the data as having been interval censored, and use the nonparametric maximum likelihood estimator proposed by Turnbull (J R Stat Soc Ser B 38:290-295, 1976). However, the chance of recall may depend on the time span between the occurrence of the event and the time of interview. In such a case, the underlying censoring would be informative, rendering the Turnbull estimator inappropriate...
October 2016: Lifetime Data Analysis
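The Turnbull estimator referenced above is computed by a self-consistency (EM) iteration over probability masses on candidate support points. A minimal pure-Python sketch, under the simplifying assumption that the mass points are given on a grid rather than derived as the innermost Turnbull intervals:

```python
def turnbull_em(intervals, support, max_iter=1000, tol=1e-8):
    """Self-consistency (EM) iteration for the NPMLE of an
    interval-censored event-time distribution (Turnbull, 1976).
    `intervals` holds (L, R] observation intervals; `support` is an
    assumed grid of candidate mass points."""
    # alpha[i][j] = 1 if support point j lies inside interval (L_i, R_i]
    alpha = [[1.0 if L < s <= R else 0.0 for s in support]
             for (L, R) in intervals]
    n, m = len(alpha), len(support)
    p = [1.0 / m] * m                          # start from uniform mass
    for _ in range(max_iter):
        p_new = [0.0] * m
        for row in alpha:
            denom = sum(a * pj for a, pj in zip(row, p))
            for j in range(m):
                p_new[j] += row[j] * p[j] / denom   # expected membership
        p_new = [v / n for v in p_new]         # average over subjects
        if max(abs(a - b) for a, b in zip(p_new, p)) < tol:
            return p_new
        p = p_new
    return p

# Toy interval-censored sample
obs = [(0, 2), (1, 3), (2, 4), (0, 1), (3, 5)]
p = turnbull_em(obs, support=[0.5, 1.5, 2.5, 3.5, 4.5])
```

Under informative recall, as the abstract notes, the censoring mechanism must enter the likelihood, so this plain self-consistency iteration no longer applies.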
Yeqian Liu, Tao Hu, Jianguo Sun
This paper discusses regression analysis of current status data, a type of failure time data where each study subject is observed only once, in the presence of dependent censoring. Furthermore, there may exist a cured subgroup, meaning that a proportion of study subjects are not susceptible to the failure event of interest. For this problem, we develop a sieve maximum likelihood estimation approach with the use of latent variables and Bernstein polynomials. For the determination of the proposed estimators, an EM algorithm is developed and the asymptotic properties of the estimators are established...
September 30, 2016: Lifetime Data Analysis