8:30am - 8:50am
A “Double Copula” Model for Semi-Competing Risks Data
Antoniya Dineva1, Oliver Kuss2, Annika Hoyer1
1Biostatistics and Medical Biometry, Medical School OWL, Bielefeld University; 2Institute for Biometrics and Epidemiology, German Diabetes Center, Düsseldorf
Semi-competing risks describe the general setting in which the primary focus is modeling the age at a non-terminal event (e.g., disease occurrence) when the investigated subjects are also at risk of experiencing a terminal event (e.g., death). That is, the terminal event might censor the non-terminal event, but not vice versa. As a result, the observed ages at the two events within the same individual are correlated. The statistical approaches for this setting can be divided into at least two general groups: 1) copula-based models and 2) illness-death models.
A common approach in the first group models the dependency between the two events by a bivariate copula with two marginal distributions, one for age at disease onset and one for age at death. However, such approaches are limited in their ability to distinguish between the two modes of mortality: death with and death without the disease.
We propose a “double copula” model that estimates the three marginal distributions in the semi-competing risks framework: 1) age at disease onset, 2) age at death for individuals with the disease and 3) age at death for individuals without the disease. The model is built from two bivariate Clayton copulas that model the joint distribution of age at disease onset with each of the two ages at death separately. We assume the marginals of the two mortalities to be Gompertz distributed, whereas a Weibull distribution is adopted for the disease onset. Model parameters are estimated by maximum likelihood, accounting for the complex censoring and truncation mechanisms in a cohort study. The likelihood function incorporates left truncation for both the terminal and the non-terminal event, reflecting delayed entry into the study. Since in cohort studies the exact age at disease onset is only known to lie between two successive follow-up visits, we additionally account for interval censoring of the age at disease occurrence. A simulation study demonstrated promising results in terms of accuracy and numerical robustness.
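For orientation, the building block is the standard bivariate Clayton copula; the notation below is ours, as the abstract itself gives no formulas. With marginal distribution functions F_1 (Weibull, age at disease onset) and F_2 (Gompertz, age at death with the disease),

    C_\theta(u, v) = \left( u^{-\theta} + v^{-\theta} - 1 \right)^{-1/\theta}, \quad \theta > 0,
    F(t_1, t_2) = C_\theta\big( F_1(t_1), F_2(t_2) \big),

and the second copula links F_1 to the Gompertz marginal for death without the disease in the same way, each copula with its own dependence parameter.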
The model is illustrated using data from the Paquid study, a large cohort study on mental and physical aging, with age at dementia onset as the non-terminal event. Our model leads to plausible results, indicating that people with a dementia diagnosis die earlier than people without dementia.
8:50am - 9:10am
Incorporation of a mixture distribution on frailty regression model for clustered survival data
Gilbert Kiprotich
Ludwig-Maximilians-Universität München, Germany
In frailty models, the time-dependent correlation among observations is driven by the frailty distribution, which shapes how this dependence evolves. This paper introduces a new approach where the frailty is modeled using mixture distributions. The parameterization of the mixture distribution directly defines the mixing weights, and the model's closed-form Laplace transform allows for the calculation of Kendall's tau, a measure of dependence. We explore both parametric and semiparametric versions of this frailty model. Additionally, we present a hierarchical representation of the mixture, which simplifies the application of the expectation-maximization (EM) algorithm for parameter estimation. The effectiveness of the proposed model is demonstrated through Monte Carlo simulations and in an application to a cancer dataset. A comparative analysis with existing frailty models shows the strengths of our approach. The implementation of our methodology is available in the R package extrafrail.
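To make the Laplace-transform route concrete, a standard identity (Oakes, 1989) that the abstract relies on but does not spell out: for a shared frailty Z with Laplace transform L(s) = E[exp(-sZ)], Kendall's tau of the induced bivariate survival model is

    \tau = 4 \int_0^\infty s \, L(s) \, L''(s) \, ds - 1.

For a finite mixture frailty with density f(z) = \sum_k p_k f_k(z), the transform is simply the weighted sum L(s) = \sum_k p_k L_k(s), so \tau stays computable whenever the component transforms are available in closed form.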
9:10am - 9:30am
Comparing a time-to-event endpoint in a two-arm trial investigating personalized treatment
Marilena Müller
DKFZ Heidelberg, Germany
We consider a two-arm randomized clinical trial in precision oncology with a time-to-event endpoint. The control arm consists of standard of care (SOC), whereas patients in the treatment arm are offered personalized treatment. However, some patients in the treatment arm do not receive personalized targeted treatment, either because it is not available or because they do not consent to it. Instead, they also receive SOC.
Intention-to-treat analysis then compares the outcomes of patients receiving a mixture of targeted treatment and SOC with those of patients receiving solely SOC. We propose to divide the patients into groups based on whether or not they receive their intended treatment. In this mixture model, patients’ progression is assumed to follow a different conditional intensity in each group.
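In symbols, a sketch of such a two-component mixture (our notation, not taken from the abstract): writing \pi for the probability that a treatment-arm patient actually receives the targeted therapy, the treatment-arm survival function decomposes as

    S_{\mathrm{trt}}(t) = \pi \, S_{\mathrm{targeted}}(t) + (1 - \pi) \, S_{\mathrm{SOC}}(t),

with each component governed by its own conditional intensity.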
Specifically, we model the conditional intensity functions parametrically. A regression model is presented, estimation of the conditional intensity under a Cox proportional hazards model is enabled, and various testing procedures are discussed. Maximum likelihood estimation is performed on the partial likelihoods of the mixture components, representing individual patients and both arms simultaneously. Counting process and martingale theory are used to develop suitable test statistics for various settings of interest. We complement the theoretical results with an in-depth simulation study.
We propose guidelines on how to account for the presence of mixtures and give insight into when it is necessary or appropriate to apply a more rigorous model.
9:30am - 9:50am
R-package discSurv: A Toolbox for Discrete Time Survival Analysis
Thomas Welchowski1,2, Moritz Berger1, David Köhler1, Matthias Schmid1
1Institute for Medical Biometry, Informatics and Epidemiology, University of Bonn; 2Psychological Methods, Evaluation and Statistics, Department of Psychology, University of Zurich
In discrete survival analysis one considers time-to-event responses in which time is measured on an ordinal discrete time scale. This includes situations where the outcome values are intrinsically discrete or a grouped version of underlying continuous event times. This framework allows easier interpretation of hazard rates, handles ties naturally and can be embedded, with appropriate data preprocessing, into a generalized additive model framework, which makes it possible to take advantage of existing software (Tutz and Schmid, 2016). In this talk we give an overview of the R add-on package discSurv (Welchowski et al., 2024), which is designed as a toolbox supporting all phases of applied discrete time analysis with convenient helper functions. Specifically, the package supplies preparatory functions for appropriate re-shaping of data and for fitting regression models with censored single-event data, competing risks and subdistribution hazards (Berger et al., 2020). In addition, it includes methods for assessing calibration, discrimination (Schmid et al., 2018) and goodness of fit of discrete survival models.
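As a minimal sketch of the package's typical workflow (the function and argument names below follow the CRAN documentation as we read it and may differ between package versions; the toy data are invented):

    library(discSurv)

    # toy data: continuous event times, censoring indicator, one covariate
    set.seed(1)
    dat <- data.frame(
      time  = rexp(100, rate = 0.1),   # continuous event or censoring time
      event = rbinom(100, 1, 0.7),     # 1 = event observed, 0 = censored
      x     = rnorm(100)
    )

    # 1) group the continuous times into discrete intervals
    datDisc <- contToDisc(dataShort = dat, timeColumn = "time",
                          intervalLimits = quantile(dat$time,
                                                    probs = seq(0.2, 1, 0.2)))
    # contToDisc returns the grouped time as a factor; coerce to integer codes
    datDisc$timeDisc <- as.numeric(datDisc$timeDisc)

    # 2) reshape to the long (person-period) format
    datLong <- dataLong(dataShort = datDisc, timeColumn = "timeDisc",
                        eventColumn = "event", timeAsFactor = TRUE)

    # 3) fit the discrete hazard model as a binary GLM; the complementary
    #    log-log link corresponds to a grouped proportional hazards model
    fit <- glm(y ~ timeInt + x, data = datLong,
               family = binomial(link = "cloglog"))
    summary(fit)

After the reshaping step, each row of datLong represents one person-period, with the binary response y indicating whether the event occurred in that interval, so standard binary-regression machinery (GLMs, GAMs, penalized or tree-based learners) can be reused for estimation.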