Survival analysis is widely used in evidence-based medicine to examine time-to-event data. Although often applied to survival/death events, time-to-event analysis can address the time to any dichotomous event, for example, the number of days of treatment before an individual goes into remission, or the number of hours spent in the hospital before discharge for a given severity grade of disease. Using these methods, one can compare individual patients or groups of patients. Comparisons between two or more groups can be made with Kaplan-Meier estimates and log-rank tests, both of which are nonparametric. Researchers interested in quantifying the effect of the comparison(s) can use Cox proportional hazards models. Survival analysis techniques were first used for medical studies but have since expanded to a variety of other disciplines, including financial services and engineering.
Issues of Concern
Because survival analysis is used so frequently in the medical literature, healthcare providers must understand the common concepts and analyses associated with these techniques, including Kaplan-Meier curves, log-rank tests, and Cox proportional hazards models.
Survival analysis is used with a binary or dichotomous outcome of interest. Typically, multiple groups are compared; for example, one group may receive treatment A and another treatment B. The survival function is the probability of surviving (or not experiencing the event) up to a specified time point, whereas the hazard rate is the rate at which the event occurs within a given period. To determine survival time, one must know the time of origin (entry into the study) and the time of the event or of last follow-up. Truncation helps in defining an unbiased sample for survival analysis; researchers should not choose subjects they know are likely (or unlikely) to experience the event in order to support their hypotheses.
Censoring is an important topic in survival analysis. Censored subjects never experienced the outcome of interest during the study's specified timeframe. Whether or not an individual is censored should not be associated with whether or not the event occurred. Left censoring occurs when an individual experienced the event before the time of origin; right censoring occurs when an individual did not experience the event during the specified timeframe (the event occurred afterward, or it is unknown if and when it occurred). Essentially, censoring means that the outcome of interest for that subject cannot be determined from the study. Survival analysis examines the time until an individual or group experiences the event of interest or is censored. Ignoring or excluding censored subjects would bias the results. While some study facets remain outside the researchers' control, recruiting subjects whose characteristics suggest better study retention can limit censoring.
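In statistical software, right censoring is typically handled by recording each subject as a pair: a follow-up time and an event indicator. A minimal sketch in Python, using a hypothetical toy dataset:

```python
# Hypothetical toy dataset: each subject is (follow_up_days, event_observed).
# event_observed = 1 means the outcome (e.g., death) was seen;
# 0 means the subject was right-censored (alive at last contact or lost to follow-up).
subjects = [
    (5, 1),   # event on day 5
    (8, 0),   # censored on day 8; outcome unknown thereafter
    (12, 1),  # event on day 12
    (15, 0),  # censored at end of study
]

# Censored subjects still contribute "at-risk" time up to their censoring point,
# so they are kept in the analysis rather than dropped.
n_events = sum(e for _, e in subjects)
n_censored = len(subjects) - n_events
print(n_events, n_censored)  # 2 events, 2 censored
```

This encoding is what allows the methods below to use a censored subject's follow-up time without assuming anything about what happened afterward.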
Life tables (also known as actuarial life tables) differ from other methods of survival analysis in that observations are grouped into fixed time intervals. Life tables present outputs that show whether the event occurred, whether the individual was censored, and the time to that event. When examining a life table, one can determine the proportion of patients who experienced the event (e.g., death) and the interval during which it occurred. Life tables can also denote which individuals were censored and which were not (i.e., experienced the event within the timeframe). Life tables assume that the risk of experiencing the event does not change across intervals. For example, if patients with breast cancer received a second drug halfway through treatment, they might become less likely to die, potentially invalidating the results. Life tables remain useful, but in 1958, Kaplan and Meier proposed a new method that removed the requirement for pre-fixed time intervals. With their method, events/outcomes of interest are assessed as they happen rather than at specific time-point intervals.
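The interval logic above can be sketched as a small pure-Python function. The dataset and interval width below are hypothetical; the half-interval adjustment for censored subjects follows the standard actuarial convention (a subject censored within an interval is counted as at risk for half of it on average):

```python
def life_table(data, width):
    """Actuarial life table. data is a list of (time, event) pairs
    (event = 1 if observed, 0 if censored); width is the fixed interval
    length. Returns rows of
    (interval_start, at_risk, deaths, censored, cumulative_survival)."""
    max_t = max(t for t, _ in data)
    n_intervals = int(max_t // width) + 1
    rows, surv, remaining = [], 1.0, list(data)
    for i in range(n_intervals):
        lo, hi = i * width, (i + 1) * width
        n = len(remaining)
        if n == 0:
            break
        deaths = sum(1 for t, e in remaining if lo <= t < hi and e == 1)
        censored = sum(1 for t, e in remaining if lo <= t < hi and e == 0)
        # Actuarial convention: censored subjects count as at risk
        # for half the interval on average.
        effective_n = n - censored / 2.0
        q = deaths / effective_n if effective_n > 0 else 0.0  # conditional risk
        surv *= 1.0 - q  # cumulative survival through this interval
        rows.append((lo, n, deaths, censored, surv))
        remaining = [(t, e) for t, e in remaining if t >= hi]
    return rows

# Hypothetical toy data: six subjects, 5-day intervals.
rows = life_table([(1, 1), (3, 0), (4, 1), (6, 1), (8, 0), (9, 1)], width=5)
for lo, n, d, c, s in rows:
    # cumulative survival ≈ 0.636 after the first interval, ≈ 0.127 after the second
    print(f"[{lo}, {lo + 5}): at risk={n} deaths={d} censored={c} S={s:.3f}")
```

Note that the cumulative survival column is a product of per-interval conditional survival probabilities, which is the same idea Kaplan and Meier later applied at individual event times instead of fixed intervals.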
One of the most frequently used methods of survival analysis is the Kaplan-Meier (KM) approach. The KM method estimates the probability of survival and typically represents the survival function with a KM survival curve. Similar to life tables, the KM method assumes that censored subjects would have had outcomes similar to those who remained under observation, and that subjects recruited later in the study have the same survival probabilities as those recruited earlier. The KM survival curve often reports median survival time(s), which is a reliable estimate if the majority of the observations are uncensored. One can calculate confidence intervals (CIs) for KM probabilities and plot them on the survival curves to provide a range of plausible values for the population based on the sample. The curve itself does not indicate whether the difference between groups is significant; for that, one can use the log-rank test.
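The product-limit idea behind the KM estimate, S(t) as the running product of (1 - deaths/at-risk) at each observed event time, can be illustrated with a short pure-Python sketch. The six (time, event) pairs are a hypothetical toy dataset; 1 marks an observed event and 0 a censored subject:

```python
def kaplan_meier(data):
    """Kaplan-Meier product-limit estimate.
    data: list of (time, event) pairs, event = 1 if observed, 0 if censored.
    Returns [(event_time, survival_probability)] at each distinct event time."""
    event_times = sorted({t for t, e in data if e == 1})
    surv, curve = 1.0, []
    for t in event_times:
        n_at_risk = sum(1 for u, _ in data if u >= t)          # still under observation
        d = sum(1 for u, e in data if u == t and e == 1)       # events at this time
        surv *= 1.0 - d / n_at_risk   # S(t) = product of (1 - d_i / n_i)
        curve.append((t, surv))
    return curve

# Hypothetical toy data: events at months 6, 6, 7, 13; censoring at 6 and 10.
curve = kaplan_meier([(6, 1), (6, 1), (6, 0), (7, 1), (10, 0), (13, 1)])
print(curve)  # survival drops only at event times, not at censoring times
```

Note how the subject censored at month 10 never causes a drop in the curve but does shrink the at-risk count for the month-13 event, which is exactly how censored observations contribute information without being treated as events.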
The log-rank test is a nonparametric test that compares the survival distributions of two or more groups. This test, which assumes that censoring is unrelated to prognosis, examines the null hypothesis that there is no difference in survival distribution between the groups. In other words, researchers can determine whether the difference between two groups' survival curves is statistically significant, i.e., whether the event rate in one group is consistently higher than in the other over time. Log-rank tests are used when a data set has censored observations; if there are no censored cases, the Wilcoxon rank-sum test can compare survival times. Researchers reporting log-rank tests should specify that the entire distribution is being tested, not a specific timeframe. The log-rank test itself cannot estimate the size of the difference between groups, whereas a Cox proportional hazards model can.
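A minimal sketch of the two-group log-rank statistic, using hypothetical toy groups: at each event time it compares the observed events in group A with the number expected if both groups shared a common hazard, and the resulting statistic is referred to a chi-square distribution with one degree of freedom (critical value 3.84 at the 0.05 level):

```python
def logrank_statistic(group_a, group_b):
    """Two-group log-rank chi-square statistic (1 degree of freedom).
    Each group is a list of (time, event) pairs (event = 1 observed, 0 censored)."""
    pooled = [(t, e, 0) for t, e in group_a] + [(t, e, 1) for t, e in group_b]
    event_times = sorted({t for t, e, _ in pooled if e == 1})
    observed_a = expected_a = variance = 0.0
    for t in event_times:
        n = sum(1 for u, _, _ in pooled if u >= t)                 # total at risk
        n_a = sum(1 for u, _, g in pooled if u >= t and g == 0)    # group A at risk
        d = sum(1 for u, e, _ in pooled if u == t and e == 1)      # events at t
        d_a = sum(1 for u, e, g in pooled if u == t and e == 1 and g == 0)
        observed_a += d_a
        expected_a += d * n_a / n                  # expected under a common hazard
        if n > 1:                                  # hypergeometric variance term
            variance += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    return (observed_a - expected_a) ** 2 / variance

# Hypothetical toy groups with clearly separated event times.
early = [(1, 1), (2, 1), (3, 1), (4, 1)]
late = [(10, 1), (11, 1), (12, 1), (13, 1)]
stat = logrank_statistic(early, late)
print(stat)  # ≈ 7.34, above the 3.84 critical value, so the curves differ
```

Because the statistic sums contributions over every event time, it tests the whole survival distribution at once rather than the difference at any single timepoint, which is why reports should not frame a log-rank result as applying to a specific timeframe.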
Cox Proportional Hazards Model
The Cox proportional hazards model is a semiparametric regression model that allows researchers to examine the effects of multiple variables on survival. Semiparametric means that the method does not require a specific distribution for the survival function; however, it does assume a log-linear relationship between the covariates and the hazard. The output of the Cox model is presented as hazard ratios (HRs).
Hazard, or the hazard function, refers to the probability of the event/outcome occurring within a unit of time, given that the subject has survived up to that time. The hazard ratio (HR) is the ratio of the hazard rates of two groups, for example, two treatment groups. An HR above 1 indicates an increased risk of the event associated with that variable; an HR below 1 indicates a decreased risk. The Cox proportional hazards model investigates the relationships between predictors and the outcome and produces HRs.
An important assumption of the Cox model is that the hazards of the groups being compared remain proportional over time (the proportional hazards assumption). Cox models also allow researchers to adjust for confounders and to estimate the relative risk (RR) of experiencing the event for individuals with given risk factors.
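To make the idea concrete, the following sketch fits a Cox model with a single binary covariate by Newton-Raphson maximization of the partial likelihood (Breslow handling of ties). The dataset is hypothetical, and a real analysis would use established statistical software rather than this illustration:

```python
from math import exp

def cox_binary(data):
    """Fit a Cox model with one binary covariate by Newton-Raphson on the
    partial log-likelihood (Breslow convention for tied events).
    data: list of (time, event, x) with event = 1 if observed and x in {0, 1}.
    Returns the estimated hazard ratio exp(beta)."""
    beta = 0.0
    events = [(t, x) for t, e, x in data if e == 1]
    for _ in range(25):
        grad = info = 0.0
        for t, x_i in events:
            risk = [x for u, _, x in data if u >= t]        # risk set at time t
            s0 = sum(exp(beta * x) for x in risk)           # sum of risk weights
            s1 = sum(x * exp(beta * x) for x in risk)       # weighted sum of x
            mean = s1 / s0                                  # weighted mean of x
            grad += x_i - mean                              # score contribution
            info += mean * (1 - mean)                       # information (x binary)
        if info == 0:
            break
        step = grad / info                                  # Newton-Raphson step
        beta += step
        if abs(step) < 1e-10:
            break
    return exp(beta)

# Hypothetical toy data: (time, event, covariate); the x = 1 subjects tend
# to experience the event slightly earlier.
data = [(1, 1, 1), (2, 1, 0), (3, 1, 1), (4, 1, 0),
        (5, 1, 1), (6, 0, 0), (7, 1, 0), (8, 0, 1)]
hr = cox_binary(data)
print(hr)  # hr > 1: the x = 1 group has the higher estimated hazard
```

Because the partial likelihood conditions on the ordering of events rather than their exact times, no baseline hazard distribution is ever specified, which is what makes the model semiparametric; the single output above is the HR that the assumption of proportional hazards holds constant over follow-up.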
Software is available to compute KM survival curves, log-rank tests, and Cox proportional hazards regressions. This article covers the three survival analyses most commonly used in the medical literature; however, others exist. With a foundational understanding of the commonly used survival analyses addressed here, healthcare providers can properly assess survival analysis methods in the literature to make evidence-based decisions in practice or in their own clinical studies.