Assessing the value of risk predictions by using risk stratification tables.
Abstract
The recent epidemiologic and clinical literature is filled with studies evaluating statistical models for predicting disease or some other adverse event. Risk stratification tables are a new way to evaluate the benefit of adding a new risk marker to a risk prediction model that includes an established set of markers. This approach involves cross-tabulating risk predictions from models with and without the new marker. In this article, the authors use examples to show how risk stratification tables can be used to compare 3 important measures of model performance between models with and without the new marker: the extent to which the risks calculated from the models reflect the actual fraction of persons in the population with events (calibration); the proportions in which the population is stratified into clinically relevant risk categories (stratification capacity); and the extent to which participants with events are assigned to high-risk categories and those without events are assigned to low-risk categories (classification accuracy). The authors also detail common misinterpretations and misuses of the risk stratification method and conclude that the information that can be extracted from risk stratification tables is an enormous improvement over commonly reported measures of risk prediction model performance (for example, c-statistics and Hosmer-Lemeshow tests) because it describes the value of the models for guiding medical decisions.
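As a rough illustration of the cross-tabulation and the three performance measures described above, the following Python sketch simulates data, forms a risk stratification table from two models, and computes calibration, stratification capacity, and classification accuracy within risk categories. The 5% and 20% cut points, variable names, and simulated models are illustrative assumptions, not values taken from the article.

```python
# Minimal sketch of a risk stratification table, assuming two logistic-style
# risk models (with and without a hypothetical new marker) and illustrative
# risk cut points of 5% and 20%. All names and thresholds are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000

# Simulated data: an established marker, a new marker, and binary outcomes.
x_old = rng.normal(size=n)
x_new = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(-2.5 + 0.8 * x_old + 0.6 * x_new)))
event = rng.binomial(1, p_true)

# Predicted risks from the baseline and expanded models (here the generating
# probabilities stand in for fitted values to keep the sketch short).
risk_old = 1 / (1 + np.exp(-(-2.5 + 0.8 * x_old)))
risk_new = p_true

cuts = [0.0, 0.05, 0.20, 1.0]            # clinically relevant risk categories
labels = ["<5%", "5-20%", ">20%"]
cat_old = pd.cut(risk_old, cuts, labels=labels)
cat_new = pd.cut(risk_new, cuts, labels=labels)

# Risk stratification table: cross-tabulate risk categories from the two models.
table = pd.crosstab(cat_old, cat_new,
                    rownames=["old model"], colnames=["new model"])
print(table)

df = pd.DataFrame({"cat_new": cat_new, "event": event, "risk_new": risk_new})

# Calibration: observed event rate vs. mean predicted risk within each category.
print(df.groupby("cat_new", observed=True)
        .agg(observed_rate=("event", "mean"),
             mean_predicted=("risk_new", "mean")))

# Stratification capacity: share of the population in each risk category.
print(df["cat_new"].value_counts(normalize=True).sort_index())

# Classification accuracy: events assigned to the high-risk category and
# non-events assigned to the low-risk category.
events_high = ((df["cat_new"] == ">20%") & (df["event"] == 1)).sum() / df["event"].sum()
nonevents_low = ((df["cat_new"] == "<5%") & (df["event"] == 0)).sum() / (df["event"] == 0).sum()
print(f"events in >20% category: {events_high:.2f}, "
      f"non-events in <5% category: {nonevents_low:.2f}")
```

Comparing these summaries between the baseline and expanded models (for example, repeating the per-category calculations with `cat_old` in place of `cat_new`) is what the stratification table makes explicit, whereas a single c-statistic or Hosmer-Lemeshow test does not.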