Which is the best method for measuring improvements in cancer care? Mortality rates versus survival rates

A new study examining the value of cancer care in the United States compared to Western Europe has ignited a series of responses and debate over the best way to measure the quality of cancer care.

The study by Samir Soneji and JaeWon Yang finds that US cancer mortality rates have decreased only modestly given the sharp increases in spending on cancer treatment. The Health Affairs article, “New Analysis Reexamines The Value Of Cancer Care In The United States Compared To Western Europe”, revisits an earlier study that cited gains in US breast cancer and prostate cancer survival rates relative to certain European countries. Soneji and Yang argue that the earlier study did not account for stage of cancer at diagnosis and that its analysis was subject to bias.

Soneji and Yang found that “the number of deaths averted in the [U.S.] compared to all of Western Europe from 1982 to 2010 varied substantially by cancer type” and “the additional value derived from costlier US cancer care also varied considerably by cancer type.”

These new findings prompted a response from Tomas Philipson and Dana Goldman, the authors of the previous study. Philipson and Goldman argue that mortality rates are a flawed and misleading indicator of health system performance. Because mortality rates are confounded by factors such as poverty, pollution, smoking, and diet, they cannot isolate the impact of cancer treatment.

So, what does this all mean?

Mortality rates are the number of people who die of a certain disease (whether or not it was diagnosed before death) divided by the total population. In addition to being heavily influenced by socioeconomic factors, mortality rates, some argue, say little about the quality of cancer care because they include people who were never diagnosed and never treated, so the healthcare system was never involved. Others counter that mortality rates are the right measure precisely because they include everyone: a person who dies undiagnosed and untreated can itself be seen as a failure of the healthcare system.
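The arithmetic behind that definition is simple. A minimal sketch, using made-up illustrative numbers (the population size and death count here are hypothetical, not from the studies discussed):

```python
# Hypothetical numbers: a population of 1,000,000 in which 250 people
# die of a given cancer in a year, whether or not they were ever diagnosed.
population = 1_000_000
cancer_deaths = 250  # includes deaths among people never diagnosed

# Mortality rates are conventionally reported per 100,000 population.
mortality_rate_per_100k = cancer_deaths / population * 100_000
print(mortality_rate_per_100k)  # 25.0 deaths per 100,000 per year
```

Note that the denominator is the whole population, not just diagnosed patients, which is exactly why the measure captures undiagnosed deaths.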

Aaron Carroll, writing for the New York Times, summarized the debate and provided his own view supporting mortality rates as the best measure for progress in cancer care.

“In reality, you can decrease the mortality rate only by preventing people with the disease from dying, or preventing them from getting it in the first place.”

Aaron E. Carroll, The Upshot, New York Times

Survival rates, Carroll notes, represent how many people live for a certain length of time after they are diagnosed; they are the statistic a newly diagnosed patient most wants to know. The argument against using survival rates as an indicator of the quality of cancer care is that earlier diagnosis lengthens the measured time from diagnosis to death even when treatment does not postpone death at all. This “lead time bias” can give the impression that people are living longer, even though they may simply have been diagnosed earlier. Additionally, as diagnosis rates increase, cancers that would never have developed into a serious illness can skew survival rates to appear more favorable than they are.
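Lead time bias is easiest to see with a single hypothetical patient. In this sketch (the ages are invented for illustration), death from the cancer occurs at age 70 no matter when it is found, yet earlier detection flips the patient from a five-year-survival "failure" to a "success":

```python
# Hypothetical patient: death from the cancer occurs at age 70
# regardless of when it is diagnosed.
death_age = 70

# Scenario A: diagnosed at 67, after symptoms appear.
survival_a = death_age - 67  # 3 years from diagnosis to death

# Scenario B: screening finds the same cancer at 63.
survival_b = death_age - 63  # 7 years from diagnosis to death

# The 5-year survival statistic improves...
print(survival_a >= 5, survival_b >= 5)  # False True
# ...but the patient still dies at 70, so the mortality rate is unchanged.
```

Nothing about the patient's outcome changed; only the clock started earlier, which is why critics say survival rates can overstate progress.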

“You can improve the survival rate, however, by preventing death, preventing people from getting sick, or making the diagnosis earlier. That last factor can make all the difference.”

Aaron E. Carroll, The Upshot, New York Times

In another Health Affairs blog post, H. Gilbert Welch and Elliott Fisher also weigh in on the debate over mortality rates versus survival rates as measures of the value of cancer care. They, too, cite lead time bias and overdiagnosis as major flaws in cancer survival rates. While the debate continues, one thing is clear: both survival rates and mortality rates for many cancers are improving.