Just how real are the annual ACS estimates for risk associated with prostate cancer?


One has to be a very sophisticated statistician to understand how the American Cancer Society (ACS) comes up with its annual estimates for the numbers of people who will get diagnosed with and die of each specific form of cancer in any particular year. (And your sitemaster freely admits that he is not a sophisticated statistician.)

For 2016, the ACS has announced its estimates (see Siegel et al.) that there will be

  • 180,890 men newly diagnosed with prostate cancer
  • 26,120 men who actually die of prostate cancer

These are both the lowest annual numbers that your sitemaster can remember since the introduction of PSA testing in the late 1980s. But before we all start cheering, let’s look back at other ACS estimates.

The ACS’s projections for prior, relatively recent years read as follows:

  • New cases of prostate cancer by year
    • 2005: 232,090
    • 2010: 217,730
    • 2011: 240,390
    • 2012: 241,740
    • 2013: 238,590
    • 2014: 233,000
    • 2015: 220,800
  • Prostate cancer-specific deaths by year
    • 2005: 30,350
    • 2010: 32,050
    • 2011: 33,720
    • 2012: 28,170
    • 2013: 29,720
    • 2014: 29,480
    • 2015: 27,540

In other words, the ACS is projecting a 24 percent decrease in the absolute incidence of prostate cancer in America in 2016 compared to 2013, and a 12 percent decrease in the absolute mortality over the same time period.
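
For anyone who wants to double-check that arithmetic, here is a quick calculation in Python, using nothing but the ACS figures listed above:

    # Percentage declines from the ACS's 2013 estimates to its 2016 estimates
    new_cases_2013, new_cases_2016 = 238_590, 180_890
    deaths_2013, deaths_2016 = 29_720, 26_120

    incidence_drop = (new_cases_2013 - new_cases_2016) / new_cases_2013 * 100
    mortality_drop = (deaths_2013 - deaths_2016) / deaths_2013 * 100

    print(f"Projected decline in new cases: {incidence_drop:.1f}%")   # about 24%
    print(f"Projected decline in deaths:    {mortality_drop:.1f}%")   # about 12%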

Now the ACS has been very consistent in explaining that, because its projections are based on models, one shouldn’t really make this sort of direct year-to-year comparison of its estimates for incidence and mortality. But of course, if one really can’t make such comparisons, then what is the point of reporting these data at all? The other thing we never get is a later report of exactly how many people actually did die of a specific cancer in a particular year. All one can get for comparative purposes are the age-adjusted incidence and death rates per 100,000 of the relevant population.

Exactly why the number of men who will be newly diagnosed with prostate cancer in 2016 should drop so dramatically is hard to understand. Siegel et al. state that the decline in prostate cancer incidence reflects “recent rapid declines in prostate cancer diagnoses.” But it has been 4 years since the USPSTF recommended the abandonment of mass, population-based screening for risk of prostate cancer. Why would this suddenly have such an impact in 2016? Could there be a problem with the model? The absolute number of men entering the prime age range for risk of a diagnosis with prostate cancer — as a consequence of the “baby boom” after World War II — is (at least arguably) still rising!

There are some who are going to argue that the ACS estimates are finally taking account of the USPSTF recommendations on prostate cancer screening, but that does not necessarily appear to be the case. Our suspicion is that this massive projected decrease in the number of new prostate cancer diagnoses for 2016 needs to be taken with at least a small pinch of salt.

If we can’t compare the absolute incidence and mortality data created by the ACS from year to year with any expectation of reasonable accuracy, perhaps what should really be reported each year is the projected, age-adjusted incidence and mortality rates per 100,000 of the relevant population. At least those could be accurately compared later to the actual numbers, so that we would have some idea of the accuracy of the original projections.
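
For readers who have not run into the term before, an age-adjusted rate per 100,000 is simply a weighted average of age-specific rates, with the weights taken from a fixed “standard” population (SEER uses the 2000 US standard). The little Python sketch below shows the basic calculation; the age groups, counts, and weights are entirely made up for illustration and are not ACS or SEER figures.

    # Direct age standardization: weight each age-specific rate by the share
    # of that age group in a fixed standard population, then sum.
    # All numbers below are invented purely to show the mechanics.

    age_groups = [
        # (cases, population at risk, standard-population weight)
        (  500, 5_000_000, 0.30),   # e.g., men 50-59
        (1_200, 4_000_000, 0.40),   # e.g., men 60-69
        (1_500, 2_500_000, 0.30),   # e.g., men 70+
    ]

    adjusted_rate = sum(
        (cases / population) * weight
        for cases, population, weight in age_groups
    ) * 100_000

    print(f"Age-adjusted rate: {adjusted_rate:.1f} per 100,000")   # 33.0 in this toy example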

10 Responses

  1. The statistics behind this are above my pay grade as well, but they do evaluate the accuracy of the models periodically — the last time was in 2012. Reliable incidence data come from the North American Association of Central Cancer Registries (NAACCR) and are available down to the county level, but those data are 4 years behind because of reporting delays and the cumbersome process of collecting and processing all of that information.

    They then look at a large number of variables that may predict incidence. At the individual level these include age, gender, race, county of residence, cancer site, and year of diagnosis. At the county level they also include rates of cancer screening; income, education, housing, racial distribution, foreign birth, language isolation, urban/rural status, land area, and Census division; the availability of physicians and hospitals; health insurance coverage; and rates of cigarette smoking, obesity, vigorous activity, and mortality due to the same type of cancer. In all, they evaluated 30 such variables to find the most predictive ones, and they apply estimates of those variables for the projected year to come up with a best estimate. Because they changed this method in 2012 (in particular, to be more accurate for breast, prostate, and lung cancer), they will be able to evaluate in 2016 how well the revised model worked.

    The mortality data come from the National Center for Health Statistics and from the SEER database. They are also 4 years behind and must be projected. In 2012 they changed the projection model for this statistic as well, to better fit retrospective data. Next year we should also start to know how well it did.
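
    Very roughly, the flavor of such a projection is something like the toy Python sketch below: fit a trend to the lagged registry counts and extrapolate forward. To be clear, this is not the ACS’s actual model (which is far more elaborate and uses all of those covariates); the counts here are invented purely for illustration.

        # Toy flavor of a delay-adjusted projection: fit a simple linear trend
        # to the most recent observed registry counts (which lag ~4 years) and
        # extrapolate to the current year. Counts below are invented.
        import numpy as np

        years = np.array([2008, 2009, 2010, 2011, 2012])
        observed_cases = np.array([230_000, 228_000, 225_000, 218_000, 210_000])

        slope, intercept = np.polyfit(years, observed_cases, 1)  # least-squares line
        projection_2016 = slope * 2016 + intercept
        print(f"Toy projection for 2016: {projection_2016:,.0f} new cases")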

    Because they are using the most recent year’s data on screening in their incidence projections, it may very well be that the screening recommendations are a cause of the decreased incidence, but there are a lot of other factors too. For mortality, I think we’re seeing the effect of longer lead times, earlier treatment of potentially high-risk prostate cancer, more curative radiation therapy, and extended survival as docetaxel, second-line hormonal agents, and other medicines gain wider acceptance.

    A statistic I would like to see is the median prostate cancer mortality (in years) and other-cause mortality (in years) for men diagnosed in each year. Of course this would lag quite a few years because of the long natural history. So, for example, how long did it take for half of the men diagnosed in 1995 to die of prostate cancer, and how many died of other causes first? What about those diagnosed in 1996, 1997, etc.? I think that would quantify how much better we’re getting at treating prostate cancer, because it corrects for variances in detection.
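
    Purely to illustrate the kind of calculation I mean, here is a toy Python sketch with invented records (and it ignores the censoring and competing-risk subtleties a real analysis would have to handle):

        # For each diagnosis-year cohort: median years to prostate cancer death,
        # plus how many men died of other causes first. Records are invented.
        import statistics

        # (year diagnosed, years from diagnosis to death, cause of death)
        records = [
            (1995,  4.0, "prostate cancer"),
            (1995,  7.5, "prostate cancer"),
            (1995,  3.0, "other"),
            (1995, 11.0, "prostate cancer"),
            (1996,  9.0, "prostate cancer"),
            (1996,  2.5, "other"),
            (1996, 12.0, "prostate cancer"),
        ]

        for year in sorted({yr for yr, _, _ in records}):
            cohort = [r for r in records if r[0] == year]
            pc_times = [t for _, t, cause in cohort if cause == "prostate cancer"]
            other_first = sum(1 for _, _, cause in cohort if cause == "other")
            print(f"Diagnosed {year}: median years to prostate cancer death = "
                  f"{statistics.median(pc_times):.1f}; died of other causes first = {other_first}")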

  2. Allen:

    I suppose that this is really my point. What’s the good of developing projections if they haven’t been designed to correlate, in a meaningful manner, with the actual data (which inevitably take a lot longer to collect and assess)?

  3. I’d love to know the database numbers of how many males in the US get DRE/PSA tests per year now compared to the last decade.

  4. So would a lot of people, but getting accurate counts of the numbers of PSA tests given each year is actually very difficult — and the counts would be meaningless unless one knew the precise reason a PSA test was given to each patient on each occasion. Remember that PSA tests can be given for a very wide range of reasons, including many that have nothing to do with risk for prostate cancer. Getting accurate numbers for how many men got DREs is probably impossible. It is likely that many millions of DREs are carried out each year as one component of a general health check-up, and so we don’t even have billing data that could be used to identify that a DRE has been carried out.

  5. Maybe someday, after electronic health records (EHRs) have been universally established in the US. I think the government has set 2020(?) as the final date for universal adoption, and has already started rewarding doctors for meaningful use and penalizing them if they don’t demonstrate it. Eventually, that will be an enormous and useful source of Big Data. I’ve already seen a few studies from some big institutions that have implemented it.

  6. Allen (and Houston):

    We already have a serious problem with the universality of EHRs. The EHR system providers are doing everything they can to make sure that there is little to no compatibility between the various systems. And just the other day HHS acknowledged that “meaningful use” as it was originally conceived is never going to happen — although it is unclear exactly what they intend to do about it.

  7. An issue that has always concerned me is how many men who die from prostate cancer actually have that recorded as their cause of death.

    With regard to my three friends who passed away in 2015, and for whom I advocated, I did my best to make sure the death certificate correctly reflected our insidious disease.

    Perhaps ACS should reduce those multi-million dollar salaries paid to Brawley and others, and use the saved funds to develop a new model. Otherwise we descend into a self-fulfilling spiral where Brawley recommends against PSA testing, tells us it has led to less disease and fewer deaths, and gets paid more for doing such an excellent job …. Hello?!?

  8. Amen! Couldn’t have stated it any better than Rick D did in his last paragraph above. This just reignites my pre-existing outrage and disdain for the USPSTF, ACS, Brawley, and ACS’s deceptive fundraising tactics.

    Thanks for allowing me to vent on this most critical issue of support for PSA screening (and baseline screening) for asymptomatic men!

  9. I just read about a Senate bill that would help fix the compatibility issue you cited.

  10. Allen:

    I think we are going to need rather more than another piece of Congressional action. See this editorial commentary in this week’s NEJM.
