A medical oncologist’s views on the USPSTF draft recommendations


In the most recent issue of The ASCO Post, an article by a highly regarded specialist in medical oncology discusses recent recommendations from the U.S. Preventive Services Task Force (USPSTF).

Derek Raghavan, MD, PhD, was appointed president of the new Levine Cancer Institute (established by the Carolinas HealthCare System) in December 2010. He has had a prestigious career as a medical oncologist with special interests in urologic cancers.

In his article in the November issue of The ASCO Post (“Do we need the USPSTF?”), he expresses significant concern not so much with the actual findings of the USPSTF — which he acknowledges may well be valid — but with the way in which those findings are released and publicized in draft form, with no prior input from the specialist societies best placed to comment on how the recommendations will be interpreted.

Dr. Raghavan’s viewpoint seems highly appropriate to us. Whether the USPSTF’s findings are accurate or not, the degree of publicity they are bound to receive suggests that they should undergo scrutiny by interested groups at a less public stage before general release, and it may well be that the way the recommendations are currently framed could be significantly improved.

The new Levine Cancer Institute will be headquartered in Charlotte, NC, and is scheduled to open in 2012.

11 Responses

  1. A consumer perspective: The USPSTF’s current process is a rare chance that consumers have to see an analysis of the science that is not filtered through the lens of those with a vested interest of any kind (financial, disciplinary, or true believer).

    Based on the relentless outpouring of personal anecdotes and melodramatic hyperbole from both the treatment and advocacy communities, it is evident that it is virtually impossible, at the institutional level and in many cases at the individual level, for the prostate cancer “community” to quit rationalizing beliefs about what prostate cancer is and isn’t, and how it should and shouldn’t be treated, that are no longer validated by the current science — at least not publicly.

    And it’s a real shame, and it was a terribly disappointing response — indicative of decades more over-treatment and unnecessarily maimed men — because literally hundreds of thousands of very real people will pay the price for the arrogance of the backward-facing. To pretend that the maiming outcomes of over-treatment in 47 men are trivial, in order to make the “save” in one man, is a pretty intellectually and ethically impoverished way to address a complicated disease.

  2. Another case of shooting the messenger. A recurring complaint about the USPSTF recommendation, from both physicians and advocates, is that the panel didn’t include any urologists or oncologists, typically referred to (as in this statement) as the “experts” who must be consulted before making a recommendation. That is a logically indefensible complaint. Urologists and oncologists must provide the care, perform the tests, make the diagnoses, and attribute the mortality. After that, it is a statistical problem, and the only experts who must be consulted are qualified statisticians. And that is exactly what the USPSTF did. The panel presented its results exactly as it should have, simply stating them devoid of any emotion or hyperbole. It should be praised for following rigorous scientific standards, not attacked because various groups and individuals don’t like the answer that rigorous scientific investigation gave.

    Since the recommendations were released, the urology community in particular has clearly demonstrated why an entirely independent review was critical. I have no doubt that the vast majority of practitioners believed that thousands of lives have been saved every year through screening and subsequent treatment. But supporting that claim now by quoting things such as 5-year survival rates between screen-diagnosed and non-screen-diagnosed populations (laughable selection bias) or the stage at which a diagnosis was made (confirmation bias) indicates that they either do not understand basic statistics, cannot admit to themselves (or anyone else) that they have likely been doing more harm than good, or are simply lying to further an agenda. Whichever is the case, they have far too much personal interest (emotional or financial) to provide unbiased input to the panel. And all the ad hominem attacks and appeals to emotion in the world are not going to change that fact.

  3. Jim, Thank you so much! Beautifully stated! I hope your message won’t fall entirely on deaf ears.

  4. Unfortunately, there are some serious statistical problems with the USPSTF report.

    For example, the discussion in the report, and by some Task Force members in public comments, of the lack of statistically significant effects on overall mortality is a mistake from a statistical standpoint.

    Given the likely rates of overall mortality in these groups of men, there is no way that these studies have sufficient sample size to detect plausible effects of prostate cancer screening on overall mortality.

    For example, the European study estimated a statistically significant reduction in prostate cancer mortality of 0.07 percentage points at an average follow-up of 9 years after screening began. If overall mortality is about 10%, a statistical analysis would need a sample of millions of participants to have reasonable “power” to detect a 0.07 percentage point reduction in overall mortality (a rough version of this calculation is sketched below). Such a large-scale study will never occur.
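
    As a minimal sketch of that sample size calculation, assuming a two-sided two-proportion z-test at the 5% significance level with 80% power (the 10% baseline mortality figure comes from the paragraph above; the test setup is my assumption):

    ```python
    # Rough sample size needed to detect a 0.07 percentage point drop in
    # overall mortality, via a standard two-proportion z-test formula.
    # The 10% baseline mortality and 80% power are illustrative assumptions.
    from statistics import NormalDist

    p1 = 0.10                               # assumed overall mortality, control arm
    p2 = p1 - 0.0007                        # screening arm: 0.07 point reduction
    z_alpha = NormalDist().inv_cdf(0.975)   # two-sided alpha = 0.05
    z_beta = NormalDist().inv_cdf(0.80)     # 80% power

    n_per_arm = ((z_alpha + z_beta) ** 2
                 * (p1 * (1 - p1) + p2 * (1 - p2))
                 / (p1 - p2) ** 2)
    print(f"~{n_per_arm:,.0f} men per arm")  # roughly 2.9 million per arm
    ```

    That is nearly 6 million participants in total, which is why such a trial will never be run.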

  5. That is true, but it is most definitely not a statistical mistake made by the panel. It is exactly the point. Any overall benefit that might actually exist is so tiny that, even with a study of over 100,000 men, it is not statistically significant. The harms, on the other hand, are clear. This is the basis of the “D” rating: fair evidence that the harms outweigh any benefits.

  6. Statistical insignificance is not the same as substantive insignificance.

    You would be right if the benefits and costs of screening were mainly due to the screening itself, rather than the treatment. The 0.07% reduction in prostate cancer mortality means that the “number needed to screen” (NNS) to prevent one prostate cancer death is high: over 1,000 men. If screening itself had high costs, you would be right that this number would mean that the harms outweigh the benefits.

    However, the major costs, both financial and in terms of health, of screening are not due to the screening in and of itself, but to the subsequent treatment decisions. So the key issue is whether the “number needed to treat” (NNT) to reduce mortality by one prostate cancer death is unacceptably high for treatment that stems from widespread screening. That is, if we need to treat a lot of people to reduce one prostate cancer death, then we have to weigh the side effects for all those treated versus the reduced probability of death.

    Although the European study suggests that the NNS is over 1,000, the NNT at an average 9-year follow-up is 48. This implies a 2% lower probability of death after 9 years if treatment is undergone after screening. Furthermore, the data imply that the screening and control arms have probabilities of prostate cancer death that begin to diverge at 7 years and continue to diverge up to 12 years after screening begins, which is about the maximum follow-up for which the European study has any power to detect anything. As of 12 years, the NNT is 18. This implies a 5% reduction in the probability of death as of 12 years after screening has begun, conditional on the man having a prostate cancer identified via screening. (The time interval after the actual treatment would obviously be less than that.) The arithmetic behind these figures is sketched at the end of this comment.

    So the tradeoff is a 2% reduced risk of death after 9 years, and a 5% reduced risk of death after 12 years, versus a 30 to 50% risk of serious side effects. Although the risk of side effects is much higher than the probability that the treatment will save a life, most people would rather be alive with the side effects than dead.

    I think different men will make different choices when faced with that tradeoff. I think men should be informed, before they decide on screening, that it may lead to some very difficult treatment decisions. If someone believes he would rather risk death than risk the side effects, then there is probably not much point in his being screened. That is why I think the appropriate grade for PSA screening is a C grade, which recognizes that the appropriate choice may vary across men.
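
    To make the arithmetic above concrete, here is a minimal sketch that converts the quoted NNS and NNT figures into absolute risk reductions (all numbers are the ones quoted in this comment):

    ```python
    # Convert the quoted NNS/NNT figures into absolute risk reductions.
    arr_screened = 0.0007                  # 0.07% absolute reduction per man screened
    print(f"Implied NNS: {1 / arr_screened:,.0f}")  # ~1,429, i.e. over 1,000

    for years, nnt in [(9, 48), (12, 18)]:
        arr_treated = 1 / nnt              # risk reduction per man treated
        print(f"NNT {nnt} at {years} years -> "
              f"{arr_treated:.1%} lower risk of prostate cancer death")
    # ~2.1% at 9 years and ~5.6% at 12 years, consistent with the 2% and 5%
    # figures above, to be weighed against a 30-50% risk of side effects
    ```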

  7. Dear Tim:

    I think that if you are going to carry out the analysis at that level of detail, you also have to take account of the side effects and mortality rate (which are far from negligible) associated with biopsy prior to any treatment.

    As far as I am aware, none of the trials conducted to date has ever included this information, which is, in itself, rather disturbing (at least to me!).

  8. According to the European study, “Deaths that were associated with prostate-cancer–related interventions were categorized as deaths from prostate cancer.” Therefore, any deaths due to biopsy were included as a prostate cancer related death. So, when the study estimates a reduction in prostate-cancer related deaths in the screening arm relative to the control arm, this is net of any increase in deaths due to increased biopsies, or due to increases in surgery-related mortality.

    However, I agree with you that a complete benefit cost analysis should include the costs of biopsies, which are obviously greater on a per-procedure basis than the costs per PSA test. These costs are not just mortality, but any problems caused by the biopsy.

    The evidence does suggest that the major benefits and costs of PSA screening are due to the subsequent treatment, not the screening itself or even the biopsy. For example, the recent piece in the NEJM by Brett and Ablin cites a figure of $5.2 million in screening costs to prevent one prostate cancer death. This cost figure probably comes from a study by Shteynshlyuger and Andriole in the Journal of Urology, March 2011. If one looks at this study, over 90% of these costs are treatment costs. I should also note that this study’s baseline cost per life saved assumes an NNT of 48. If the NNT is lower, one gets quite different costs per life saved.
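
    To illustrate how sensitive that figure is to the NNT, here is a rough sketch that decomposes the quoted $5.2 million into treatment and non-treatment costs; the exact 90/10 split, and the assumption that treatment costs scale linearly with the NNT, are mine, for illustration only:

    ```python
    # Rough decomposition of cost per prostate cancer death averted.
    # Assumes the quoted $5.2M baseline at NNT = 48, with 90% of it being
    # treatment cost (an illustrative split) that scales linearly with NNT.
    baseline_cost = 5.2e6     # $ per death averted at the baseline NNT
    treatment_share = 0.90    # "over 90% of these costs are treatment costs"

    cost_per_treated_man = baseline_cost * treatment_share / 48
    fixed_cost = baseline_cost * (1 - treatment_share)  # screening, biopsy, etc.

    for nnt in (48, 18):
        total = fixed_cost + cost_per_treated_man * nnt
        print(f"NNT {nnt}: ~${total / 1e6:.1f}M per death averted")
    # NNT 48: ~$5.2M; NNT 18: ~$2.3M. The NNT assumption drives the figure.
    ```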

  9. Dear Tim:

    I am not actually as sure as you seem to be that “deaths that were associated with prostate-cancer–related interventions” included deaths associated with prostate biopsy, since the original study results do not state that explicitly. After all, at the time that the biopsy was being carried out, there was no diagnosis of prostate cancer, and so men without prostate cancer who may have died as a consequence of biopsy might not have been included.

    The section on Methods in the actual trial report states that, “The monitoring committee received reports on the progress of the trial, including prostate-cancer mortality. Causes of death, which were obtained from registries and individual chart review, were assigned according to definitions and procedures developed for the trial.”

  10. Dear Tim Barik,

    With all due respect, you are the one making the statistical mistakes. First, the only thing that can be analyzed is the effect of screening on the population since that is the only parameter that was controlled. Anything else is part of the system response. You can hypothesize that a different system would give better results, but that is irrelevant here since the current system was studied. Deriving meaningful results for a different system would require a completely different study that begins anew.

    Second, no one has argued that a lack of statistical significance proves a complete lack of benefit. No statistical test can do that (hence the phrase “you can’t prove a negative”). But what it can do is show that any effect is too small to reject the null hypothesis at a given confidence level. As you note, how small depends upon the statistical power of the study. A study with over one hundred thousand participants has considerable statistical power. Any benefit on overall mortality is necessarily extremely small, if it exists at all (a rough quantification is sketched at the end of this comment).

    Finally, an NNT of 48 does not mean that a man who chooses to be treated for a screen-detected cancer has reduced his chances of dying by 2%. Men die of many things, and prostate cancer is only a small contributor. Some treated men will die of other causes, and some will die of prostate cancer despite treatment. And if, as the Sitemaster points out, the effects of diagnosis and treatment contribute to the death rate from other primary causes (how many men die prematurely of, say, heart disease because they were weakened by their treatment?), then it is also possible that the treatment leads to an increase in overall mortality. With no statistically significant results, a tiny increase in overall mortality due to screening is as plausible as a tiny decrease.

    As a last point, the Sitemaster has recently presented numbers suggesting that the 30 to 50% rate of side effects in treated men is likely significantly understated.
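
    To put a rough number on the power argument above, here is a minimal sketch of the smallest overall-mortality difference a trial of this general size could reliably detect; the 50,000-per-arm and 10% overall mortality figures are illustrative assumptions, not data from the European study:

    ```python
    # Minimum detectable difference in overall mortality for a large trial,
    # via a two-proportion z-test. Arm size and baseline mortality are
    # illustrative assumptions, not figures from the European study.
    from math import sqrt
    from statistics import NormalDist

    n_per_arm = 50_000
    p = 0.10                                # assumed overall mortality
    z_alpha = NormalDist().inv_cdf(0.975)   # two-sided alpha = 0.05
    z_beta = NormalDist().inv_cdf(0.80)     # 80% power

    mde = (z_alpha + z_beta) * sqrt(2 * p * (1 - p) / n_per_arm)
    print(f"Minimum detectable difference: ~{mde:.2%}")  # ~0.53 points
    # Any undetected effect on overall mortality, up or down, must be
    # smaller than roughly half a percentage point.
    ```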

  11. At this point I think we can very reasonably terminate the discussion on this topic, on the grounds that we are never going to reach a clear conclusion.

