How doctors interpret published data from clinical trials of new drugs


Although it has nothing to do with prostate cancer in particular, there is an interesting article in this week’s issue of the New England Journal of Medicine that assesses how physicians interpret the published results of clinical trials of new drugs — based on the quality of the trials (their “rigor”) and who put up the money to carry the trials out (“funding”). The full text of this paper by Kesselheim et al. is available online.

Why are we bringing this paper to your attention? Because we think it will help patients and caregivers understand why their doctors take some published studies more seriously than others.

Kesselheim and his colleagues constructed summaries (“abstracts”) of three fictitious clinical trials of three fictitious new drugs:

  • A “high rigor” trial, which had these key characteristics
    • The “new” study drug was compared to an older “active” drug; the trial was randomized and double-blind; > 5,000 patients were enrolled (but < 9 percent dropped out); the study end-point was death (“mortality”); patients were followed for 36 months; the drug was reported to be safe (as well as effective).
  • A “medium rigor” trial, which had these key characteristics
    • The “new” study drug was compared to an older “active” drug; the trial was randomized but only single-blind (meaning that the patients didn’t know what drug they were getting, but the doctors did); only 964 patients were enrolled (and 13 percent dropped out); the study end-point was a surrogate for mortality (e.g., progression-free survival); patients were followed for 12 months; the drug was reported to be safe (as well as effective).
  • A “low rigor” trial, which had these key characteristics
    • The “new” study drug was compared to “usual care”; the trial was randomized but “open label” (i.e., everyone knew whether the patient was being given the study drug or not); only 483 patients were enrolled (and 19 percent dropped out); the study end-point was a surrogate for mortality; patients were followed for just 4 months; there was no report on the safety of the new drug.

In addition, each of the trials was clearly indicated to have (or not to have) one of three types of funding:

  • From the National Institutes of Health (NIH)
  • From industry (i.e. from a drug company)
  • No external funding at all

The mathematicians among our readers will easily calculate that the authors were therefore able to generate a total of 27 different fictional trial abstracts (3 levels of rigor × 3 funding descriptions × 3 drugs). The reason for using abstracts is that we know that physicians often only have time to read abstracts of papers before deciding whether to read entire articles.
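For readers who like to see the arithmetic spelled out, the combinations can be enumerated in a few lines of Python (the drug names below are placeholders, not the labels used in the actual study):

```python
from itertools import product

# The three design variables manipulated in the study:
rigor_levels = ["high rigor", "medium rigor", "low rigor"]
funding_sources = ["NIH", "industry", "no external funding"]
drugs = ["drug A", "drug B", "drug C"]  # hypothetical names

# Every distinct (drug, rigor, funding) combination yields one abstract.
abstracts = list(product(drugs, rigor_levels, funding_sources))
print(len(abstracts))  # 3 x 3 x 3 = 27
```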

Between July and October 2011, the research team was able to enroll 503 board-certified physicians who were each asked to review one abstract of a trial of each of the three fictional drugs, assigned at random. The findings are as follows:

  • Of the 503 physicians enrolled, 269 (53.5 percent) actually completed the survey.
  • Respondents were clearly able to distinguish between the rigor levels of the trials (see Figure 1 of the study).
  • Respondents were clearly influenced by the fact that trials were funded by industry, and were less likely to describe such trials as “important” than trials funded by the NIH (see Figure 2 of the study).
  • Physicians who believed strongly that industry funding may influence trial results were much less likely to prescribe the hypothetical new drugs than physicians who were less convinced that industry funding could influence trial outcomes (regardless of the rigor of the trial).

The authors conclude that the physicians who completed this survey “understood and appreciated methodologic differences” when they read the fictitious abstracts of the hypothetical new drugs provided to them. They also concluded that “respondents downgraded the credibility of industry-funded trials” compared with those supposedly funded by the NIH or having no funding source listed.

If we apply the “rigor” criteria to this actual study (to the extent that we are able), we should note that while this trial was very well designed and conducted, and was independently grant-funded (full details are available at the end of the paper), it did have two significant limitations: its size (an initial enrollment of “only” 503 physicians) and its very high drop-out rate (of 46.5 percent). It should also be noted that the study was focused on “internists” (specialists in internal medicine) only and that, as the authors are careful to point out, it may be inappropriate to generalize the results of this study to other specialist groups (e.g., oncologists or urologists).

The authors are also careful to point out that just because a trial is funded by industry does not necessarily imply that it cannot be conducted with a very high level of rigor. Industry has — in the past — not always been as careful to ensure the rigor of clinical trials of new drugs as it really ought to be. However, as far as prostate cancer is concerned, some of the most recently conducted Phase III trials (like those completed in the past couple of years for abiraterone acetate and enzalutamide) appear to have been conducted with a very high level of rigor.
