How long is long enough? Length of follow-up on clinical trials for primary treatments

Many of us are faced with the difficulty of choosing a primary therapy based on data from clinical trials with follow-up shorter than our life expectancy. How can we know what to expect in 20 or 30 years? This is quite apart from the fact that most published studies only tell us how the treatment worked for a chosen group of patients treated by some of the top doctors at some of the top institutions — they never predict for the individual case that we really want to know about; i.e., “me.” The issue of length of follow-up is particularly problematic for radiation therapies, although it may be too short for surgery and active surveillance studies as well. How can we make a reasonable decision given the uncertainty of future predictions?

I may have missed some studies, but the longest follow-up studies I have seen for each primary therapy treatment type are as follows:

Note that Mt. Sinai Hospital published data from a study of LDR BT patients with longer follow-up (15 years); however, all patients in that study were treated from 1988 to 1992, before modern LDR BT methods were used, and such results are irrelevant for decision-making today (see below).

On a personal note, I was treated at the age of 57 and had an average life expectancy of 24 years, possibly more because I have a healthy lifestyle and no co-morbidities. So there were no data that could help me predict my likelihood of cause-specific survival and quality of life out to the end of my reasonably expected days. What’s more, the therapies with the longest follow-up (ORP, EBRT + HDR BT boost) also have the highest rates of serious side effects. With my low-risk cancer, there seemed little need to take that risk with my quality of life.

While we may be tempted to wait for longer follow-up, (1) we don’t always have that luxury, and (2) there very likely will not be any longer follow-up. Not only is follow-up expensive, but there are also the problems of non-response, drop-outs, and death from other causes. The median age of patients in radiation therapy trials is typically around 70 years, so many will leave the study. The 10-year UCLA study, for example, started with 448 patients, but only 75 patients had a full 10 years of follow-up. The “10-year” study of IMRT at MSKCC started with 170 patients, but only 8 patients were included for the full 10 years! Once the sample size gets this small, we have to question the validity of the probability estimates, and there is no statistical validity in tracking further changes. (It is worth noting that IMRT became the standard of care without longer-term or comparative evidence.)

An even bigger problem is what I call irrelevance. Technological and medical science advances continue at so brisk a pace that the treatment techniques 10 years from now are not likely to resemble anything currently available (another argument for active surveillance, if that’s an option). Dose escalation, hypofractionation, image-guidance technology, intraoperative planning, VMAT, variable multi-leaf collimators, on-board cone-beam CT, and high precision LINACs — all innovations that have mostly become available in the last 15 years — have dramatically changed the outcomes of every kind of radiation therapy, and made them totally incomparable to the earlier versions. Imagine shopping for a new MacBook based on the performance data of the 2000 clamshell iBook. By the time we get the long-term results, they are irrelevant to the decision now at hand.

What we want to learn from long-term clinical trials is the answer to two questions:

  1. Will this treatment allow me to live out my full life?
  2. What are the side effects likely to be?

To answer the first question, researchers look at prostate cancer-specific survival. It’s not an easy thing to measure accurately — cause of death may or may not be directly related to the prostate cancer. We usually look at overall survival as well. For a newly diagnosed, intermediate-risk man, prostate cancer survival is often > 20 years, so we can’t wait until we have those results to make a decision. Taking one step back, we look at metastasis-free survival, but that is often > 15 years. Sometimes there is clinical evidence of a recurrence before a metastasis is detected (e.g., from a biopsy or imaging). More often, the only timely clue of recurrence is biochemical — a rise in PSA at some arbitrary point in time post-treatment. That point is set by consensus. Researchers arrived at the consensus after weighing a number of factors, especially its correlation with clinically-detected progression. Biochemical recurrence-free survival (bRFS), or its inverse, biochemical failure (BF), is the most commonly used surrogate endpoint.

We might be comfortable if outcomes seem to have reached a plateau. For some of the above studies, we can compare BF rates reported earlier in the study with those reported at the end (ideally broken out by risk group).

  • In the UCLA study of HDR BT monotherapy, the 10-year results are virtually unchanged from the 8-year results.
  • In the Kiel study of HDR BT boost, the 5-, 10-, and 15-year BF rates were 22, 31, and 36 percent respectively.
  • In the RCOG study of LDR BT boost, the 10-, 15-, 20-, and 25-year BF rates were 25, 27, 27, and 27 percent respectively.
  • In the Mt. Sinai study of LDR BT, the 8- and 12-year BF rates were 12 and 10 percent for low risk patients; 19 and 16 percent for intermediate-risk patients; and 35 and 36 percent for high-risk patients, respectively.
  • In the MSKCC study of IMRT, the 3-, 8- and 10-year BF rates were 8, 11, and 19 percent for low-risk patients; 14, 22, and 22 percent for intermediate-risk patients; and 19, 33, and 38 percent for high-risk patients, respectively.
  • In the Katz SBRT study, the 5- and 7-year BF rates were 2 and 4 percent for low-risk patients, 9 and 11 percent for intermediate-risk patients, and 26 and 32 percent for high-risk patients, respectively.
  • For comparison, the 5-, 10-, 15-, and 25-year recurrence rates for ORP at Johns Hopkins were 16, 26, 34, and 32 percent.
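To make the plateau question concrete, here is a minimal sketch that tabulates a few of the BF percentages quoted above and computes the percentage-point change between successive reporting points (the study labels are my own shorthand). Small late increments suggest outcomes have stabilized; large ones suggest continuing failure.

```python
# Reported biochemical failure (BF) rates (%) at successive follow-up points,
# transcribed from the studies cited above.
bf_rates = {
    "Kiel HDR BT boost (5/10/15 yr)": [22, 31, 36],
    "RCOG LDR BT boost (10/15/20/25 yr)": [25, 27, 27, 27],
    "MSKCC IMRT, high risk (3/8/10 yr)": [19, 33, 38],
    "Katz SBRT, high risk (5/7 yr)": [26, 32],
}

def increments(rates):
    """Percentage-point change between consecutive reporting points."""
    return [later - earlier for earlier, later in zip(rates, rates[1:])]

for study, rates in bf_rates.items():
    print(f"{study}: rates={rates}, increments={increments(rates)}")
```

Run this way, the RCOG series shows increments of [2, 0, 0] (a flat plateau after year 15), while the Kiel series shows [9, 5] (still rising).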

For most of the therapies (HDR BT and LDR BT monotherapy, EBRT + LDR BT boost, and SBRT) the failure rates remained remarkably consistent over the years. However, for surgery and IMRT, failure rates increased markedly in later years. Most of us can’t wait 25 or more years to see if a therapeutic option remains consistent or not, and for radiation, the results would almost assuredly be irrelevant anyway.

Ralph Waldo Emerson is misquoted as saying, “Build a better mousetrap, and the world will beat a path to your door.” An important criterion for decision-making when there are only limited data is our answer to the question: Is this a better mousetrap?

  • Arguably, robotic surgery was only an improvement over open surgery, and not an entirely new therapy requiring separate evaluation. It has never been tested in a randomized comparison, and I doubt we will ever know for sure.
  • Arguably, IMRT was simply a “better mousetrap” version of the 3D-CRT technique it largely superseded and didn’t need a randomized comparison to prove its worth.
  • Was HDR BT monotherapy just an improvement over EBRT + HDR BT boost?
  • Was SBRT just an improvement over IMRT, or should we view it as a variation on HDR BT, which it radiologically resembles by design?

There are no easy answers to any of these questions. However, as a cautionary note, I should mention that PBRT was touted as more precise because of the “Bragg peak effect,” yet in practice seems to be no better in cancer control or toxicity than IMRT.

There is also the problem of separating the effect of the therapy from the effect of the learning curve of the treating physician. Outcomes are always better for patients with more skilled and more practiced physicians. The learning curve has been documented for open and robotic surgery, but less well documented for radiation therapies. Patients treated early (and perhaps less skillfully) in a trial are over-represented in the latest follow-up, and there may be very little follow-up time on the most recently (and perhaps more skillfully) treated patients.

So when do we have enough data to make a decision?

That comfort level will vary among individuals. I was comfortable with 3-year data based on choosing a theoretically “better mousetrap”, and many brave souls (thank God for them!) are comfortable with clinical trials of innovative therapies. In the end, everyone must assess for himself how long is long enough. For doctors offering competing therapies and for some insurance companies, the follow-up never seems to be long enough. I suggest that patients who are frustrated by those doctors and insurance companies challenge them to come up with concrete answers to the following questions:

  • What length of follow-up do you want to see, and why that length?
  • What length of follow-up was used to determine the standard of care?
  • Do you need to see prostate cancer-specific survival, or are you comfortable with an earlier surrogate endpoint?
  • What is the likelihood of seeing longer term results, and will there be any statistical validity to them if we get them?
  • Have outcomes reached a plateau already?
  • What evidence is there that toxicity outcomes change markedly after 2 years?
  • Will the results still be relevant if we wait for longer follow-up?
  • Is the therapy just a “better mousetrap” version of a standard of care?
  • Are my results likely to be better now that there are experienced practitioners?

Editorial note: This commentary was written for The “New” Prostate Cancer InfoLink by Allen Edel.

18 Responses

  1. There are very distinct differences between the value of long-term follow-up data to individual patients at a specific point in time (when they need to make a very personal decision about how they want to be treated) and the value of long-term follow-up data to the recommended practice of medicine (i.e., should treatment X continue to be used to manage disorder Y?).

    Allen does not say this explicitly in his article, but there are, in fact, some very real questions about the value of radical prostatectomy as a primary treatment for most men diagnosed with localized prostate cancer today. Other forms of treatment may well be offering us similar levels of effectiveness for many or even most patients who actually need treatment — with a lower risk for complications and side effects over time.

    Arguably, this is one more reason why we need data from the type of registry trial that I outlined here the other day.

    If it were to be shown categorically that other forms of management actually are just as effective and safer than radical surgery as primary treatments for localized, but clinically significant prostate cancer, this would be a major setback for many in the profession of urologic surgery. But it would hardly be the first time that there has been a major reassessment of the “best” way to treat certain types of widespread disorder in recent history. The profession of gastroenterology is alive and well despite the discovery in the early 1980s that many forms of ulcer are actually caused by a bacterium (Helicobacter) and can be effectively treated with antibiotics — but this was not an easy pill for the gastroenterology community to swallow at the time!

  2. Regarding the EBRT plus seeds combination

    Hi Allen, thanks for sharing your thoughts on the issue of follow-up and various kinds of therapy. I’m focusing on the EBRT plus seeds combo with emphasis on a different study that I found highly encouraging a few years ago. I’ll first quote some of your remarks about this combination. You wrote:

    “… the longest follow-up studies I have seen for each primary therapy treatment type are as follows: … EBRT + LDR BT boost — 25 years (Radiotherapy Clinics of Georgia [RCOG]). … We might be comfortable if outcomes seem to have reached a plateau …. In the RCOG study of LDR BT boost, the 10-, 15-, 20-, and 25-year BF rates were 25, 27, 27, and 27 percent respectively. …”

    Here is a major shortcoming of the RCOG study: it is not stratified by risk. As we have come to know, including a substantial proportion of low-risk patients will boost success figures considerably, and, arguably, artificially, as many of the low-risk patients would have done fine with no treatment. Based on earlier studies of his group of consecutive patients by Dr. Critz, with stage and PSA but not Gleason data provided, it appears that a hefty proportion of these patients were low-risk (both stage and median PSA indicated a generally low-risk group but Gleason scores were not provided in the abstracts).

    In sharp contrast, a study by Dattoli et al., published 5 years ago, reviewed a similar combination EBRT plus seeds strategy but with EBRT done first, with intermediate- and high-risk men the focus of the study, which included no low-risk men!

    Here are key sentences from the abstract:

    “Methods. 321 consecutive intermediate and high-risk disease patients were treated between 1/92 and 2/97 by one author (M. Dattoli) and stratified by NCCN guidelines. 157 had intermediate-risk; 164 had high-risk disease. All were treated using the combination EBRT/brachytherapy ± hormones. Biochemical failure was defined using PSA > 0.2 and nadir +2 at last followup. Nonfailing patients followup was median 10.5 years. Both biochemical data and original biopsy slides were independently rereviewed at an outside institution. Results. Overall actuarial freedom from biochemical progression at 16 years was 82% (89% intermediate, 74% high-risk) with failure predictors: Gleason score (P = .01) and PSA (P = .03).”

    Frankly, as the median follow-up for this upper-risk group was well beyond the 5-year point where radiation results tend to plateau, this study was highly credible to me, eclipsing even the “25 years” RCOG study. I’m not much interested in studies that mix in large proportions of low-risk men in their study populations, as such groups pose relatively little challenge for a therapy.

    In 2011-2012 I actually planned to have the Dattoli/Sorace team treat my high-risk case. However, outstanding and unexpected negative scan results (Feraheme USPIO for lymph nodes and Na18F PET/CT for bones), the availability of a well-regarded local TomoTherapy radiation team, and indications of more side effects for the IMRT/seeds combo led me to decide to have TomoTherapy, which I had in 2013.

  3. “We might be comfortable if outcomes seem to have reached a plateau.”

    Of course, with radical prostatectomy there is no plateau as we can see from the ERSPC cumulative hazard of death curves that continue to diverge for as long as there is follow-up. We expect this because there is no longer any prostate gland after RP to produce any new prostate cancers in the future and as we all should know, the risk of a prostate gland producing dangerous cancers increases very rapidly with age.

  4. Unfortunately we have no idea going in with any treatment what may occur within us years later following a treatment.

    Interestingly, as retired military I was sent a set of questions by the Department of Defense following my salvage radiation treatment. It asked many questions about what side effects I was subsequently experiencing (particularly any rectal problems) that I had no answer to that early after the radiation. If I were asked similar questions now, I would have answers to what I experienced years later. I have total incontinence now that I don’t think was caused by the radiation; rather, I believe the combination of the initial open surgical removal of my prostate gland (which included both neurovascular bundles) and the subsequent ADT medications, particularly Zytiga, which affected my bladder and urinary sphincter muscles, has resulted in the now total incontinence. And more recently, within the last couple of years (more than 20 years post-salvage radiation), I have been experiencing both constipation and the inability of my rectal muscles to sufficiently help work bowel movements through the colon. The point being, many side effects do not become apparent until several years later.

  5. Dear Michelle:

    I think we need to be very clear that the ERSPC was a study of whether screening for prostate cancer would affect risk for prostate cancer-specific mortality. It therefore had nothing to do with how the patients were treated if they were identified as having prostate cancer.

    Indeed, in some of the groups of patients deliberately enrolled in the trial (e.g., the Goteborg group from Sweden) there had been almost no use whatsoever of the PSA test to seek men at risk for prostate cancer prior to initiation of the ERSPC, and so many of the men identified as having prostate cancer in the Goteborg group (and others) were initially diagnosed with advanced forms of prostate cancer that would never have been amenable to curative treatment with radical prostatectomy in the first place.

    As a consequence, we really cannot use data from the ERSPC trial to make any sort of informative statement about the effectiveness or safety of any particular form of treatment for localized prostate cancer unless that subset of the patients and the ways in which they were treated are carefully separated out on a predefined basis.

  6. Dear Chuck:

    I think we also have to be very clear that, as we age, our bodies cease to do efficiently all sorts of things they did very efficiently even just a few years earlier.

    If I am correct, you are now 83 years of age, having been initially diagnosed in 1992 (when you were 59). As someone who has lived through the aging of several close relatives who lived into their 80s and 90s with no sign of or treatment for prostate cancer, I have to say that problems like severe incontinence and constipation are not exactly unusual at that type of age, and may have little or nothing to do with any treatment you may have received for your prostate cancer over the past 20+ years (though it is going to be almost impossible for anyone to tell).

    On the up side, of course, I hope my brain is still functioning as well as yours appears to be in another 15 years’ time!


  7. Jim,

    The purpose here was not to evaluate the validity of one study vs. another, but to discuss the issue of length of follow-up. It would have been impossible to stratify the RCOG study with 25 years of follow-up into low-, intermediate-, and high-risk groups because no such groups existed for the men in that study at baseline. D’Amico didn’t devise those strata based on PSA, stage, and Gleason score until 1998. It has subsequently been refined by NCCN, and I expect we’ll see further revision. They can’t go back and re-assign the men retrospectively either because the men with 25 or more years of follow-up were treated from 1984 to 1987, which was before the PSA test became available. (The study was published in 2012.)

    It’s pretty safe to guess that most of the men with follow-up that long presented with what we now call high-risk prostate cancer. So you see that, rather than being in “sharp contrast,” as you said, the two studies are in almost perfect agreement.

  8. Risk Group Comparison of the Critz Study Versus the Dattoli Study (follow-up to my comment of November 7, 2015 at 7:54 pm and Allen’s comment just above of November 9, 2015 at 1:28 am)

    Hi Allen, I was pretty sure of my ground since I had lived through the period of these studies and publications by Critz and Dattoli since 2000, both rather strong competitors, but I wanted to take a look at the complete Critz paper to see if I could tease out approximations of risk groups. It turns out that was unnecessary: the complete paper has nice freedom-from-recurrence graphs by risk group in Figure 2. We were both led astray as that fact was not clear in the abstract, a significant fault in my view. The Critz group actually did have Gleason scores, at least for most of the patients, including all 3,117 (of the total of 3,546) treated from 1992 to 2000, using transperineal seed implant for all instead of the earlier retropubic implant technique. Figure 2 states specific percentage results for success (freedom from recurrence) at 5, 10, and 15 years accompanying the graph curves, as well as stating the number of these transperineal patients in each risk group:

    — Low-risk: 1,466
    — Intermediate risk: 1,056
    — High risk: 353

    Here are some comparisons with the Dattoli results:

    Dattoli (from abstract): “Nonfailing patients followup was median 10.5 years [16 years total follow-up]. Both biochemical data and original biopsy slides were independently re-reviewed at an outside institution. Results. Overall actuarial freedom from biochemical progression at 16 years was 82% (89% intermediate, 74% high-risk) with failure predictors: Gleason score (P = .01) and PSA (P = .03).”

    Versus Critz at 10 years, for the more modern portion of the group (transperineal seeding): intermediate 74%, high-risk 44%.

    A notable point is that the definition of failure was different, with the Dattoli team using a standard definition for radiation therapy while the Critz team used a close approximation of the surgery failure standard (basically, exceeding a PSA of 0.2). While the Critz results would no doubt look somewhat better if the same definition as in the Dattoli study or another standard radiation definition were used (more tolerance for temporary “bounces”, etc.), the differences in success at 10 years are pretty dramatic for both the intermediate- and high-risk groups, as well as stark differences in the flatness versus continuing decline in the success curves as time goes by.

    Indeed, while the Critz results continue to fall substantially between 5 and 10 years (from 81% to 74% for the intermediate group, and from 54% to 44% for the high-risk group, both groups falling only slightly after that), the Dattoli results are robust early: virtually flat for all Gleason score groups by the 5-year point, except for the Gleason 8 group, which flattens out at the 8-year point! That is strong evidence that at least for the Dattoli patients, 10-year follow-up is ample. (It is also evidence, in my view at least, that the Dattoli approach is clearly superior.)

    There are also other studies that could be examined here, such as the study from the Sylvester, Grimm, … Merrick group with 15-year follow-up from 2007 (print version).

  9. Dear Jim (and Allen):

    This type of inter-cohort analysis is near to pointless. Both cohorts are inevitably affected by patient selection biases. On top of which, since the two clinical teams use utterly different definitions of freedom from biochemical progression, you can make no reasonable comparison anyway.

  10. Jim:

    I agree with Sitemaster that two separate studies on different patient groups are inherently incomparable, although one may confirm or call into question the findings of another. I have no idea how you can assign D’Amico risk groups without PSA data. Of course they had Gleason score, and staging (according to outmoded standards), but they didn’t have PSA — you can’t just make it up and pretend you know — that’s totally meaningless.

  11. Reply to Allen’s Comment of 11/16 2:51 pm re PSA Data in the Critz Study, Other Characteristics

    Allen, the Critz study DID have and use PSA data for all 3,117 of the transperineal implant patients. That is fully in line with the PSA test having been available for some years by the time the Critz group switched implant techniques from retropubic to transperineal, at some time during 1992. I’m looking at a chart from the NCI based on SEER data from 1975 to 2009 that was published by reporters Berkowitz and Clark in The Washington Post of February 26, 2013, 12:40 pm (must have been online). It shows a sharp favorable bend in prostate cancer 5-year survival beginning just about 1986, the year the FDA approved the PSA test for monitoring prostate cancer. That curve continued upward on its steep slope, posting superior survival rates to bladder cancer in about 1986, uterine cancer about 1987, breast cancer within a year, and skin melanoma about 1989, until the curve started a slower rise about 1990 that has continued through 2008 (99.9% survival at 5 years) and likely through the present. I’m thinking that a lot of that improvement was due to PSA screening. The FDA finally approved PSA specifically for screening in 1994, but I’ll bet plenty of docs were using it off-label for screening before that. In any event, I suspect that by 1992 it would have been irresponsible for the Critz group NOT to have used PSA for staging its patients, though 1992 was years before I became a patient and started developing personal familiarity with issues and events, and I have not researched this point.

    Moreover, the paper’s Table 2 includes multivariate analysis results for different PSA levels for the transperineal patients, with results for pretreatment PSA 0 to 4 ng/ml patients serving as the benchmark, and with hazard ratios of 1.967 for patients with PSAs from 4 to 10, 3.839 for PSAs 10 to 20, and 4.786 for PSAs greater than 20. (Table 2 also covers hazard results for stage and Gleason score, both meaningful with increasing hazard ratios for the steps in stage and grade, as well as age and race, neither of which were meaningful.)
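    To put the quoted hazard ratios in one place, here is a minimal sketch (the group labels are my own shorthand; the values are transcribed from the quote of Table 2 above). A hazard ratio of, say, 4.786 means that group failed at roughly 4.8 times the instantaneous rate of the PSA 0 to 4 ng/ml benchmark group.

```python
# Multivariate hazard ratios for pretreatment PSA from the Critz paper's
# Table 2 (as quoted above), relative to the PSA 0-4 ng/ml benchmark
# group (HR = 1.0). Group labels are my own shorthand.
hazard_ratios = {
    "PSA 0-4 (benchmark)": 1.0,
    "PSA 4-10": 1.967,
    "PSA 10-20": 3.839,
    "PSA >20": 4.786,
}

# The ordering itself is the point: the failure hazard rises
# monotonically with pretreatment PSA in this cohort.
for group, hr in hazard_ratios.items():
    print(f"{group}: {hr:.3f}x the benchmark failure rate")
```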

    The Critz group actually had PSA data even for most of its retropubic implant patients. Table 1 of the paper shows that 349 such patients had PSA baseline data, which is all but 80 of the 429 retropubic implant patients. My impression is that this paper would have been a lot stronger and less confusing if it had focused solely on the transperineal implant patients, but having the earlier patients did have the advantage of revealing the extent of late side effects, and even late recurrences, though with somewhat diminished value as these patients were treated with the obsolete technology.

    Regarding the staging, the Critz team used the TNM system, with Table 2, for instance, listing stages T1, T2a, T2b, T2c, and T3/T4. While there have been subtle changes over the years, that strikes me as quite understandable and comparable to what we use today.

    Regarding Gleason scoring, yes, there were changes in 2005, but that was well prior to the time when patients in either the Critz or Dattoli groups were scored, so the Gleason standards should have been quite comparable.

    In sum, there appears to be no problem in assigning risk groups to either the Critz or Dattoli groups.

  12. “What if this is as good as it gets?” (Quote from movie by the same name) – The Value of Comparing Studies (Critz vs. Dattoli)

    Dear Sitemaster,

    I would like to take another viewpoint from the one you expressed on November 16, 2015 at 7:54 am in which you wrote:

    Dear Jim (and Allen): This type of inter-cohort analysis is near to pointless. Both cohorts are inevitably affected by patient selection biases. On top of which, since the two clinical teams use utterly different definitions of freedom from biochemical progression, you can make no reasonable comparison anyway.”

    During the last 31 years of my career, I was involved in often sophisticated cost analysis related to technical and legal issues for Department of Defense research and development programs. Some of the time answers were easy: simple and clear-cut. You just had to go through the right motions and the answer would pop out: “Nothing to it but to do it.” On the other hand, much of the time there was substantial uncertainty, with partial, incomplete, flawed, and conflicting data. That matched the grist for much of my education, which involved the uncertainties of the social sciences rather than the certainties of, for example, physics or mathematics. While the latter fields lean heavily on techniques that yield precise answers, such as calculus (which I survived), my work mostly involved probabilistic analysis tools, particularly statistics. For instance, with such logic and analysis, we can often conclude that Result X is substantially better than Result Y, with a high degree of confidence (but not absolute confidence). We can often observe that one result is superior or more robust than another, while not knowing exactly how good the result is. What I learned from this education and career was that logic and analysis could often point to the best answer or at least reduce the range of options. As a patient with my life on the line for well over a decade, I came to appreciate how that same analysis and probabilistic thinking could aid in interpretation of research and clarify the merits of medical choices. So I come to your comment from an experience and education background that emphasizes logic and probabilistic thinking.

    You stated that “This type of inter-cohort analysis is near to pointless.” (This refers to comparing the Critz and Dattoli studies of combined seeds and EBRT, with ADT for many Dattoli patients.) If the goal is to conclusively establish that one is better than the other and generate a medical guideline, I agree, and I also agree that it is important to strive for conclusive results. However, in the absence of the kind of data that can yield conclusive results (especially well-conceived and executed, large, long-enough-term, double-blind, placebo-controlled, Phase III trials), and in the harsh reality that such trials are impractical for many of the issues we face, we can either wring our hands or do the best we can with the data we do have, accepting that studies interpreted with probabilistic thinking and analysis are “as good as it gets”.

    You make two specific criticisms of comparing these two studies: differences in patient selection biases and “utterly” different definitions of freedom from biochemical progression. I agree that patient selection biases, or more precisely put “patient selection of doctor” biases, no doubt have some influence on the results and that the degree and direction of the influence on results is uncertain. For instance, the Dattoli team believed in the value of supportive androgen deprivation therapy whereas the Critz team did not; for the Dattoli group of 321 consecutive intermediate- and high-risk patients, men with “3 or more risk features (PSA, Gleason score, Clinical Stage, elevated PAP [prostatic acid phosphatase, a focus of other Dattoli research for PSA-era risk assessment]) were encouraged to receive hormonal agents and 143 patients received hormones in neoadjuvant or adjuvant fashion, median duration 4 months (maximum 6 months).” For the Critz team, “No evaluated men received neoadjuvant or adjuvant hormone therapy.” Evident in these quotations is another difference: the Dattoli group was keen on getting the best assessment of the case they could before planning their radiation therapy. For instance, they sought expert second opinions of biopsies, and they used PAP results in a pioneering manner to refine stratification of risk.

    Regarding “different definitions of freedom from biochemical progression”, it is clear that the definitions were different, and also clear, from logic, that the standard definition used by the Dattoli team would have resulted in improved success figures had it been used by the Critz team. However, the large advantages in success achieved in both the intermediate and high-risk groups by the Dattoli team, coupled with the early flattening of the recurrence curves for the Dattoli team contrasted to the late flattening of the curves for the Critz team, all in the context of what we know about the natural history of the disease, suggest that the Dattoli patients benefited from a superior treatment compared to the Critz treatment.

    So where does all this get us? It certainly does not let us conclusively state that the Dattoli approach is superior to the Critz approach, as practiced during the periods of the studies, but it does suggest — I would say it suggests strongly — that the Dattoli approach was superior. Of course, all this is past tense; as Allen pointed out in the original article, “… An even bigger problem is what I call irrelevance. Technological and medical science advances continue at so brisk a pace that the treatment techniques 10 years from now are not likely to resemble anything currently available (another argument for active surveillance, if that’s an option).” At the time the Dattoli study was published in 2010, Drs. Dattoli and Sorace had been using powerful target imaging on a daily basis during treatment as well as target movement control technology — both substantial advances over the technologies at the time patients in their study were radiated. Moreover, we were learning more about the role, timing and amount of hormonal therapy to support radiation for certain patients around 2010.

Where the Dattoli study got me in 2010 was seeing for the first time that radiation therapy could have a high and durable success rate for a patient with high-risk case characteristics like mine! Look at it: 74% recurrence free for high-risk patients at a median follow-up of 10½ years, and the recurrence curves flattening out with virtually no further failures at around the 5-year point (or the 8-year point for Gleason 8)! Who would have thought that possible! Awesome!!! And these patients were treated with older technology; results with modern technology were highly likely to be even better!

While the Critz study was not available until 2012 (online), I am sure I would not have had such confidence from the Critz results: 44% recurrence free at 10 years, with a slight decline in subsequent years. Now 44% is pretty good if you don’t have better alternatives, especially if you had expected not to survive to 10 years post-treatment in the first place (a fairly common expectation among us high-risk guys in the past decade), but such a result is not so spiffy when there is impressive competition. Since the Critz team has rather consistently used the surgical definition of recurrence instead of one of the definitions more appropriate for radiation, wouldn’t it be nice to see their study republished using one of those standard radiation definitions?

So I’ll take the position that you can make a “reasonable” (and sound and valuable, though not conclusive) comparison of studies. Actually, I think you kind of agree; as you put it in your comment of November 7, 2015 at 12:21 pm: “There are very distinct differences between the value of long-term follow-up data to individual patients at a specific point in time (when they need to make a very personal decision about how they want to be treated) and the value of long-term follow-up data to the recommended practice of medicine (i.e., should treatment X continue to be used to manage disorder Y?).” I’ll extend that thought on follow-up to comparing studies.

    Thanks, as always, for taking the time to share your thoughts with us.

  13. Dear Jim:

    Let me be extremely clear about this. … The US Food and Drug Administration has never, ever, approved the use of any PSA test for screening of men for risk of prostate cancer.

    The precise language used by the FDA in the August 1994 expanded approval for use of the Hybritech Tandem PSA test reads as follows:

    “This device is indicated for the measurement of serum PSA in conjunction with digital rectal examination (DRE) as an aid in the detection of prostate cancer in men aged 50 years or older.”

    The words “as an aid in the detection” do not in any way imply that the PSA test could or should be used in the regular, mass, population-wide screening of men for their risk of prostate cancer. That use was based almost entirely on a PR campaign instigated by certain companies with the enthusiastic participation of urologists like Dr. William Catalona. I would further point out that it was that use of the PSA test that led Dr. Stamey (in 2004) to publish his seminal paper noting that the value of the PSA test as a diagnostic aid (as first observed in the 1980s) had been destroyed by its over-use as a widespread screening test.

  14. Dear Jim:

    (1) Please do not presume to extend my comments about one issue to apply to other issues. It is highly inaccurate and misleading to others.

    (2) If someone unbiased were to conduct a detailed meta-analysis of the available data on the long-term outcomes of carefully identified subgroups of patients treated by radiotherapy + brachytherapy with or without ADT, that might be interesting. (I emphasize “might” because I think this type of meta-analysis would probably be very hard to do.) That is very different from comparing two sets of data from two radiation oncology provider groups who have spent much of the past 20 years loudly promoting their ways of treating prostate cancer patients directly to the patient community. I have about the same level of comfort with the work of Drs Dattoli and Critz as I do with the data provided by Pat Walsh on the outcomes of his radical prostatectomy patients, most of whom we now know to have been men who could have been managed better on active surveillance. There is a vast amount of data from other centers on this topic that may well be more representative of actuality than the data presented by either Critz or Dattoli because of the way that they were recruiting and selecting patients.

    (3) You reflect your own biases by pointing out that it took you until 2010 to work out that combinations of radiation therapy and ADT could be a very good way to treat high-risk patients with localized and locally advanced prostate cancer. This had been suggested and investigated by others decades earlier, which was why the European trials of radiation therapy + ADT in node-positive disease were carried out starting in the late 1980s. Those trials showed a clear survival benefit for the patients getting ADT at 5 and 10 years.

    The validity of the outcomes of the types of evaluation you carried out professionally depends on the quality of the data that is available and used as the basis for the evaluation. I am sorry, but I often find your ability to assess the quality of the base information you work from to be flawed. And I believe this to be the case in this situation too.

  15. My Patient’s Perspective on Your Last Comment Re Combo RT + ADT

    Dear Sitemaster,

    I thought you might be interested in my patient’s perspective — very well remembered — regarding the points you raised in this paragraph:

“(3) You reflect your own biases by pointing out that it took you until 2010 to work out that combinations of radiation therapy and ADT could be a very good way to treat high-risk patients with localized and locally advanced prostate cancer. This had been suggested and investigated by others decades earlier, which was why the European trials of radiation therapy + ADT in node-positive disease were carried out starting in the late 1980s. Those trials showed a clear survival benefit for the patients getting ADT at 5 and 10 years.”

I, and probably a number of fellow patients like me with challenging cases suspected to be at least micrometastatic, were well aware of the encouraging studies you refer to. However, when you start with very high-risk prostate cancer, you (and your doctors) can’t help suspecting that the technetium bone scan and routine CT scans have missed something; those scans are known to be neither very sensitive nor very specific. Also, 1980s/1990s versions of radiation did not inspire full confidence.

So I was really waiting and hoping that better scans would come along that could target the mets and indicate whether radiation with ADT might do the job. Not only did that happen with the Feraheme USPIO lymph node scan (apparently no longer available due to safety concerns) and the Na18F PET/CT bone scan, but the whole notion of treatable oligometastatic cancer emerged — all highly encouraging to me and basically turning on the green light. I was undergoing the advanced scans in 2011 and 2012, but my radiation was delayed until 2013 by technical problems at the facility.

  16. Dear Jim:

    That’s all very well, but your logic (upon which you seem to pride yourself) is flawed. The whole point of the combination of ADT + radiation therapy back in the 1990s was that it actually seemed to result in very long-term remissions associated with a survival benefit in precisely the very high-risk patients with node-positive and potentially micrometastatic disease that you are referring to. By comparison, the number of men who have successfully been put into long-term remission based on radiotherapy of oligometastatic disease seems to be very, very small indeed.

You therefore took the enormous risk of taking relatively long-term ADT alone (albeit IADT), which could easily have induced early-onset CRPC, with no substantial evidence that better imaging techniques would actually be developed and no evidence at all that it would be possible to effectively target oligometastatic disease. Evidence that the latter leads to a survival benefit is still absent.

Please understand that I am not criticizing your choice. Every patient has the right to make his own decisions. However, your choice is not one I could suggest any other patient take based on the available data (then or now).

  17. Jim,

I can only reiterate that the men for whom there are 25 years of data could not have had baseline PSA values because the test was not available then. Men in the later group (treatment ran from 1984 to 2000) certainly did have baseline PSA values, but if we’re trying to get to bRFS by D’Amico risk level for men treated 25 years earlier, there is no way you can assume anything relevant — that is pure fantasy.

Allen, replying to your thoughts of 2:32 pm today (at 2:25 pm on the East Coast – where are you, the mid-Atlantic?), the Critz study would have been stronger if the small proportion of men without PSA data had been dropped. My hunch is that the Critz team was trying to make an impression with extra-long follow-up even if that meant mixing apples and oranges, as they did with their all-inclusive numbers.

Please review my post: I was relying on those men in the Critz study WITH risk-group characterization, the vast majority of men in the study, a group that did get extensive analysis in the paper. I do have a copy of the complete paper if you have questions about it. I too am not fond of fantasy in research.

    I think you would enjoy reviewing the complete paper on the Dattoli study, and I would be interested in your thoughts. I agree with Sitemaster that Dr. Dattoli has not been shy about promoting his work, but I’ve been impressed with what to me looks like solid scientific practice and quality in his published research. As an athlete once famously said, “It ain’t bragging if you can do it!”
